Nov 21 09:40:53 crc systemd[1]: Starting Kubernetes Kubelet... Nov 21 09:40:53 crc restorecon[4731]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Nov 21 09:40:53 
crc restorecon[4731]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Nov 21 09:40:53 crc restorecon[4731]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 21 
09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:53 crc restorecon[4731]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:53 crc 
restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 21 09:40:53 crc restorecon[4731]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 21 09:40:53 crc restorecon[4731]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 
21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:53 
crc restorecon[4731]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 09:40:53 crc restorecon[4731]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 09:40:53 crc restorecon[4731]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 09:40:53 
crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 21 09:40:53 crc restorecon[4731]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 09:40:53 crc restorecon[4731]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 
09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 09:40:54 crc 
restorecon[4731]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 
09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 
09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc 
restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 09:40:54 crc restorecon[4731]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 09:40:54 crc restorecon[4731]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 21 09:40:55 crc kubenswrapper[4972]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 21 09:40:55 crc kubenswrapper[4972]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 21 09:40:55 crc kubenswrapper[4972]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 21 09:40:55 crc kubenswrapper[4972]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 21 09:40:55 crc kubenswrapper[4972]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 21 09:40:55 crc kubenswrapper[4972]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.434779 4972 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450291 4972 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450341 4972 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450353 4972 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450360 4972 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450369 4972 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450376 4972 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450384 4972 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450392 4972 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450398 4972 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450405 4972 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450411 4972 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450418 4972 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450427 4972 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450435 4972 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450444 4972 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450452 4972 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450461 4972 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450469 4972 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450477 4972 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450484 4972 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 21 09:40:55 crc 
kubenswrapper[4972]: W1121 09:40:55.450490 4972 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450497 4972 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450506 4972 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450532 4972 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450539 4972 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450546 4972 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450553 4972 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450561 4972 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450569 4972 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450576 4972 feature_gate.go:330] unrecognized feature gate: Example Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450583 4972 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450592 4972 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450600 4972 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450607 4972 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450614 4972 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450620 4972 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450627 4972 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450634 4972 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450642 4972 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450648 4972 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450655 4972 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450662 4972 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450669 4972 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450676 4972 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450682 4972 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450689 4972 feature_gate.go:330] 
unrecognized feature gate: NetworkLiveMigration Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450696 4972 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450705 4972 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450712 4972 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450718 4972 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450727 4972 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450734 4972 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450742 4972 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450752 4972 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450759 4972 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450766 4972 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450773 4972 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450781 4972 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450787 4972 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450794 4972 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450800 4972 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450806 4972 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450813 4972 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450822 4972 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450852 4972 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450858 4972 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450865 4972 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450871 4972 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450878 4972 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450885 4972 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.450891 4972 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 21 
09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451062 4972 flags.go:64] FLAG: --address="0.0.0.0" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451083 4972 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451095 4972 flags.go:64] FLAG: --anonymous-auth="true" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451106 4972 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451115 4972 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451122 4972 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451131 4972 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451140 4972 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451148 4972 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451154 4972 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451161 4972 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451170 4972 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451176 4972 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451183 4972 flags.go:64] FLAG: --cgroup-root="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451190 4972 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451197 4972 flags.go:64] FLAG: --client-ca-file="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451204 4972 flags.go:64] FLAG: --cloud-config="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451210 4972 flags.go:64] FLAG: --cloud-provider="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451216 4972 flags.go:64] FLAG: --cluster-dns="[]" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451226 4972 flags.go:64] FLAG: --cluster-domain="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451232 4972 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451238 4972 flags.go:64] FLAG: --config-dir="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451244 4972 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451251 4972 flags.go:64] FLAG: --container-log-max-files="5" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451260 4972 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451267 4972 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451274 4972 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451280 4972 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451287 4972 flags.go:64] FLAG: --contention-profiling="false" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 
09:40:55.451293 4972 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451299 4972 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451306 4972 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451312 4972 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451321 4972 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451328 4972 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451334 4972 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451340 4972 flags.go:64] FLAG: --enable-load-reader="false" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451347 4972 flags.go:64] FLAG: --enable-server="true" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451353 4972 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451361 4972 flags.go:64] FLAG: --event-burst="100" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451370 4972 flags.go:64] FLAG: --event-qps="50" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451377 4972 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451384 4972 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451391 4972 flags.go:64] FLAG: --eviction-hard="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451400 4972 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451408 4972 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451415 4972 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451423 4972 flags.go:64] FLAG: --eviction-soft="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451430 4972 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451436 4972 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451443 4972 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451450 4972 flags.go:64] FLAG: --experimental-mounter-path="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451456 4972 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451464 4972 flags.go:64] FLAG: --fail-swap-on="true" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451471 4972 flags.go:64] FLAG: --feature-gates="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451479 4972 flags.go:64] FLAG: --file-check-frequency="20s" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451486 4972 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451493 4972 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451499 4972 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 
09:40:55.451505 4972 flags.go:64] FLAG: --healthz-port="10248" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451512 4972 flags.go:64] FLAG: --help="false" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451518 4972 flags.go:64] FLAG: --hostname-override="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451524 4972 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451531 4972 flags.go:64] FLAG: --http-check-frequency="20s" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451537 4972 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451543 4972 flags.go:64] FLAG: --image-credential-provider-config="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451549 4972 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451555 4972 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451561 4972 flags.go:64] FLAG: --image-service-endpoint="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451568 4972 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451575 4972 flags.go:64] FLAG: --kube-api-burst="100" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451582 4972 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451589 4972 flags.go:64] FLAG: --kube-api-qps="50" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451596 4972 flags.go:64] FLAG: --kube-reserved="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451603 4972 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451609 4972 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451616 4972 flags.go:64] FLAG: --kubelet-cgroups="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451621 4972 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451629 4972 flags.go:64] FLAG: --lock-file="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451635 4972 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451641 4972 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451648 4972 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451666 4972 flags.go:64] FLAG: --log-json-split-stream="false" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451673 4972 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451680 4972 flags.go:64] FLAG: --log-text-split-stream="false" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451686 4972 flags.go:64] FLAG: --logging-format="text" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451692 4972 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451700 4972 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451706 4972 flags.go:64] FLAG: --manifest-url="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451712 4972 
flags.go:64] FLAG: --manifest-url-header="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451721 4972 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451728 4972 flags.go:64] FLAG: --max-open-files="1000000" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451737 4972 flags.go:64] FLAG: --max-pods="110" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451743 4972 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451750 4972 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451756 4972 flags.go:64] FLAG: --memory-manager-policy="None" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451762 4972 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451769 4972 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451775 4972 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451782 4972 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451796 4972 flags.go:64] FLAG: --node-status-max-images="50" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451802 4972 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451809 4972 flags.go:64] FLAG: --oom-score-adj="-999" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451816 4972 flags.go:64] FLAG: --pod-cidr="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451822 4972 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451854 4972 flags.go:64] FLAG: --pod-manifest-path="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451861 4972 flags.go:64] FLAG: --pod-max-pids="-1" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451868 4972 flags.go:64] FLAG: --pods-per-core="0" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451874 4972 flags.go:64] FLAG: --port="10250" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451881 4972 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451887 4972 flags.go:64] FLAG: --provider-id="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451894 4972 flags.go:64] FLAG: --qos-reserved="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451900 4972 flags.go:64] FLAG: --read-only-port="10255" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451906 4972 flags.go:64] FLAG: --register-node="true" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451912 4972 flags.go:64] FLAG: --register-schedulable="true" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451918 4972 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451929 4972 flags.go:64] FLAG: --registry-burst="10" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451936 4972 flags.go:64] FLAG: --registry-qps="5" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451942 4972 flags.go:64] 
FLAG: --reserved-cpus="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451949 4972 flags.go:64] FLAG: --reserved-memory="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451957 4972 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451963 4972 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451969 4972 flags.go:64] FLAG: --rotate-certificates="false" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451976 4972 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451982 4972 flags.go:64] FLAG: --runonce="false" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451988 4972 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.451994 4972 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452001 4972 flags.go:64] FLAG: --seccomp-default="false" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452007 4972 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452013 4972 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452020 4972 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452026 4972 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452033 4972 flags.go:64] FLAG: --storage-driver-password="root" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452039 4972 flags.go:64] FLAG: --storage-driver-secure="false" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452047 4972 flags.go:64] FLAG: --storage-driver-table="stats" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452053 4972 flags.go:64] FLAG: --storage-driver-user="root" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452060 4972 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452068 4972 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452075 4972 flags.go:64] FLAG: --system-cgroups="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452081 4972 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452091 4972 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452097 4972 flags.go:64] FLAG: --tls-cert-file="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452104 4972 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452114 4972 flags.go:64] FLAG: --tls-min-version="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452120 4972 flags.go:64] FLAG: --tls-private-key-file="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452127 4972 flags.go:64] FLAG: --topology-manager-policy="none" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452133 4972 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452140 4972 flags.go:64] FLAG: --topology-manager-scope="container" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452146 4972 flags.go:64] 
FLAG: --v="2" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452156 4972 flags.go:64] FLAG: --version="false" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452165 4972 flags.go:64] FLAG: --vmodule="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452172 4972 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452179 4972 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452361 4972 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452369 4972 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452384 4972 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452390 4972 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452395 4972 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452401 4972 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452407 4972 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452412 4972 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452417 4972 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452422 4972 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452428 4972 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452433 4972 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452438 4972 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452443 4972 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452450 4972 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452457 4972 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452465 4972 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452472 4972 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452477 4972 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452484 4972 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452489 4972 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452496 4972 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452503 4972 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452508 4972 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452513 4972 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452520 4972 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452527 4972 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452533 4972 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452538 4972 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452543 4972 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452548 4972 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452554 4972 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452559 4972 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452564 4972 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452570 4972 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452575 4972 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452580 4972 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452587 4972 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452601 4972 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452607 4972 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452612 4972 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452618 4972 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452624 4972 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452629 4972 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452634 4972 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452639 4972 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452645 4972 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452650 4972 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452656 4972 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452661 4972 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452667 4972 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452672 4972 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452677 4972 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452683 4972 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452688 4972 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452694 4972 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452699 4972 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452706 4972 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452713 4972 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452718 4972 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452724 4972 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452730 4972 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452736 4972 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452741 4972 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452747 4972 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452752 4972 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452758 4972 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452763 4972 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452768 4972 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452774 4972 feature_gate.go:330] unrecognized feature gate: Example Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.452779 4972 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.452788 4972 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.467902 4972 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.467971 4972 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468115 4972 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468137 4972 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468146 4972 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468155 4972 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468163 4972 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468172 4972 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468180 4972 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 21 09:40:55 crc 
kubenswrapper[4972]: W1121 09:40:55.468188 4972 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468196 4972 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468204 4972 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468211 4972 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468222 4972 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468267 4972 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468278 4972 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468288 4972 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468297 4972 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468305 4972 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468313 4972 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468320 4972 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468328 4972 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468336 4972 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468344 4972 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468351 4972 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468359 4972 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468370 4972 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468381 4972 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468390 4972 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468400 4972 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468409 4972 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468419 4972 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468429 4972 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468439 4972 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468449 4972 feature_gate.go:330] unrecognized feature 
gate: NetworkSegmentation Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468461 4972 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468475 4972 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468488 4972 feature_gate.go:330] unrecognized feature gate: Example Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468497 4972 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468505 4972 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468513 4972 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468521 4972 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468528 4972 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468536 4972 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468544 4972 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468552 4972 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468559 4972 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468567 4972 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468574 4972 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468583 4972 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468590 4972 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468598 4972 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468606 4972 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468616 4972 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468626 4972 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468635 4972 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468644 4972 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468652 4972 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468661 4972 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468669 4972 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468677 4972 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468685 4972 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468692 4972 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468701 4972 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468710 4972 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468717 4972 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468725 4972 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468733 4972 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468740 4972 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468748 4972 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468756 4972 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468764 4972 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.468775 4972 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
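
The "unrecognized feature gate" warnings repeat because the kubelet parses its feature-gate configuration more than once during startup (compare the 09:40:55.450xxx, .452xxx, and .468xxx passes above); the set of gates is the same each time. A small sketch, under the same assumption that the journal text is saved to a hypothetical kubelet-journal.log, that deduplicates the warnings and counts how often each gate is reported (unrecognized_gates is an illustrative helper name):

    # Illustrative sketch: deduplicate the "unrecognized feature gate" warnings across startup passes.
    # Assumption: the journal text was saved as "kubelet-journal.log" (hypothetical file name).
    import re
    from collections import Counter

    GATE = re.compile(r"unrecognized feature gate: (\w+)")

    def unrecognized_gates(path="kubelet-journal.log"):
        # Collapse line wrapping so warnings split across dump lines are still matched.
        text = " ".join(open(path, encoding="utf-8").read().split())
        return Counter(GATE.findall(text))

    if __name__ == "__main__":
        for gate, count in sorted(unrecognized_gates().items()):
            print(f"{gate}: {count}")
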
Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.468790 4972 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469132 4972 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469148 4972 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469156 4972 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469164 4972 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469172 4972 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469183 4972 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469194 4972 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469202 4972 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469211 4972 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469220 4972 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469228 4972 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469236 4972 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469245 4972 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469255 4972 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469265 4972 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469274 4972 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469282 4972 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469291 4972 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469299 4972 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469307 4972 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469315 4972 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469323 4972 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469332 4972 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469340 4972 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469349 4972 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469357 4972 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469366 4972 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469376 4972 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469386 4972 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469395 4972 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469406 4972 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469416 4972 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469424 4972 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469432 4972 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469457 4972 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469468 4972 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469477 4972 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469485 4972 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469493 4972 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469501 4972 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469509 4972 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469518 4972 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469526 4972 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469533 4972 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469541 4972 feature_gate.go:330] unrecognized feature gate: Example Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469548 4972 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469557 4972 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469564 4972 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469572 4972 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469580 4972 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469588 4972 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469596 4972 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469603 4972 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469613 4972 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469622 4972 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469630 4972 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469639 4972 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469647 4972 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469654 4972 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469662 4972 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469669 4972 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469677 4972 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469684 4972 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469693 4972 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469700 4972 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469708 4972 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469716 4972 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469724 4972 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469731 4972 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469739 4972 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.469759 4972 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.469772 4972 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.470170 4972 server.go:940] "Client rotation is on, will bootstrap in background" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.477903 4972 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.478142 4972 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
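
The net effect of each pass is the feature_gate.go:386 summary line, which prints the resolved gates in Go map syntax: feature gates: {map[Name:bool ...]}. As a last minimal sketch under the same file-name assumption (effective_feature_gates is an illustrative helper name), the final summary can be turned into a Python dict of booleans:

    # Illustrative sketch: parse the kubelet's "feature gates: {map[...]}" summary line.
    # Assumption: the journal text was saved as "kubelet-journal.log" (hypothetical file name).
    import re

    SUMMARY = re.compile(r"feature gates: \{map\[(.*?)\]\}")
    PAIR = re.compile(r"(\w+):(true|false)")

    def effective_feature_gates(path="kubelet-journal.log"):
        # Collapse line wrapping so a summary split across dump lines still parses.
        text = " ".join(open(path, encoding="utf-8").read().split())
        matches = SUMMARY.findall(text)
        if not matches:
            return {}
        # The summary is logged more than once; the last occurrence reflects the final state.
        return {name: value == "true" for name, value in PAIR.findall(matches[-1])}

    if __name__ == "__main__":
        gates = effective_feature_gates()
        print(gates.get("ValidatingAdmissionPolicy"))  # True in the dump above
        print(gates.get("NodeSwap"))                   # False in the dump above
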
Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.481070 4972 server.go:997] "Starting client certificate rotation" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.481170 4972 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.484171 4972 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-12 10:28:47.235493921 +0000 UTC Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.484442 4972 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.531191 4972 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.533918 4972 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 21 09:40:55 crc kubenswrapper[4972]: E1121 09:40:55.538318 4972 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.564297 4972 log.go:25] "Validated CRI v1 runtime API" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.603200 4972 log.go:25] "Validated CRI v1 image API" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.604980 4972 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.612693 4972 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-21-09-36-00-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.612753 4972 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.632574 4972 manager.go:217] Machine: {Timestamp:2025-11-21 09:40:55.629340977 +0000 UTC m=+0.738483495 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:da538fee-18a0-417f-878c-3556afbb76c2 BootID:a234290f-71bd-4d0a-b5a3-5342e5c9c28a Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 
Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:84:f6:f0 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:84:f6:f0 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:2e:5d:ab Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:16:43:78 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:11:f9:d4 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:c3:3e:74 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:5e:1a:e9 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:a6:30:97:af:26:84 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:52:f5:2e:92:4c:a0 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified 
Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.632927 4972 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.633183 4972 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.633499 4972 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.633647 4972 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.633680 4972 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.635732 4972 topology_manager.go:138] "Creating topology manager with none policy" Nov 21 
09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.635757 4972 container_manager_linux.go:303] "Creating device plugin manager" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.636272 4972 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.636305 4972 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.637220 4972 state_mem.go:36] "Initialized new in-memory state store" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.637305 4972 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.641188 4972 kubelet.go:418] "Attempting to sync node with API server" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.641210 4972 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.641225 4972 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.641238 4972 kubelet.go:324] "Adding apiserver pod source" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.641251 4972 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.650171 4972 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 21 09:40:55 crc kubenswrapper[4972]: E1121 09:40:55.650260 4972 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.650519 4972 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 21 09:40:55 crc kubenswrapper[4972]: E1121 09:40:55.650625 4972 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.654212 4972 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.658946 4972 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
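The kube-apiserver-client certificate logged above expires 2026-02-24 05:52:08 but carries a rotation deadline of 2025-11-12, already in the past at boot, so the manager immediately logs "Rotating certificates" and then fails because api-int.crc.testing:6443 refuses connections this early in startup. A hedged sketch of how such a deadline could be derived, assuming the common client-go convention of rotating at a random point between 70% and 90% of the certificate's lifetime (the exact policy is not shown in this log):

```go
// Hedged sketch: jittered rotation deadline between 70% and 90% of the
// certificate's validity window. The issue time below is an assumption; only
// the expiry comes from the log.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jitter := 0.7 + 0.2*rand.Float64() // assumed [0.7, 0.9) policy
	return notBefore.Add(time.Duration(float64(total) * jitter))
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)  // expiry from the log
	fmt.Println("deadline:", rotationDeadline(notBefore, notAfter))
}
```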
Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.660921 4972 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.664031 4972 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.664063 4972 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.664075 4972 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.664083 4972 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.664097 4972 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.664106 4972 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.664114 4972 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.664128 4972 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.664138 4972 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.664148 4972 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.664160 4972 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.664168 4972 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.665892 4972 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.666503 4972 server.go:1280] "Started kubelet" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.666702 4972 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.667681 4972 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.667673 4972 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.668683 4972 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 21 09:40:55 crc systemd[1]: Started Kubernetes Kubelet. 
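Everything after "Started Kubernetes Kubelet" follows the klog line format visible above ("I1121 09:40:55.666503 4972 server.go:1280] ..."), which makes the remaining volume-reconstruction entries easy to filter by severity, source file, or message. A small Go sketch of a parser for that format, written against the shape seen in this journal rather than any official schema:

```go
// Hedged sketch: split a klog-style payload into severity, MMDD date, time,
// pid, source file:line, and message, matching the lines in this log.
package main

import (
	"fmt"
	"regexp"
)

var klogLine = regexp.MustCompile(`([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)`)

func main() {
	line := `I1121 09:40:55.666503 4972 server.go:1280] "Started kubelet"`
	if m := klogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s:%s msg=%s\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
}
```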
Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.670774 4972 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.670893 4972 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.670912 4972 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 21:02:58.842252045 +0000 UTC Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.670991 4972 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 683h22m3.171264101s for next certificate rotation Nov 21 09:40:55 crc kubenswrapper[4972]: E1121 09:40:55.671229 4972 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.685667 4972 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.685702 4972 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.686543 4972 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 21 09:40:55 crc kubenswrapper[4972]: E1121 09:40:55.686501 4972 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="200ms" Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.688469 4972 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 21 09:40:55 crc kubenswrapper[4972]: E1121 09:40:55.688744 4972 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.689428 4972 factory.go:55] Registering systemd factory Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.689467 4972 factory.go:221] Registration of the systemd container factory successfully Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.690557 4972 factory.go:153] Registering CRI-O factory Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.690624 4972 factory.go:221] Registration of the crio container factory successfully Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.691317 4972 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.691439 4972 factory.go:103] Registering Raw factory Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.691544 4972 manager.go:1196] Started watching for new ooms in manager Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.692627 4972 manager.go:319] Starting recovery of all containers Nov 21 09:40:55 crc kubenswrapper[4972]: 
I1121 09:40:55.705502 4972 server.go:460] "Adding debug handlers to kubelet server" Nov 21 09:40:55 crc kubenswrapper[4972]: E1121 09:40:55.709848 4972 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.176:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1879fc3a1ff14795 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-21 09:40:55.666468757 +0000 UTC m=+0.775611275,LastTimestamp:2025-11-21 09:40:55.666468757 +0000 UTC m=+0.775611275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717019 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717103 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717129 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717155 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717177 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717198 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717226 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717247 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 
09:40:55.717280 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717302 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717324 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717346 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717378 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717403 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717436 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717459 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717480 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717507 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717529 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717549 4972 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717574 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717596 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717617 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717641 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717671 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717695 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717742 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717784 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717807 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717851 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717875 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717898 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717927 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717947 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717975 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.717999 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718021 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718042 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718064 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718088 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718109 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718129 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718148 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718214 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718236 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718258 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718306 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718328 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718350 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718376 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718399 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718420 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718450 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718472 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718494 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718515 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718537 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718558 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718579 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718598 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718617 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718637 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718662 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718682 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718706 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718725 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718743 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718764 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718790 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718812 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718857 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718907 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718926 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718945 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718965 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.718986 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719005 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719025 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719046 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719067 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719091 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719116 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719135 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719155 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719174 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719194 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719212 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719231 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719250 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719272 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719292 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719312 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719332 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719353 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719375 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719395 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719417 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719438 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719460 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719480 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719503 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719523 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719544 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719565 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719596 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719619 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719641 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719664 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719685 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719707 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719732 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719753 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719776 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719802 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719822 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719884 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719908 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719929 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719951 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719969 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.719991 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720012 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720034 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720054 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720074 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720092 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720112 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720131 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720153 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720174 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720195 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720218 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720237 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720255 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720276 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720296 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720317 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720337 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720359 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720389 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720410 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720433 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720455 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720481 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720510 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720542 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720571 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720591 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720610 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720631 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720651 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720670 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720689 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720708 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720728 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720749 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720770 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720790 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720811 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720862 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720887 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720916 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720953 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.720972 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721004 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721025 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721050 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721072 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721097 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721124 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721143 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721162 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721190 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721212 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721235 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721257 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721274 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721295 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721310 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721331 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721353 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721368 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721389 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721407 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721427 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721448 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721463 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721480 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721495 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721510 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721524 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721543 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721560 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721576 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721592 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721610 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721626 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721645 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721661 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721677 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721693 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721707 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721721 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721741 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.721761 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.725288 4972 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 
09:40:55.725363 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.725385 4972 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.725397 4972 reconstruct.go:97] "Volume reconstruction finished" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.725406 4972 reconciler.go:26] "Reconciler: start to sync state" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.736995 4972 manager.go:324] Recovery completed Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.752544 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.755990 4972 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.756048 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.756095 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.756104 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.758093 4972 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.758145 4972 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.758174 4972 kubelet.go:2335] "Starting kubelet main sync loop" Nov 21 09:40:55 crc kubenswrapper[4972]: E1121 09:40:55.758277 4972 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 21 09:40:55 crc kubenswrapper[4972]: W1121 09:40:55.760049 4972 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 21 09:40:55 crc kubenswrapper[4972]: E1121 09:40:55.760182 4972 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.762706 4972 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.762743 4972 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.762770 4972 state_mem.go:36] "Initialized new in-memory state store" Nov 21 09:40:55 crc kubenswrapper[4972]: E1121 09:40:55.771413 4972 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.851857 4972 policy_none.go:49] "None policy: Start" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.853491 4972 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.853592 4972 state_mem.go:35] "Initializing new in-memory state store" Nov 21 09:40:55 crc kubenswrapper[4972]: E1121 09:40:55.858772 4972 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 21 09:40:55 crc kubenswrapper[4972]: E1121 09:40:55.871925 4972 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 21 09:40:55 crc kubenswrapper[4972]: E1121 09:40:55.888239 4972 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="400ms" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.927240 4972 manager.go:334] "Starting Device Plugin manager" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.927296 4972 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.927309 4972 server.go:79] "Starting device plugin registration server" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.927973 4972 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.928007 4972 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 
monitorPeriod="10s" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.930422 4972 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.930513 4972 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 21 09:40:55 crc kubenswrapper[4972]: I1121 09:40:55.930522 4972 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 21 09:40:55 crc kubenswrapper[4972]: E1121 09:40:55.941527 4972 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.028822 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.030201 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.030239 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.030249 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.030277 4972 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 09:40:56 crc kubenswrapper[4972]: E1121 09:40:56.031040 4972 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.176:6443: connect: connection refused" node="crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.059281 4972 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.059432 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.061548 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.061617 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.061642 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.061818 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.062442 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.062535 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.063125 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.063176 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.063196 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.063429 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.063764 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.063819 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.064164 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.064200 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.064217 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.064745 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.064801 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.064859 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.065107 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.065286 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.065341 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.065108 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.065947 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.065977 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.066299 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.066327 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.066336 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.066535 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.066591 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.066611 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.066921 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.067025 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.067069 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.068466 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.068511 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.068526 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.068705 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.068775 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.068804 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.069149 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.069221 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.070461 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.070502 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.070531 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.129679 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.129765 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.129809 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.129893 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.129936 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.129972 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.130191 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.130335 4972 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.130562 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.130682 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.130940 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.131069 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.131175 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.131270 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.131371 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.232337 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.234928 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 
09:40:56.235074 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.235174 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.235276 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.235375 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.235446 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.235510 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.235605 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.235698 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.235776 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.235876 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") 
pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.235948 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.236010 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.236023 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.236165 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.236030 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.236371 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.236244 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.236490 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.236608 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.236663 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.236713 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.236748 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.236801 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.235898 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.235810 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.236887 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.236914 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.236965 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.236430 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.242160 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:56 
crc kubenswrapper[4972]: I1121 09:40:56.242260 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.242282 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.242330 4972 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 09:40:56 crc kubenswrapper[4972]: E1121 09:40:56.243257 4972 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.176:6443: connect: connection refused" node="crc" Nov 21 09:40:56 crc kubenswrapper[4972]: E1121 09:40:56.289350 4972 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="800ms" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.398934 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.409145 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.434809 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.458109 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.465225 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 09:40:56 crc kubenswrapper[4972]: W1121 09:40:56.493703 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-b2b4bd63c8bf362a6a32073bfaceee17c5315ff2fb492256a522791add151965 WatchSource:0}: Error finding container b2b4bd63c8bf362a6a32073bfaceee17c5315ff2fb492256a522791add151965: Status 404 returned error can't find the container with id b2b4bd63c8bf362a6a32073bfaceee17c5315ff2fb492256a522791add151965 Nov 21 09:40:56 crc kubenswrapper[4972]: W1121 09:40:56.498282 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-a8f8fccb0e756fe8267619482711b27c6beeba71f97f2e20decd68ff41091b97 WatchSource:0}: Error finding container a8f8fccb0e756fe8267619482711b27c6beeba71f97f2e20decd68ff41091b97: Status 404 returned error can't find the container with id a8f8fccb0e756fe8267619482711b27c6beeba71f97f2e20decd68ff41091b97 Nov 21 09:40:56 crc kubenswrapper[4972]: W1121 09:40:56.503070 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-1c435db7e7c879fddf972378a4f10ee84ca2a5e49d21e2e9237b55a00faad6e6 WatchSource:0}: Error finding container 1c435db7e7c879fddf972378a4f10ee84ca2a5e49d21e2e9237b55a00faad6e6: Status 404 returned error can't find the container with id 1c435db7e7c879fddf972378a4f10ee84ca2a5e49d21e2e9237b55a00faad6e6 Nov 21 09:40:56 crc kubenswrapper[4972]: W1121 09:40:56.504760 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-7013629d5611996c43c61a4dbcc8d7646619f790ae4abb6da011bb3e7b6d5505 WatchSource:0}: Error finding container 7013629d5611996c43c61a4dbcc8d7646619f790ae4abb6da011bb3e7b6d5505: Status 404 returned error can't find the container with id 7013629d5611996c43c61a4dbcc8d7646619f790ae4abb6da011bb3e7b6d5505 Nov 21 09:40:56 crc kubenswrapper[4972]: W1121 09:40:56.510726 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-27a94ff66876315805c68674af857e0b9a59078c47cb649989a4bd6fb4d75ffa WatchSource:0}: Error finding container 27a94ff66876315805c68674af857e0b9a59078c47cb649989a4bd6fb4d75ffa: Status 404 returned error can't find the container with id 27a94ff66876315805c68674af857e0b9a59078c47cb649989a4bd6fb4d75ffa Nov 21 09:40:56 crc kubenswrapper[4972]: W1121 09:40:56.614397 4972 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 21 09:40:56 crc kubenswrapper[4972]: E1121 09:40:56.614496 4972 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.644339 
4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.646336 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.646401 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.646417 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.646450 4972 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 09:40:56 crc kubenswrapper[4972]: E1121 09:40:56.647034 4972 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.176:6443: connect: connection refused" node="crc" Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.668962 4972 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.767165 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"27a94ff66876315805c68674af857e0b9a59078c47cb649989a4bd6fb4d75ffa"} Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.768807 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7013629d5611996c43c61a4dbcc8d7646619f790ae4abb6da011bb3e7b6d5505"} Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.770100 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1c435db7e7c879fddf972378a4f10ee84ca2a5e49d21e2e9237b55a00faad6e6"} Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.771402 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a8f8fccb0e756fe8267619482711b27c6beeba71f97f2e20decd68ff41091b97"} Nov 21 09:40:56 crc kubenswrapper[4972]: I1121 09:40:56.773080 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b2b4bd63c8bf362a6a32073bfaceee17c5315ff2fb492256a522791add151965"} Nov 21 09:40:56 crc kubenswrapper[4972]: W1121 09:40:56.938219 4972 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 21 09:40:56 crc kubenswrapper[4972]: E1121 09:40:56.938392 4972 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 21 09:40:57 crc kubenswrapper[4972]: E1121 09:40:57.090489 4972 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="1.6s" Nov 21 09:40:57 crc kubenswrapper[4972]: W1121 09:40:57.116171 4972 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 21 09:40:57 crc kubenswrapper[4972]: E1121 09:40:57.116249 4972 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 21 09:40:57 crc kubenswrapper[4972]: W1121 09:40:57.144543 4972 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 21 09:40:57 crc kubenswrapper[4972]: E1121 09:40:57.144755 4972 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.448071 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.451707 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.451776 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.451789 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.451877 4972 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 09:40:57 crc kubenswrapper[4972]: E1121 09:40:57.452783 4972 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.176:6443: connect: connection refused" node="crc" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.565945 4972 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 21 09:40:57 crc kubenswrapper[4972]: E1121 09:40:57.567570 4972 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.668914 4972 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.777978 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4d67fa8619a34f3e88f04abaa17689ad94fb4fc75a92b0a1ab3190b2a8e0919a"} Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.778080 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.779090 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.779126 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.779138 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.780456 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.780012 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63"} Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.786431 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.786508 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.786527 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.787604 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654"} Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.787707 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.789274 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67"} Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.789739 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.789774 4972 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.789786 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.790796 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d"} Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.790954 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.791707 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.791737 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:57 crc kubenswrapper[4972]: I1121 09:40:57.791749 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.669567 4972 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 21 09:40:58 crc kubenswrapper[4972]: E1121 09:40:58.691647 4972 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="3.2s" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.799472 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985"} Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.799575 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4"} Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.799602 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a"} Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.800086 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.802376 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.802441 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.802453 4972 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.802765 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d"} Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.802623 4972 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d" exitCode=0 Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.802959 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.804071 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.804128 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.804151 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.806403 4972 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4d67fa8619a34f3e88f04abaa17689ad94fb4fc75a92b0a1ab3190b2a8e0919a" exitCode=0 Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.806502 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4d67fa8619a34f3e88f04abaa17689ad94fb4fc75a92b0a1ab3190b2a8e0919a"} Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.806599 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.807442 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.808607 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.808675 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.808704 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.809419 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.809471 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.809489 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.811710 4972 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63" exitCode=0 Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.811824 4972 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63"} Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.811964 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.813441 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.813495 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.813525 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.814576 4972 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654" exitCode=0 Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.814632 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654"} Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.814805 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.817007 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.817067 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:58 crc kubenswrapper[4972]: I1121 09:40:58.817085 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.053491 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.057335 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.057366 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.057378 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.057404 4972 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 09:40:59 crc kubenswrapper[4972]: E1121 09:40:59.058005 4972 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.176:6443: connect: connection refused" node="crc" Nov 21 09:40:59 crc kubenswrapper[4972]: W1121 09:40:59.289460 4972 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: 
connection refused Nov 21 09:40:59 crc kubenswrapper[4972]: E1121 09:40:59.289650 4972 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 21 09:40:59 crc kubenswrapper[4972]: W1121 09:40:59.413601 4972 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 21 09:40:59 crc kubenswrapper[4972]: E1121 09:40:59.413716 4972 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.669335 4972 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.819622 4972 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c3e5812ec15e435a3ccd913b0b1ee41d3cc6f099ae03f6a76f0b6e7d8cc95d81" exitCode=0 Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.819724 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.819720 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c3e5812ec15e435a3ccd913b0b1ee41d3cc6f099ae03f6a76f0b6e7d8cc95d81"} Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.820579 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.820606 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.820617 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.823122 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"82e8457b59ef21238dc544bad22e50462262f2a8dccb77f227e3b71c0e42a00e"} Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.823239 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.824390 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.824424 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:59 crc kubenswrapper[4972]: 
I1121 09:40:59.824437 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.828390 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7a9915311e4e9cae479e53ac0cf1243560d110dcfe1abc366ce37281d49e294b"} Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.828416 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.828442 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7b1e3dabfde6cfa4ac43cf07090dd319e83e402676216af847178710306ab8b3"} Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.828471 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2569e939b8254ed8f0c255ea14a65d7c4cfa4491a1d00722abd9e4412e29334c"} Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.829615 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.829684 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.829708 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.833762 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e"} Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.833809 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a"} Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.833864 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.833863 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e"} Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.834994 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.835044 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.835068 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:40:59 crc kubenswrapper[4972]: E1121 09:40:59.886962 4972 event.go:368] "Unable to write event (may retry after sleeping)" 
err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.176:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1879fc3a1ff14795 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-21 09:40:55.666468757 +0000 UTC m=+0.775611275,LastTimestamp:2025-11-21 09:40:55.666468757 +0000 UTC m=+0.775611275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 21 09:40:59 crc kubenswrapper[4972]: I1121 09:40:59.977253 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 09:41:00 crc kubenswrapper[4972]: W1121 09:41:00.255603 4972 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 21 09:41:00 crc kubenswrapper[4972]: E1121 09:41:00.255697 4972 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 21 09:41:00 crc kubenswrapper[4972]: W1121 09:41:00.330524 4972 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 21 09:41:00 crc kubenswrapper[4972]: E1121 09:41:00.330631 4972 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.332762 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.669294 4972 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.792642 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.842638 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7140dc438780f0e2a0cb522ddb2272b6ec1962c5638ae8ce0fb772950a1df165"} Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.842718 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424"} Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.842966 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.844410 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.844457 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.844472 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.847958 4972 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f12b7996df5f04a9de4d951f6a8dc8dbaabf215ad2456e856d4761c9d61c25b7" exitCode=0 Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.848091 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.848108 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.848114 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f12b7996df5f04a9de4d951f6a8dc8dbaabf215ad2456e856d4761c9d61c25b7"} Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.848149 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.848218 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.848331 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.849312 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.849349 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.849359 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.849418 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.849444 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.849457 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.849545 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.849569 
4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.849580 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.850112 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.850127 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:00 crc kubenswrapper[4972]: I1121 09:41:00.850138 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:01 crc kubenswrapper[4972]: I1121 09:41:01.668946 4972 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 21 09:41:01 crc kubenswrapper[4972]: I1121 09:41:01.857101 4972 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 21 09:41:01 crc kubenswrapper[4972]: E1121 09:41:01.859765 4972 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 21 09:41:01 crc kubenswrapper[4972]: I1121 09:41:01.860927 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"81ab0939450523918fded50f64c32d1410a9385908f262694fa0aa00c715b0fc"} Nov 21 09:41:01 crc kubenswrapper[4972]: I1121 09:41:01.861016 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"439323d8d0e6c0054fc262ac12bef2a332fba1eb57e55caad80b4d8fa6ed2dc9"} Nov 21 09:41:01 crc kubenswrapper[4972]: I1121 09:41:01.861047 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7b26ecc9ce73ced93f1c9d2fd1e465a3a8bbd7d847ae4f046bf5fc3880dacffd"} Nov 21 09:41:01 crc kubenswrapper[4972]: I1121 09:41:01.861093 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:01 crc kubenswrapper[4972]: I1121 09:41:01.861137 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:01 crc kubenswrapper[4972]: I1121 09:41:01.861155 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 09:41:01 crc kubenswrapper[4972]: I1121 09:41:01.861104 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:01 crc kubenswrapper[4972]: I1121 09:41:01.863381 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:01 crc kubenswrapper[4972]: I1121 09:41:01.863413 4972 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:01 crc kubenswrapper[4972]: I1121 09:41:01.863457 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:01 crc kubenswrapper[4972]: I1121 09:41:01.863477 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:01 crc kubenswrapper[4972]: I1121 09:41:01.863427 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:01 crc kubenswrapper[4972]: I1121 09:41:01.863544 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:01 crc kubenswrapper[4972]: I1121 09:41:01.863564 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:01 crc kubenswrapper[4972]: I1121 09:41:01.863588 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:01 crc kubenswrapper[4972]: I1121 09:41:01.863615 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:01 crc kubenswrapper[4972]: E1121 09:41:01.893599 4972 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="6.4s" Nov 21 09:41:02 crc kubenswrapper[4972]: I1121 09:41:02.259120 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:02 crc kubenswrapper[4972]: I1121 09:41:02.261569 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:02 crc kubenswrapper[4972]: I1121 09:41:02.261644 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:02 crc kubenswrapper[4972]: I1121 09:41:02.261661 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:02 crc kubenswrapper[4972]: I1121 09:41:02.261714 4972 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 09:41:02 crc kubenswrapper[4972]: E1121 09:41:02.262642 4972 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.176:6443: connect: connection refused" node="crc" Nov 21 09:41:02 crc kubenswrapper[4972]: I1121 09:41:02.870045 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"20d7841bd7d64c8ff139dc30847018b4fade86c23d114a7abd8ad26ccdf5a03c"} Nov 21 09:41:02 crc kubenswrapper[4972]: I1121 09:41:02.870109 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"975f8c634034002c5f208c94a752aa6137b8a6b2dac933fab40a4fc068c564d8"} Nov 21 09:41:02 crc kubenswrapper[4972]: I1121 09:41:02.870218 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:02 crc kubenswrapper[4972]: I1121 09:41:02.871772 4972 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:02 crc kubenswrapper[4972]: I1121 09:41:02.871874 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:02 crc kubenswrapper[4972]: I1121 09:41:02.871910 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:02 crc kubenswrapper[4972]: I1121 09:41:02.872043 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 21 09:41:02 crc kubenswrapper[4972]: I1121 09:41:02.873866 4972 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7140dc438780f0e2a0cb522ddb2272b6ec1962c5638ae8ce0fb772950a1df165" exitCode=255 Nov 21 09:41:02 crc kubenswrapper[4972]: I1121 09:41:02.873944 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:02 crc kubenswrapper[4972]: I1121 09:41:02.873932 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7140dc438780f0e2a0cb522ddb2272b6ec1962c5638ae8ce0fb772950a1df165"} Nov 21 09:41:02 crc kubenswrapper[4972]: I1121 09:41:02.874921 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:02 crc kubenswrapper[4972]: I1121 09:41:02.874951 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:02 crc kubenswrapper[4972]: I1121 09:41:02.874962 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:02 crc kubenswrapper[4972]: I1121 09:41:02.875541 4972 scope.go:117] "RemoveContainer" containerID="7140dc438780f0e2a0cb522ddb2272b6ec1962c5638ae8ce0fb772950a1df165" Nov 21 09:41:03 crc kubenswrapper[4972]: I1121 09:41:03.884946 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 21 09:41:03 crc kubenswrapper[4972]: I1121 09:41:03.889106 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:03 crc kubenswrapper[4972]: I1121 09:41:03.889095 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d"} Nov 21 09:41:03 crc kubenswrapper[4972]: I1121 09:41:03.889393 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:03 crc kubenswrapper[4972]: I1121 09:41:03.890985 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:03 crc kubenswrapper[4972]: I1121 09:41:03.891053 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:03 crc kubenswrapper[4972]: I1121 09:41:03.891078 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:03 crc kubenswrapper[4972]: I1121 
09:41:03.892254 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:03 crc kubenswrapper[4972]: I1121 09:41:03.892334 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:03 crc kubenswrapper[4972]: I1121 09:41:03.892356 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:04 crc kubenswrapper[4972]: I1121 09:41:04.608262 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 09:41:04 crc kubenswrapper[4972]: I1121 09:41:04.891375 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:04 crc kubenswrapper[4972]: I1121 09:41:04.891464 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 09:41:04 crc kubenswrapper[4972]: I1121 09:41:04.892901 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:04 crc kubenswrapper[4972]: I1121 09:41:04.892944 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:04 crc kubenswrapper[4972]: I1121 09:41:04.892967 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:04 crc kubenswrapper[4972]: I1121 09:41:04.995249 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 21 09:41:04 crc kubenswrapper[4972]: I1121 09:41:04.995540 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:04 crc kubenswrapper[4972]: I1121 09:41:04.997154 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:04 crc kubenswrapper[4972]: I1121 09:41:04.997217 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:04 crc kubenswrapper[4972]: I1121 09:41:04.997230 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:05 crc kubenswrapper[4972]: I1121 09:41:05.276680 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 09:41:05 crc kubenswrapper[4972]: I1121 09:41:05.276948 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:05 crc kubenswrapper[4972]: I1121 09:41:05.278378 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:05 crc kubenswrapper[4972]: I1121 09:41:05.278423 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:05 crc kubenswrapper[4972]: I1121 09:41:05.278438 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:05 crc kubenswrapper[4972]: I1121 09:41:05.707000 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 09:41:05 crc kubenswrapper[4972]: I1121 09:41:05.894192 4972 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Nov 21 09:41:05 crc kubenswrapper[4972]: I1121 09:41:05.898397 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:05 crc kubenswrapper[4972]: I1121 09:41:05.898438 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:05 crc kubenswrapper[4972]: I1121 09:41:05.898449 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:05 crc kubenswrapper[4972]: E1121 09:41:05.941772 4972 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 21 09:41:06 crc kubenswrapper[4972]: I1121 09:41:06.097424 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 09:41:06 crc kubenswrapper[4972]: I1121 09:41:06.097639 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:06 crc kubenswrapper[4972]: I1121 09:41:06.098925 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:06 crc kubenswrapper[4972]: I1121 09:41:06.098965 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:06 crc kubenswrapper[4972]: I1121 09:41:06.098976 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:06 crc kubenswrapper[4972]: I1121 09:41:06.897650 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:06 crc kubenswrapper[4972]: I1121 09:41:06.898971 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:06 crc kubenswrapper[4972]: I1121 09:41:06.899031 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:06 crc kubenswrapper[4972]: I1121 09:41:06.899041 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:08 crc kubenswrapper[4972]: I1121 09:41:08.277302 4972 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 21 09:41:08 crc kubenswrapper[4972]: I1121 09:41:08.277428 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 09:41:08 crc kubenswrapper[4972]: I1121 09:41:08.663513 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:08 crc kubenswrapper[4972]: I1121 09:41:08.665020 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:08 crc kubenswrapper[4972]: I1121 09:41:08.665079 4972 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:08 crc kubenswrapper[4972]: I1121 09:41:08.665099 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:08 crc kubenswrapper[4972]: I1121 09:41:08.665138 4972 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 09:41:09 crc kubenswrapper[4972]: I1121 09:41:09.940563 4972 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 21 09:41:11 crc kubenswrapper[4972]: I1121 09:41:11.458847 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 21 09:41:11 crc kubenswrapper[4972]: I1121 09:41:11.459136 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:11 crc kubenswrapper[4972]: I1121 09:41:11.460315 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:11 crc kubenswrapper[4972]: I1121 09:41:11.460372 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:11 crc kubenswrapper[4972]: I1121 09:41:11.460385 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:11 crc kubenswrapper[4972]: I1121 09:41:11.493502 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 21 09:41:11 crc kubenswrapper[4972]: I1121 09:41:11.910527 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:11 crc kubenswrapper[4972]: I1121 09:41:11.911451 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:11 crc kubenswrapper[4972]: I1121 09:41:11.911481 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:11 crc kubenswrapper[4972]: I1121 09:41:11.911493 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:11 crc kubenswrapper[4972]: I1121 09:41:11.923778 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 21 09:41:12 crc kubenswrapper[4972]: I1121 09:41:12.344211 4972 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 21 09:41:12 crc kubenswrapper[4972]: I1121 09:41:12.344318 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 21 09:41:12 crc kubenswrapper[4972]: I1121 09:41:12.351513 4972 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 21 09:41:12 crc kubenswrapper[4972]: I1121 09:41:12.351588 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 21 09:41:12 crc kubenswrapper[4972]: I1121 09:41:12.913046 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:12 crc kubenswrapper[4972]: I1121 09:41:12.913805 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:12 crc kubenswrapper[4972]: I1121 09:41:12.913898 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:12 crc kubenswrapper[4972]: I1121 09:41:12.913918 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:15 crc kubenswrapper[4972]: I1121 09:41:15.712741 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 09:41:15 crc kubenswrapper[4972]: I1121 09:41:15.712952 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:15 crc kubenswrapper[4972]: I1121 09:41:15.713382 4972 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 21 09:41:15 crc kubenswrapper[4972]: I1121 09:41:15.713426 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 21 09:41:15 crc kubenswrapper[4972]: I1121 09:41:15.714211 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:15 crc kubenswrapper[4972]: I1121 09:41:15.714237 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:15 crc kubenswrapper[4972]: I1121 09:41:15.714249 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:15 crc kubenswrapper[4972]: I1121 09:41:15.718479 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 09:41:15 crc kubenswrapper[4972]: I1121 09:41:15.794553 4972 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 21 09:41:15 crc kubenswrapper[4972]: I1121 09:41:15.794626 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 21 09:41:15 crc kubenswrapper[4972]: I1121 09:41:15.919882 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:15 crc kubenswrapper[4972]: I1121 09:41:15.920291 4972 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 21 09:41:15 crc kubenswrapper[4972]: I1121 09:41:15.920350 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 21 09:41:15 crc kubenswrapper[4972]: I1121 09:41:15.920657 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:15 crc kubenswrapper[4972]: I1121 09:41:15.920689 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:15 crc kubenswrapper[4972]: I1121 09:41:15.920699 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:15 crc kubenswrapper[4972]: E1121 09:41:15.942708 4972 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 21 09:41:16 crc kubenswrapper[4972]: I1121 09:41:16.567036 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 09:41:16 crc kubenswrapper[4972]: I1121 09:41:16.567205 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:16 crc kubenswrapper[4972]: I1121 09:41:16.568210 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:16 crc kubenswrapper[4972]: I1121 09:41:16.568239 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:16 crc kubenswrapper[4972]: I1121 09:41:16.568247 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.335570 4972 trace.go:236] Trace[103326079]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Nov-2025 09:41:05.763) (total time: 11572ms): Nov 21 09:41:17 crc kubenswrapper[4972]: Trace[103326079]: ---"Objects listed" error: 11572ms (09:41:17.335) Nov 21 09:41:17 crc kubenswrapper[4972]: Trace[103326079]: [11.572320575s] [11.572320575s] END Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.335601 4972 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.336615 4972 trace.go:236] Trace[1215280020]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Nov-2025 09:41:03.412) (total time: 
13924ms): Nov 21 09:41:17 crc kubenswrapper[4972]: Trace[1215280020]: ---"Objects listed" error: 13924ms (09:41:17.336) Nov 21 09:41:17 crc kubenswrapper[4972]: Trace[1215280020]: [13.924150254s] [13.924150254s] END Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.336643 4972 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 21 09:41:17 crc kubenswrapper[4972]: E1121 09:41:17.336968 4972 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.338035 4972 trace.go:236] Trace[1063549711]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Nov-2025 09:41:04.480) (total time: 12857ms): Nov 21 09:41:17 crc kubenswrapper[4972]: Trace[1063549711]: ---"Objects listed" error: 12857ms (09:41:17.337) Nov 21 09:41:17 crc kubenswrapper[4972]: Trace[1063549711]: [12.857250803s] [12.857250803s] END Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.338052 4972 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.338291 4972 trace.go:236] Trace[45486037]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Nov-2025 09:41:04.670) (total time: 12668ms): Nov 21 09:41:17 crc kubenswrapper[4972]: Trace[45486037]: ---"Objects listed" error: 12667ms (09:41:17.338) Nov 21 09:41:17 crc kubenswrapper[4972]: Trace[45486037]: [12.668086986s] [12.668086986s] END Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.338326 4972 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.367504 4972 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.421784 4972 csr.go:261] certificate signing request csr-bpmr7 is approved, waiting to be issued Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.429581 4972 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.435035 4972 csr.go:257] certificate signing request csr-bpmr7 is issued Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.474569 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.481102 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.541151 4972 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58328->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.541215 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get 
\"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58328->192.168.126.11:17697: read: connection reset by peer" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.652713 4972 apiserver.go:52] "Watching apiserver" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.655327 4972 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.655627 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.656298 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:17 crc kubenswrapper[4972]: E1121 09:41:17.656373 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.656412 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.656447 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 09:41:17 crc kubenswrapper[4972]: E1121 09:41:17.656453 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.656508 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.656541 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.656894 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:17 crc kubenswrapper[4972]: E1121 09:41:17.656943 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.658565 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.659310 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-grwbs"] Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.659615 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-grwbs" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.660159 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.660159 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.661288 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.661339 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.661569 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.662098 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.662857 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.663095 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.666442 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.674974 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.678621 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.685315 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.687679 4972 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.699397 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.710068 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.719427 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.727472 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.731638 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.731863 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.731983 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.732077 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.731988 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.732173 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.732249 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.732340 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.732425 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.732503 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.732563 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.732578 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.732726 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.732844 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.732962 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733062 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733160 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733259 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733365 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733467 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.732559 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" 
(UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733582 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733658 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733684 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.732759 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733706 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.732844 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.732955 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.732969 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733152 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733166 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733763 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733383 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733502 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733580 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733593 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733676 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733730 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733912 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733936 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733952 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733968 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733984 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733978 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734000 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734019 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734036 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734053 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734401 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734423 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734440 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.733982 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734013 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734208 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734338 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734443 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734460 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734520 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734510 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734538 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734542 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734574 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734614 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734640 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734637 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734664 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734692 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734719 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734727 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734733 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734746 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734741 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734766 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734797 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734849 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734856 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734876 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734904 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734933 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734960 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734986 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735011 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735035 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735061 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735096 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735121 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735144 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735170 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735248 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735274 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735303 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735329 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735353 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735381 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735405 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735431 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735458 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735482 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735510 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735535 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735559 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734895 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735615 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734931 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.734976 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735026 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735085 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735086 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735171 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735306 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735480 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735594 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735697 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735700 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735881 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735644 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735917 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735935 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735952 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735954 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: 
"22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735967 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.735984 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736007 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736036 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736059 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736078 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736095 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736114 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736134 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736156 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736182 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736204 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736225 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736245 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736264 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736286 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736312 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736334 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736357 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736378 4972 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736404 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736424 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736441 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736457 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736472 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736488 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736522 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736538 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736560 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 
09:41:17.736576 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736598 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736622 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736639 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736662 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736689 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736709 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736729 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736753 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736781 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 09:41:17 
crc kubenswrapper[4972]: I1121 09:41:17.736799 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736901 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.736818 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.737730 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.737757 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.737777 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.737794 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.737810 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.737844 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.737864 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") 
pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.737884 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.737912 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.737933 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738012 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738049 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738147 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738178 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738189 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738205 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738229 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738247 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738270 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738293 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738321 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738341 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738360 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738384 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738405 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738423 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738443 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738464 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738481 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738498 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738519 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738539 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738556 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738574 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738667 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738689 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738707 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738726 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738744 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738766 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738784 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738802 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738823 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738861 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738882 4972 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738906 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738926 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738945 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738963 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738986 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739007 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739028 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739052 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739071 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739092 4972 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739113 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739134 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739153 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739171 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739190 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739209 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739231 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739251 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739271 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 
09:41:17.739290 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739319 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739340 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739359 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739377 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739395 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739420 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739440 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739459 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739475 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 
21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739492 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739509 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739527 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739547 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739567 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739586 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739605 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739661 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739720 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttvmc\" (UniqueName: \"kubernetes.io/projected/6baa0cbc-fe21-4bda-8e20-505496c26832-kube-api-access-ttvmc\") pod \"node-resolver-grwbs\" (UID: \"6baa0cbc-fe21-4bda-8e20-505496c26832\") " pod="openshift-dns/node-resolver-grwbs" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739748 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: 
\"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739769 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6baa0cbc-fe21-4bda-8e20-505496c26832-hosts-file\") pod \"node-resolver-grwbs\" (UID: \"6baa0cbc-fe21-4bda-8e20-505496c26832\") " pod="openshift-dns/node-resolver-grwbs" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739793 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739814 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739866 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739887 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739910 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739935 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739967 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739991 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740011 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740035 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740386 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740534 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740653 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740672 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740689 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740705 4972 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740719 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 
09:41:17.740733 4972 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740747 4972 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740760 4972 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740775 4972 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740790 4972 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740806 4972 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740822 4972 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740854 4972 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740867 4972 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740880 4972 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740895 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740910 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740924 4972 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740939 4972 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740955 4972 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740969 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740984 4972 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740998 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741012 4972 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741026 4972 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741040 4972 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741053 4972 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741159 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741175 4972 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741188 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741202 4972 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741216 4972 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741236 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741250 4972 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741263 4972 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741276 4972 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741287 4972 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741300 4972 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741313 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741327 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741342 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741355 4972 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741384 4972 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741397 4972 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741411 4972 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" 
(UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741424 4972 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741444 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741459 4972 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.742457 4972 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.743650 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.743703 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.752157 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738245 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738490 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738631 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738666 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738822 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.755962 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.738929 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739061 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739141 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739663 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739735 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739854 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.739954 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.740184 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741448 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.741522 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.742250 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.742437 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.742579 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.757993 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.758226 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: E1121 09:41:17.758306 4972 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.758329 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.758749 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.758778 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.758861 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.742712 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.742725 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.742886 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.743241 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.743873 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.744050 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.744772 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.745056 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.745253 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.745943 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.746679 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.746698 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). 
InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.746707 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.747080 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.747533 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.747817 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.748260 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.748281 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.748334 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.748486 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.748595 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.749339 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.749383 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.749606 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.749642 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.749693 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: E1121 09:41:17.749715 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-21 09:41:18.249698458 +0000 UTC m=+23.358840956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:41:17 crc kubenswrapper[4972]: E1121 09:41:17.759140 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:18.259122559 +0000 UTC m=+23.368265057 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.749977 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.750027 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.750054 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.750449 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.750820 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.751081 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.751411 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.751462 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.751680 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.751886 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.751891 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.752162 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.752932 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.753135 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.753415 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.753614 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.753617 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.753827 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.759328 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.759136 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.754037 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.754054 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.754314 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.754337 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.754430 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.754635 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.754708 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.754815 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.754961 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). 
InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.755332 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.755358 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.755449 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.755458 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.755881 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.756377 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.756442 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.756461 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.756780 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.756933 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.757001 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.757160 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.757503 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.760600 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.761141 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.761173 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.761493 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.761591 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.762529 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.763551 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.763888 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.763932 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.764843 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.765341 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.765663 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: E1121 09:41:17.766319 4972 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 09:41:17 crc kubenswrapper[4972]: E1121 09:41:17.766382 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:18.26636433 +0000 UTC m=+23.375506828 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.768187 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.768254 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.768294 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.768403 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.768553 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.768621 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.768724 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.768780 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.769054 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.769281 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.769284 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). 
InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.769677 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.772008 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.772199 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.772480 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.772526 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.774240 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.777076 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.778637 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.779492 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.779611 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.780305 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.783225 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.783441 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.783654 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.784052 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.785180 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.785204 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: E1121 09:41:17.785441 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 09:41:17 crc kubenswrapper[4972]: E1121 09:41:17.785461 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 09:41:17 crc kubenswrapper[4972]: E1121 09:41:17.785474 4972 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:17 crc kubenswrapper[4972]: E1121 09:41:17.785529 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:18.28551044 +0000 UTC m=+23.394652938 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:17 crc kubenswrapper[4972]: E1121 09:41:17.785535 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 09:41:17 crc kubenswrapper[4972]: E1121 09:41:17.785556 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.785553 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.785722 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:17 crc kubenswrapper[4972]: E1121 09:41:17.785569 4972 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.786966 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.787140 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: E1121 09:41:17.787251 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:18.287216497 +0000 UTC m=+23.396358995 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.787242 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.787265 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.787290 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.787568 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.788200 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.788328 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.788957 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.789793 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.790542 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.791369 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.791451 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.793281 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.793869 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.796543 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.797243 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.798085 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.798211 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.798350 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.799000 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.800543 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.804445 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.804512 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.805287 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.806116 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.808624 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.809265 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.810456 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.812089 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.812345 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.814204 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.815154 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.816960 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.818703 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.819960 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.820984 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.821592 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.822189 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.822815 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.824752 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.825303 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 
09:41:17.826127 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.827274 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.827742 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.829106 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.829910 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.831066 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.831717 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.832268 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.833321 4972 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.833440 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.833657 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.834169 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.835224 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.836319 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.836726 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.838346 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.839357 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.839885 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.840897 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.841552 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.842032 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.842779 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.842820 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.842870 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttvmc\" (UniqueName: \"kubernetes.io/projected/6baa0cbc-fe21-4bda-8e20-505496c26832-kube-api-access-ttvmc\") pod \"node-resolver-grwbs\" (UID: \"6baa0cbc-fe21-4bda-8e20-505496c26832\") " pod="openshift-dns/node-resolver-grwbs" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.842892 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6baa0cbc-fe21-4bda-8e20-505496c26832-hosts-file\") pod \"node-resolver-grwbs\" (UID: \"6baa0cbc-fe21-4bda-8e20-505496c26832\") " pod="openshift-dns/node-resolver-grwbs" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843015 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843062 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843174 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843316 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6baa0cbc-fe21-4bda-8e20-505496c26832-hosts-file\") pod \"node-resolver-grwbs\" (UID: \"6baa0cbc-fe21-4bda-8e20-505496c26832\") " pod="openshift-dns/node-resolver-grwbs" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843376 4972 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843403 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843419 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843430 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843441 4972 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843452 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node 
\"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843462 4972 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843472 4972 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843480 4972 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843489 4972 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843498 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843508 4972 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843527 4972 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843537 4972 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843546 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843554 4972 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843564 4972 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843576 4972 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843586 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843596 4972 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843605 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843614 4972 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843624 4972 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843633 4972 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843643 4972 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843653 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843662 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843672 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843682 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843691 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843701 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843710 4972 reconciler_common.go:293] "Volume detached for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843720 4972 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843729 4972 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843738 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843747 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843756 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843766 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843775 4972 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843784 4972 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843794 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843805 4972 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843930 4972 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843943 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843955 4972 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843966 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843974 4972 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843983 4972 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.843992 4972 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844003 4972 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844012 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844020 4972 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844028 4972 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844036 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844044 4972 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844052 4972 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844060 4972 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844068 4972 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844076 4972 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844084 4972 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844092 4972 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844101 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844104 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844109 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844128 4972 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844152 4972 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844161 4972 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844170 4972 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844177 4972 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844185 4972 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844193 4972 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844226 4972 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844235 4972 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844243 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844315 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844354 4972 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844368 4972 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844383 4972 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844396 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844406 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844417 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844428 4972 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844439 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844450 4972 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844461 4972 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844472 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844484 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844495 4972 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844506 4972 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844517 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844529 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844712 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844752 4972 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844772 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844786 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844797 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844808 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844819 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844848 4972 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844859 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844877 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844889 4972 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844901 4972 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844915 4972 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844932 4972 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844946 4972 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844960 4972 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844973 4972 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844986 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.844997 4972 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845008 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845019 4972 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845028 4972 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845039 4972 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845049 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845059 4972 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845068 4972 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845078 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845091 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845101 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845114 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845125 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845136 4972 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845145 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845156 4972 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845168 4972 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845179 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845196 4972 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845207 4972 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845220 4972 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845233 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845251 4972 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845262 4972 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845274 4972 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845284 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845295 4972 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845306 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845319 4972 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845332 4972 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845345 4972 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845357 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845369 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845379 4972 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845390 4972 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845400 4972 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845451 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845461 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845471 4972 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845488 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.845547 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.846104 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.847050 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.854432 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.854932 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.855778 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.856266 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.856933 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.857936 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.858160 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.858511 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.862355 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttvmc\" (UniqueName: \"kubernetes.io/projected/6baa0cbc-fe21-4bda-8e20-505496c26832-kube-api-access-ttvmc\") pod \"node-resolver-grwbs\" (UID: \"6baa0cbc-fe21-4bda-8e20-505496c26832\") " pod="openshift-dns/node-resolver-grwbs" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.926229 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.926745 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.928265 4972 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d" exitCode=255 Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.928349 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d"} Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.928403 4972 scope.go:117] "RemoveContainer" containerID="7140dc438780f0e2a0cb522ddb2272b6ec1962c5638ae8ce0fb772950a1df165" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.937486 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.949800 4972 scope.go:117] "RemoveContainer" containerID="b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d" Nov 21 09:41:17 crc kubenswrapper[4972]: E1121 09:41:17.950073 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.950401 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 21 09:41:17 crc kubenswrapper[4972]: E1121 09:41:17.950878 4972 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.956213 4972 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.971234 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.974233 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.984269 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 09:41:17 crc kubenswrapper[4972]: W1121 09:41:17.985757 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-7d3e48fe4261e952d10fdea2a009bbd5dd4aa4a498fb2b59aabe58a4be18bec6 WatchSource:0}: Error finding container 7d3e48fe4261e952d10fdea2a009bbd5dd4aa4a498fb2b59aabe58a4be18bec6: Status 404 returned error can't find the container with id 7d3e48fe4261e952d10fdea2a009bbd5dd4aa4a498fb2b59aabe58a4be18bec6 Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.986328 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:17 crc kubenswrapper[4972]: I1121 09:41:17.994991 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.001307 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-grwbs" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.006751 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.023231 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.033805 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: W1121 09:41:18.035511 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6baa0cbc_fe21_4bda_8e20_505496c26832.slice/crio-513ecdcb438175a46af546533dde05d8a17a20ef4638e5edd881c4f62d9d0d27 WatchSource:0}: Error finding container 513ecdcb438175a46af546533dde05d8a17a20ef4638e5edd881c4f62d9d0d27: Status 404 returned error can't find the container with id 513ecdcb438175a46af546533dde05d8a17a20ef4638e5edd881c4f62d9d0d27 Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.047570 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-bgtmb"] Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.048187 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.048348 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-z4gd8"] Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.058636 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-9l6cj"] Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.059139 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.059285 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bxwhb"] Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.059556 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.062135 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.062544 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.063096 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.063127 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.063602 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.063902 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.066482 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.066975 4972 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.067668 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.067678 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.067933 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.068118 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.068767 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.072867 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.076509 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.077236 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.077411 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.077616 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.077988 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.077564 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.078436 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.082187 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.134072 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149178 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-system-cni-dir\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149217 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-multus-socket-dir-parent\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149236 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-host-run-netns\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149253 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-cnibin\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149265 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-os-release\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149279 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-host-run-k8s-cni-cncf-io\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149293 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-host-var-lib-cni-bin\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149308 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c159725e-4c82-4474-96d9-211f7d8db47f-ovn-node-metrics-cert\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149325 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4d5971d3-55cc-43d2-a604-149eeb23f1e2-cni-binary-copy\") pod \"multus-additional-cni-plugins-z4gd8\" (UID: \"4d5971d3-55cc-43d2-a604-149eeb23f1e2\") " pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149340 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ff4929f7-ed2f-4332-af3c-31b2333bda3d-cni-binary-copy\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149353 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-cni-bin\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149383 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-kubelet\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149405 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149440 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-systemd-units\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149548 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-slash\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149575 4972 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-var-lib-openvswitch\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149601 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg8k7\" (UniqueName: \"kubernetes.io/projected/c159725e-4c82-4474-96d9-211f7d8db47f-kube-api-access-zg8k7\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149623 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-host-var-lib-cni-multus\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149658 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-etc-kubernetes\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149686 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-host-var-lib-kubelet\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149708 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c159725e-4c82-4474-96d9-211f7d8db47f-env-overrides\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149756 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-host-run-multus-certs\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149780 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-run-ovn-kubernetes\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149802 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4d5971d3-55cc-43d2-a604-149eeb23f1e2-os-release\") pod \"multus-additional-cni-plugins-z4gd8\" (UID: \"4d5971d3-55cc-43d2-a604-149eeb23f1e2\") " 
pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149826 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-etc-openvswitch\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149860 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-node-log\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149878 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6tgz\" (UniqueName: \"kubernetes.io/projected/ff4929f7-ed2f-4332-af3c-31b2333bda3d-kube-api-access-m6tgz\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149893 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ec41c003-c1ce-4c2f-8eed-62ff2974cd8a-rootfs\") pod \"machine-config-daemon-9l6cj\" (UID: \"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\") " pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149909 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-multus-conf-dir\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149924 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-run-openvswitch\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149941 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c159725e-4c82-4474-96d9-211f7d8db47f-ovnkube-config\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149958 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ec41c003-c1ce-4c2f-8eed-62ff2974cd8a-proxy-tls\") pod \"machine-config-daemon-9l6cj\" (UID: \"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\") " pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149974 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-multus-cni-dir\") 
pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.149989 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k727p\" (UniqueName: \"kubernetes.io/projected/4d5971d3-55cc-43d2-a604-149eeb23f1e2-kube-api-access-k727p\") pod \"multus-additional-cni-plugins-z4gd8\" (UID: \"4d5971d3-55cc-43d2-a604-149eeb23f1e2\") " pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.150008 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec41c003-c1ce-4c2f-8eed-62ff2974cd8a-mcd-auth-proxy-config\") pod \"machine-config-daemon-9l6cj\" (UID: \"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\") " pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.150026 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-run-ovn\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.150468 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-log-socket\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.150512 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-cni-netd\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.150549 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4d5971d3-55cc-43d2-a604-149eeb23f1e2-cnibin\") pod \"multus-additional-cni-plugins-z4gd8\" (UID: \"4d5971d3-55cc-43d2-a604-149eeb23f1e2\") " pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.150576 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4d5971d3-55cc-43d2-a604-149eeb23f1e2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z4gd8\" (UID: \"4d5971d3-55cc-43d2-a604-149eeb23f1e2\") " pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.150599 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-run-systemd\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.150633 4972 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ztjb\" (UniqueName: \"kubernetes.io/projected/ec41c003-c1ce-4c2f-8eed-62ff2974cd8a-kube-api-access-6ztjb\") pod \"machine-config-daemon-9l6cj\" (UID: \"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\") " pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.150648 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c159725e-4c82-4474-96d9-211f7d8db47f-ovnkube-script-lib\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.150661 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4d5971d3-55cc-43d2-a604-149eeb23f1e2-system-cni-dir\") pod \"multus-additional-cni-plugins-z4gd8\" (UID: \"4d5971d3-55cc-43d2-a604-149eeb23f1e2\") " pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.150675 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4d5971d3-55cc-43d2-a604-149eeb23f1e2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z4gd8\" (UID: \"4d5971d3-55cc-43d2-a604-149eeb23f1e2\") " pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.150712 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ff4929f7-ed2f-4332-af3c-31b2333bda3d-multus-daemon-config\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.150745 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-hostroot\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.150759 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-run-netns\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.152942 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.179885 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.214207 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252088 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252314 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c159725e-4c82-4474-96d9-211f7d8db47f-env-overrides\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: E1121 09:41:18.252388 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:41:19.252359096 +0000 UTC m=+24.361501594 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252486 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-host-var-lib-kubelet\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252525 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-host-run-multus-certs\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252546 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-run-ovn-kubernetes\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252565 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4d5971d3-55cc-43d2-a604-149eeb23f1e2-os-release\") pod \"multus-additional-cni-plugins-z4gd8\" (UID: \"4d5971d3-55cc-43d2-a604-149eeb23f1e2\") " pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252603 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6tgz\" (UniqueName: \"kubernetes.io/projected/ff4929f7-ed2f-4332-af3c-31b2333bda3d-kube-api-access-m6tgz\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252620 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-etc-openvswitch\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252635 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-node-log\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252662 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-multus-conf-dir\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: 
I1121 09:41:18.252677 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-run-openvswitch\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252687 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-host-var-lib-kubelet\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252735 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ec41c003-c1ce-4c2f-8eed-62ff2974cd8a-rootfs\") pod \"machine-config-daemon-9l6cj\" (UID: \"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\") " pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252699 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ec41c003-c1ce-4c2f-8eed-62ff2974cd8a-rootfs\") pod \"machine-config-daemon-9l6cj\" (UID: \"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\") " pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252769 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-host-run-multus-certs\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252780 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c159725e-4c82-4474-96d9-211f7d8db47f-ovnkube-config\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252793 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-run-ovn-kubernetes\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252802 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ec41c003-c1ce-4c2f-8eed-62ff2974cd8a-proxy-tls\") pod \"machine-config-daemon-9l6cj\" (UID: \"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\") " pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252860 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k727p\" (UniqueName: \"kubernetes.io/projected/4d5971d3-55cc-43d2-a604-149eeb23f1e2-kube-api-access-k727p\") pod \"multus-additional-cni-plugins-z4gd8\" (UID: \"4d5971d3-55cc-43d2-a604-149eeb23f1e2\") " pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252881 4972 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec41c003-c1ce-4c2f-8eed-62ff2974cd8a-mcd-auth-proxy-config\") pod \"machine-config-daemon-9l6cj\" (UID: \"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\") " pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252906 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-multus-cni-dir\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252925 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-cni-netd\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252939 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4d5971d3-55cc-43d2-a604-149eeb23f1e2-cnibin\") pod \"multus-additional-cni-plugins-z4gd8\" (UID: \"4d5971d3-55cc-43d2-a604-149eeb23f1e2\") " pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252954 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4d5971d3-55cc-43d2-a604-149eeb23f1e2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z4gd8\" (UID: \"4d5971d3-55cc-43d2-a604-149eeb23f1e2\") " pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252973 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-run-systemd\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.252987 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-run-ovn\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253003 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-log-socket\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253023 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ztjb\" (UniqueName: \"kubernetes.io/projected/ec41c003-c1ce-4c2f-8eed-62ff2974cd8a-kube-api-access-6ztjb\") pod \"machine-config-daemon-9l6cj\" (UID: \"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\") " pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253039 4972 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4d5971d3-55cc-43d2-a604-149eeb23f1e2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z4gd8\" (UID: \"4d5971d3-55cc-43d2-a604-149eeb23f1e2\") " pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253056 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ff4929f7-ed2f-4332-af3c-31b2333bda3d-multus-daemon-config\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253073 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c159725e-4c82-4474-96d9-211f7d8db47f-ovnkube-script-lib\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253089 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4d5971d3-55cc-43d2-a604-149eeb23f1e2-system-cni-dir\") pod \"multus-additional-cni-plugins-z4gd8\" (UID: \"4d5971d3-55cc-43d2-a604-149eeb23f1e2\") " pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253107 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-hostroot\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253166 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-run-netns\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253179 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4d5971d3-55cc-43d2-a604-149eeb23f1e2-os-release\") pod \"multus-additional-cni-plugins-z4gd8\" (UID: \"4d5971d3-55cc-43d2-a604-149eeb23f1e2\") " pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253203 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-host-run-netns\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253183 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-host-run-netns\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253227 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/c159725e-4c82-4474-96d9-211f7d8db47f-env-overrides\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253272 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-system-cni-dir\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253236 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-system-cni-dir\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253337 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-multus-socket-dir-parent\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253363 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-host-run-k8s-cni-cncf-io\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253380 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-host-var-lib-cni-bin\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253415 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c159725e-4c82-4474-96d9-211f7d8db47f-ovn-node-metrics-cert\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253435 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-cnibin\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253450 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-os-release\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253468 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4d5971d3-55cc-43d2-a604-149eeb23f1e2-cni-binary-copy\") pod \"multus-additional-cni-plugins-z4gd8\" (UID: \"4d5971d3-55cc-43d2-a604-149eeb23f1e2\") " 
pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253511 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ff4929f7-ed2f-4332-af3c-31b2333bda3d-cni-binary-copy\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253529 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-cni-bin\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253544 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-etc-openvswitch\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253571 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-kubelet\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253596 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253585 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-run-ovn\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253620 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-var-lib-openvswitch\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253659 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg8k7\" (UniqueName: \"kubernetes.io/projected/c159725e-4c82-4474-96d9-211f7d8db47f-kube-api-access-zg8k7\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253675 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-host-var-lib-cni-multus\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc 
kubenswrapper[4972]: I1121 09:41:18.253691 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-etc-kubernetes\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253727 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-systemd-units\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253744 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-slash\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253823 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-slash\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253230 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-node-log\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253899 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c159725e-4c82-4474-96d9-211f7d8db47f-ovnkube-config\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.253972 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-log-socket\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.254090 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-multus-socket-dir-parent\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.254122 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-host-run-k8s-cni-cncf-io\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.254196 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-host-var-lib-cni-bin\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.254457 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.254609 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4d5971d3-55cc-43d2-a604-149eeb23f1e2-system-cni-dir\") pod \"multus-additional-cni-plugins-z4gd8\" (UID: \"4d5971d3-55cc-43d2-a604-149eeb23f1e2\") " pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.254899 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-hostroot\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.254958 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-run-netns\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.254990 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-cni-netd\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.254991 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-multus-conf-dir\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.255084 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-run-openvswitch\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.255246 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4d5971d3-55cc-43d2-a604-149eeb23f1e2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z4gd8\" (UID: \"4d5971d3-55cc-43d2-a604-149eeb23f1e2\") " pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.255312 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4d5971d3-55cc-43d2-a604-149eeb23f1e2-cnibin\") pod \"multus-additional-cni-plugins-z4gd8\" (UID: \"4d5971d3-55cc-43d2-a604-149eeb23f1e2\") " pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc 
kubenswrapper[4972]: I1121 09:41:18.255343 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-kubelet\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.255378 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-cnibin\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.255427 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-os-release\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.255489 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-multus-cni-dir\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.255669 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec41c003-c1ce-4c2f-8eed-62ff2974cd8a-mcd-auth-proxy-config\") pod \"machine-config-daemon-9l6cj\" (UID: \"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\") " pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.255913 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4d5971d3-55cc-43d2-a604-149eeb23f1e2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z4gd8\" (UID: \"4d5971d3-55cc-43d2-a604-149eeb23f1e2\") " pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.255941 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.255988 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-var-lib-openvswitch\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.255998 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-etc-kubernetes\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.256027 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-systemd-units\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.256040 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ff4929f7-ed2f-4332-af3c-31b2333bda3d-host-var-lib-cni-multus\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.256079 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-cni-bin\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.256110 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-run-systemd\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.256156 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c159725e-4c82-4474-96d9-211f7d8db47f-ovnkube-script-lib\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.256528 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ff4929f7-ed2f-4332-af3c-31b2333bda3d-multus-daemon-config\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.256672 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ff4929f7-ed2f-4332-af3c-31b2333bda3d-cni-binary-copy\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.262185 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4d5971d3-55cc-43d2-a604-149eeb23f1e2-cni-binary-copy\") pod \"multus-additional-cni-plugins-z4gd8\" (UID: \"4d5971d3-55cc-43d2-a604-149eeb23f1e2\") " pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.263428 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ec41c003-c1ce-4c2f-8eed-62ff2974cd8a-proxy-tls\") pod \"machine-config-daemon-9l6cj\" (UID: \"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\") " pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.263476 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c159725e-4c82-4474-96d9-211f7d8db47f-ovn-node-metrics-cert\") pod \"ovnkube-node-bxwhb\" (UID: 
\"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.281279 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6tgz\" (UniqueName: \"kubernetes.io/projected/ff4929f7-ed2f-4332-af3c-31b2333bda3d-kube-api-access-m6tgz\") pod \"multus-bgtmb\" (UID: \"ff4929f7-ed2f-4332-af3c-31b2333bda3d\") " pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.281487 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k727p\" (UniqueName: \"kubernetes.io/projected/4d5971d3-55cc-43d2-a604-149eeb23f1e2-kube-api-access-k727p\") pod \"multus-additional-cni-plugins-z4gd8\" (UID: \"4d5971d3-55cc-43d2-a604-149eeb23f1e2\") " pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.285492 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ztjb\" (UniqueName: \"kubernetes.io/projected/ec41c003-c1ce-4c2f-8eed-62ff2974cd8a-kube-api-access-6ztjb\") pod \"machine-config-daemon-9l6cj\" (UID: \"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\") " pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.285958 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg8k7\" (UniqueName: \"kubernetes.io/projected/c159725e-4c82-4474-96d9-211f7d8db47f-kube-api-access-zg8k7\") pod \"ovnkube-node-bxwhb\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.287201 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.303336 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.319034 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7140dc438780f0e2a0cb522ddb2272b6ec1962c5638ae8ce0fb772950a1df165\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:02Z\\\",\\\"message\\\":\\\"W1121 09:41:00.928246 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1121 09:41:00.928490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763718060 cert, and key in /tmp/serving-cert-2450278299/serving-signer.crt, /tmp/serving-cert-2450278299/serving-signer.key\\\\nI1121 09:41:01.636802 1 observer_polling.go:159] Starting file observer\\\\nW1121 09:41:01.645955 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 09:41:01.646164 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 09:41:01.650912 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2450278299/tls.crt::/tmp/serving-cert-2450278299/tls.key\\\\\\\"\\\\nF1121 09:41:01.948324 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 
09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.333559 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.355033 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.355075 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.355097 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.355231 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:18 crc kubenswrapper[4972]: E1121 09:41:18.355190 4972 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 09:41:18 crc kubenswrapper[4972]: E1121 09:41:18.355317 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:19.355300526 +0000 UTC m=+24.464443024 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 09:41:18 crc kubenswrapper[4972]: E1121 09:41:18.355243 4972 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 09:41:18 crc kubenswrapper[4972]: E1121 09:41:18.355333 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 09:41:18 crc kubenswrapper[4972]: E1121 09:41:18.355347 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 09:41:18 crc kubenswrapper[4972]: E1121 09:41:18.355358 4972 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:18 crc kubenswrapper[4972]: E1121 09:41:18.355372 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 09:41:18 crc kubenswrapper[4972]: E1121 09:41:18.355349 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:19.355343217 +0000 UTC m=+24.464485715 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 09:41:18 crc kubenswrapper[4972]: E1121 09:41:18.355414 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 09:41:18 crc kubenswrapper[4972]: E1121 09:41:18.355424 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:19.355411149 +0000 UTC m=+24.464553647 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:18 crc kubenswrapper[4972]: E1121 09:41:18.355430 4972 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:18 crc kubenswrapper[4972]: E1121 09:41:18.355509 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:19.355487381 +0000 UTC m=+24.464629929 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.372287 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.389664 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.398649 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bgtmb" Nov 21 09:41:18 crc kubenswrapper[4972]: W1121 09:41:18.408041 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff4929f7_ed2f_4332_af3c_31b2333bda3d.slice/crio-6089e00406451fdf062033ad840d539f19113c67f8763592e8d12b03259df395 WatchSource:0}: Error finding container 6089e00406451fdf062033ad840d539f19113c67f8763592e8d12b03259df395: Status 404 returned error can't find the container with id 6089e00406451fdf062033ad840d539f19113c67f8763592e8d12b03259df395 Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.408940 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7140dc438780f0e2a0cb522ddb2272b6ec1962c5638ae8ce0fb772950a1df165\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:02Z\\\",\\\"message\\\":\\\"W1121 09:41:00.928246 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1121 09:41:00.928490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763718060 cert, and key in /tmp/serving-cert-2450278299/serving-signer.crt, /tmp/serving-cert-2450278299/serving-signer.key\\\\nI1121 09:41:01.636802 1 observer_polling.go:159] Starting file observer\\\\nW1121 09:41:01.645955 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 09:41:01.646164 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 09:41:01.650912 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2450278299/tls.crt::/tmp/serving-cert-2450278299/tls.key\\\\\\\"\\\\nF1121 09:41:01.948324 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.409273 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.423003 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.425920 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" Nov 21 09:41:18 crc kubenswrapper[4972]: W1121 09:41:18.435545 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec41c003_c1ce_4c2f_8eed_62ff2974cd8a.slice/crio-5700d75a83e8aa514807b1c4878f47e369116508aef279a06f2c2f1f728b1a75 WatchSource:0}: Error finding container 5700d75a83e8aa514807b1c4878f47e369116508aef279a06f2c2f1f728b1a75: Status 404 returned error can't find the container with id 5700d75a83e8aa514807b1c4878f47e369116508aef279a06f2c2f1f728b1a75 Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.436740 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.436780 4972 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-21 09:36:17 +0000 UTC, rotation deadline is 2026-08-28 06:23:14.168537278 +0000 UTC Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.436822 4972 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6716h41m55.731717145s for next certificate rotation Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.442539 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.451351 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.469393 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"in
itContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.479954 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.489469 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.504256 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.514811 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.526095 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.536127 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: W1121 09:41:18.546119 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d5971d3_55cc_43d2_a604_149eeb23f1e2.slice/crio-d9b8a0f5227aaf98114e39944498a130b053ebc27128d4a1916578f50658b23d WatchSource:0}: Error finding container d9b8a0f5227aaf98114e39944498a130b053ebc27128d4a1916578f50658b23d: Status 404 returned error can't find the container with id d9b8a0f5227aaf98114e39944498a130b053ebc27128d4a1916578f50658b23d Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.551275 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: W1121 09:41:18.562663 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc159725e_4c82_4474_96d9_211f7d8db47f.slice/crio-71839c1a071543e2eebc4b541e695c9c12c5c08ecd82d0aa48cbe9ac34b02581 WatchSource:0}: Error finding container 71839c1a071543e2eebc4b541e695c9c12c5c08ecd82d0aa48cbe9ac34b02581: Status 404 returned error can't find the container with id 71839c1a071543e2eebc4b541e695c9c12c5c08ecd82d0aa48cbe9ac34b02581 Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.565621 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.932322 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" event={"ID":"4d5971d3-55cc-43d2-a604-149eeb23f1e2","Type":"ContainerStarted","Data":"d9b8a0f5227aaf98114e39944498a130b053ebc27128d4a1916578f50658b23d"} Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.933241 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"5700d75a83e8aa514807b1c4878f47e369116508aef279a06f2c2f1f728b1a75"} Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.935941 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1"} Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.935970 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27"} Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.935982 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7d3e48fe4261e952d10fdea2a009bbd5dd4aa4a498fb2b59aabe58a4be18bec6"} Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.937557 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"42b7958cfadce0c052850f1a0d15f0bbfd7f4a76959128cf4a498ac3cef0dccc"} Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.940112 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77"} Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.940133 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"13d27cb704ad20c5a3aa45ba1f42407ebdbfa23bd30361e1011004e3a850ff8d"} Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.944099 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.945845 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.947733 4972 scope.go:117] "RemoveContainer" containerID="b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d" Nov 21 09:41:18 crc kubenswrapper[4972]: E1121 09:41:18.947904 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.948124 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" 
event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerStarted","Data":"71839c1a071543e2eebc4b541e695c9c12c5c08ecd82d0aa48cbe9ac34b02581"} Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.949789 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bgtmb" event={"ID":"ff4929f7-ed2f-4332-af3c-31b2333bda3d","Type":"ContainerStarted","Data":"a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc"} Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.949819 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bgtmb" event={"ID":"ff4929f7-ed2f-4332-af3c-31b2333bda3d","Type":"ContainerStarted","Data":"6089e00406451fdf062033ad840d539f19113c67f8763592e8d12b03259df395"} Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.951478 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-grwbs" event={"ID":"6baa0cbc-fe21-4bda-8e20-505496c26832","Type":"ContainerStarted","Data":"16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787"} Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.951562 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-grwbs" event={"ID":"6baa0cbc-fe21-4bda-8e20-505496c26832","Type":"ContainerStarted","Data":"513ecdcb438175a46af546533dde05d8a17a20ef4638e5edd881c4f62d9d0d27"} Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.958220 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.967699 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.979741 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7140dc438780f0e2a0cb522ddb2272b6ec1962c5638ae8ce0fb772950a1df165\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:02Z\\\",\\\"message\\\":\\\"W1121 09:41:00.928246 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1121 09:41:00.928490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763718060 cert, and key in /tmp/serving-cert-2450278299/serving-signer.crt, /tmp/serving-cert-2450278299/serving-signer.key\\\\nI1121 09:41:01.636802 1 observer_polling.go:159] Starting file observer\\\\nW1121 09:41:01.645955 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 09:41:01.646164 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 09:41:01.650912 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2450278299/tls.crt::/tmp/serving-cert-2450278299/tls.key\\\\\\\"\\\\nF1121 09:41:01.948324 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 
09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.987898 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:18 crc kubenswrapper[4972]: I1121 09:41:18.997410 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.005262 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.019784 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.028331 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.040588 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.051795 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.061904 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.076384 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.084540 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.092487 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.102077 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.110375 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.146441 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.183265 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.222027 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.264196 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.264649 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:41:19 crc kubenswrapper[4972]: E1121 09:41:19.264858 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:41:21.264816319 +0000 UTC m=+26.373958967 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.301793 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.343148 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.366441 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.366508 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.366549 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.366591 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:19 crc kubenswrapper[4972]: E1121 09:41:19.366668 4972 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 09:41:19 crc kubenswrapper[4972]: E1121 09:41:19.366689 4972 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 09:41:19 crc kubenswrapper[4972]: E1121 09:41:19.366758 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:21.366735681 +0000 UTC m=+26.475878259 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 09:41:19 crc kubenswrapper[4972]: E1121 09:41:19.366780 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:21.366770482 +0000 UTC m=+26.475913080 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 09:41:19 crc kubenswrapper[4972]: E1121 09:41:19.366794 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 09:41:19 crc kubenswrapper[4972]: E1121 09:41:19.366823 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 09:41:19 crc kubenswrapper[4972]: E1121 09:41:19.366872 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 09:41:19 crc kubenswrapper[4972]: E1121 09:41:19.366895 4972 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:19 crc kubenswrapper[4972]: E1121 09:41:19.366915 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 09:41:19 crc kubenswrapper[4972]: E1121 09:41:19.366933 4972 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:19 crc kubenswrapper[4972]: E1121 09:41:19.366982 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:21.366961167 +0000 UTC m=+26.476103735 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:19 crc kubenswrapper[4972]: E1121 09:41:19.367012 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:21.366998048 +0000 UTC m=+26.476140656 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.381965 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.422875 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.474081 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\
\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\
",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.758930 4972 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.759056 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:19 crc kubenswrapper[4972]: E1121 09:41:19.759244 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.759508 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:19 crc kubenswrapper[4972]: E1121 09:41:19.759627 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:19 crc kubenswrapper[4972]: E1121 09:41:19.759732 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.763157 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.955232 4972 generic.go:334] "Generic (PLEG): container finished" podID="c159725e-4c82-4474-96d9-211f7d8db47f" containerID="93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903" exitCode=0 Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.955370 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerDied","Data":"93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903"} Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.957427 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" event={"ID":"4d5971d3-55cc-43d2-a604-149eeb23f1e2","Type":"ContainerStarted","Data":"1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5"} Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.959545 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b"} Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.968390 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.981947 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:19 crc kubenswrapper[4972]: I1121 09:41:19.994786 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.012628 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.024351 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.036048 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.048756 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.064197 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.075880 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.086392 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.097032 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.109876 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.121638 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.132513 4972 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.168023 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.176167 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.184447 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.194367 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.222805 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 
09:41:20.239983 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-h79hr"] Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.240392 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-h79hr" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.259073 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.274212 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.295148 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.314783 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.334968 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.377988 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jt9l\" (UniqueName: \"kubernetes.io/projected/455db960-eb74-4f4e-b297-b06c4d32009a-kube-api-access-2jt9l\") pod \"node-ca-h79hr\" (UID: \"455db960-eb74-4f4e-b297-b06c4d32009a\") " pod="openshift-image-registry/node-ca-h79hr" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.378024 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/455db960-eb74-4f4e-b297-b06c4d32009a-host\") pod \"node-ca-h79hr\" (UID: \"455db960-eb74-4f4e-b297-b06c4d32009a\") " pod="openshift-image-registry/node-ca-h79hr" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.378052 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/455db960-eb74-4f4e-b297-b06c4d32009a-serviceca\") pod \"node-ca-h79hr\" (UID: \"455db960-eb74-4f4e-b297-b06c4d32009a\") " pod="openshift-image-registry/node-ca-h79hr" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.381501 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.423985 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.461787 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.479096 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/455db960-eb74-4f4e-b297-b06c4d32009a-serviceca\") pod \"node-ca-h79hr\" (UID: \"455db960-eb74-4f4e-b297-b06c4d32009a\") " pod="openshift-image-registry/node-ca-h79hr" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.479196 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jt9l\" (UniqueName: 
\"kubernetes.io/projected/455db960-eb74-4f4e-b297-b06c4d32009a-kube-api-access-2jt9l\") pod \"node-ca-h79hr\" (UID: \"455db960-eb74-4f4e-b297-b06c4d32009a\") " pod="openshift-image-registry/node-ca-h79hr" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.479221 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/455db960-eb74-4f4e-b297-b06c4d32009a-host\") pod \"node-ca-h79hr\" (UID: \"455db960-eb74-4f4e-b297-b06c4d32009a\") " pod="openshift-image-registry/node-ca-h79hr" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.479276 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/455db960-eb74-4f4e-b297-b06c4d32009a-host\") pod \"node-ca-h79hr\" (UID: \"455db960-eb74-4f4e-b297-b06c4d32009a\") " pod="openshift-image-registry/node-ca-h79hr" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.480192 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/455db960-eb74-4f4e-b297-b06c4d32009a-serviceca\") pod \"node-ca-h79hr\" (UID: \"455db960-eb74-4f4e-b297-b06c4d32009a\") " pod="openshift-image-registry/node-ca-h79hr" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.502407 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 
09:41:20.547786 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jt9l\" (UniqueName: \"kubernetes.io/projected/455db960-eb74-4f4e-b297-b06c4d32009a-kube-api-access-2jt9l\") pod \"node-ca-h79hr\" (UID: \"455db960-eb74-4f4e-b297-b06c4d32009a\") " pod="openshift-image-registry/node-ca-h79hr" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.559316 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-h79hr" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.562653 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.606877 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:20Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.643782 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25971
26bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:20Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.682869 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:20Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.724751 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:20Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.764153 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:20Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.802469 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:20Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.845246 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:20Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.889640 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:20Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.926212 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:20Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.964422 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-h79hr" 
event={"ID":"455db960-eb74-4f4e-b297-b06c4d32009a","Type":"ContainerStarted","Data":"2a1f070c2280deea98151eecdddead7eb6ad99779810b00970f91d39886711ba"} Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.966319 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerStarted","Data":"bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0"} Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.968107 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d
3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 
UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:20Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:20 crc kubenswrapper[4972]: I1121 09:41:20.969173 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e"} Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.004717 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:21Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.050930 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:21Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.084205 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:21Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.135018 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:21Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.165542 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:21Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.205598 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:21Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.243913 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:21Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.286956 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 
09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:21Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.289387 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:41:21 crc kubenswrapper[4972]: E1121 09:41:21.289632 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:41:25.289597471 +0000 UTC m=+30.398739969 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.323889 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:21Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.376114 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:21Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.391617 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.391704 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.391813 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.391910 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:21 crc kubenswrapper[4972]: E1121 09:41:21.391977 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 09:41:21 crc kubenswrapper[4972]: E1121 09:41:21.392027 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 09:41:21 crc kubenswrapper[4972]: E1121 09:41:21.392055 
4972 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:21 crc kubenswrapper[4972]: E1121 09:41:21.392064 4972 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 09:41:21 crc kubenswrapper[4972]: E1121 09:41:21.392150 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:25.39212286 +0000 UTC m=+30.501265398 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 09:41:21 crc kubenswrapper[4972]: E1121 09:41:21.392177 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:25.392164161 +0000 UTC m=+30.501306699 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:21 crc kubenswrapper[4972]: E1121 09:41:21.392246 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 09:41:21 crc kubenswrapper[4972]: E1121 09:41:21.392270 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 09:41:21 crc kubenswrapper[4972]: E1121 09:41:21.392291 4972 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:21 crc kubenswrapper[4972]: E1121 09:41:21.392352 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:25.392327735 +0000 UTC m=+30.501470283 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:21 crc kubenswrapper[4972]: E1121 09:41:21.392274 4972 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 09:41:21 crc kubenswrapper[4972]: E1121 09:41:21.392421 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:25.392403308 +0000 UTC m=+30.501545856 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.409802 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:21Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.443639 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:21Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.486926 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:21Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.526174 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:21Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.566620 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:21Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.605783 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:21Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.648928 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:21Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.685449 4972 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:21Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.727554 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:21Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.758866 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.758908 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.759006 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:21 crc kubenswrapper[4972]: E1121 09:41:21.759136 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:21 crc kubenswrapper[4972]: E1121 09:41:21.759302 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:21 crc kubenswrapper[4972]: E1121 09:41:21.759498 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.977031 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerStarted","Data":"7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343"} Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.978556 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567"} Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.980428 4972 generic.go:334] "Generic (PLEG): container finished" podID="4d5971d3-55cc-43d2-a604-149eeb23f1e2" containerID="1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5" exitCode=0 Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.980498 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" event={"ID":"4d5971d3-55cc-43d2-a604-149eeb23f1e2","Type":"ContainerDied","Data":"1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5"} Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.982641 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-h79hr" event={"ID":"455db960-eb74-4f4e-b297-b06c4d32009a","Type":"ContainerStarted","Data":"fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff"} Nov 21 09:41:21 crc kubenswrapper[4972]: I1121 09:41:21.995146 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:21Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.013595 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.031183 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.045310 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.065889 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.104423 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.136609 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\
"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.151983 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.169241 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 
09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.180930 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.192403 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.204779 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.250040 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.284381 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.323231 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.369695 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.408868 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.449223 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.488673 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.528677 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.563816 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.613145 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.645277 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.688635 4972 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.728750 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc 
kubenswrapper[4972]: I1121 09:41:22.766000 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.806483 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.848321 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 
09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:22Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.987398 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" event={"ID":"4d5971d3-55cc-43d2-a604-149eeb23f1e2","Type":"ContainerStarted","Data":"30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4"} Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.991454 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerStarted","Data":"1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3"} Nov 21 09:41:22 crc kubenswrapper[4972]: I1121 09:41:22.991545 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerStarted","Data":"338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c"} Nov 21 09:41:23 crc kubenswrapper[4972]: I1121 
09:41:23.004647 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:23Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:23 crc kubenswrapper[4972]: I1121 09:41:23.029982 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:23Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:23 crc kubenswrapper[4972]: I1121 09:41:23.042642 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:23Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:23 crc kubenswrapper[4972]: I1121 09:41:23.057359 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:23Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:23 crc kubenswrapper[4972]: I1121 09:41:23.074113 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:23Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:23 crc kubenswrapper[4972]: I1121 09:41:23.091255 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:23Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:23 crc kubenswrapper[4972]: I1121 09:41:23.126000 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:23Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:23 crc kubenswrapper[4972]: I1121 09:41:23.168366 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:23Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:23 crc kubenswrapper[4972]: I1121 09:41:23.209534 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:23Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:23 crc kubenswrapper[4972]: I1121 09:41:23.245902 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:23Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:23 crc kubenswrapper[4972]: I1121 09:41:23.285527 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-ku
bernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:23Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:23 crc kubenswrapper[4972]: I1121 09:41:23.324505 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 
09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:23Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:23 crc kubenswrapper[4972]: I1121 09:41:23.363480 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:23Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:23 crc kubenswrapper[4972]: I1121 09:41:23.407051 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:23Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:23 crc kubenswrapper[4972]: I1121 09:41:23.759021 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:23 crc kubenswrapper[4972]: I1121 09:41:23.759049 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:23 crc kubenswrapper[4972]: I1121 09:41:23.759162 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:23 crc kubenswrapper[4972]: E1121 09:41:23.759335 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:23 crc kubenswrapper[4972]: E1121 09:41:23.759488 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:23 crc kubenswrapper[4972]: E1121 09:41:23.760058 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.000205 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerStarted","Data":"5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3"} Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.337542 4972 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.340679 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.340727 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.340748 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.340939 4972 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.352436 4972 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.352794 4972 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.354496 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.354565 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.354584 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.354609 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.354626 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:24Z","lastTransitionTime":"2025-11-21T09:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:24 crc kubenswrapper[4972]: E1121 09:41:24.387065 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:24Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.392091 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.392172 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.392200 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.392236 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.392261 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:24Z","lastTransitionTime":"2025-11-21T09:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:24 crc kubenswrapper[4972]: E1121 09:41:24.415027 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:24Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.419910 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.419959 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.419973 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.419993 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.420006 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:24Z","lastTransitionTime":"2025-11-21T09:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:24 crc kubenswrapper[4972]: E1121 09:41:24.435874 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:24Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.440172 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.440213 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.440224 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.440240 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.440253 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:24Z","lastTransitionTime":"2025-11-21T09:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:24 crc kubenswrapper[4972]: E1121 09:41:24.461910 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:24Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.466771 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.466811 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.466821 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.466859 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.466872 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:24Z","lastTransitionTime":"2025-11-21T09:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:24 crc kubenswrapper[4972]: E1121 09:41:24.482528 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:24Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:24 crc kubenswrapper[4972]: E1121 09:41:24.482693 4972 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.484793 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.484853 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.484872 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.484892 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.484905 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:24Z","lastTransitionTime":"2025-11-21T09:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.587594 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.588439 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.588497 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.588526 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.588546 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:24Z","lastTransitionTime":"2025-11-21T09:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.692048 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.692120 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.692136 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.692172 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.692199 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:24Z","lastTransitionTime":"2025-11-21T09:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.804261 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.804330 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.804344 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.804369 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.804385 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:24Z","lastTransitionTime":"2025-11-21T09:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.907229 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.907273 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.907283 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.907300 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:24 crc kubenswrapper[4972]: I1121 09:41:24.907310 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:24Z","lastTransitionTime":"2025-11-21T09:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.009703 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.009773 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.009796 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.009862 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.009888 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:25Z","lastTransitionTime":"2025-11-21T09:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.112675 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.112726 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.112741 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.112760 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.112771 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:25Z","lastTransitionTime":"2025-11-21T09:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.216081 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.216164 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.216183 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.216627 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.216978 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:25Z","lastTransitionTime":"2025-11-21T09:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.320601 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.320667 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.320687 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.320711 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.320729 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:25Z","lastTransitionTime":"2025-11-21T09:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.335044 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:41:25 crc kubenswrapper[4972]: E1121 09:41:25.335222 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:41:33.335192525 +0000 UTC m=+38.444335064 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.423650 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.424157 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.424180 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.424242 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.424261 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:25Z","lastTransitionTime":"2025-11-21T09:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.436420 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.436485 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.436558 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.436594 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:25 crc kubenswrapper[4972]: E1121 09:41:25.436627 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 09:41:25 crc kubenswrapper[4972]: E1121 09:41:25.436657 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 09:41:25 crc kubenswrapper[4972]: E1121 09:41:25.436668 4972 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:25 crc kubenswrapper[4972]: E1121 09:41:25.436721 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:33.436702986 +0000 UTC m=+38.545845484 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:25 crc kubenswrapper[4972]: E1121 09:41:25.436720 4972 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 09:41:25 crc kubenswrapper[4972]: E1121 09:41:25.436759 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:33.436751918 +0000 UTC m=+38.545894416 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 09:41:25 crc kubenswrapper[4972]: E1121 09:41:25.436784 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 09:41:25 crc kubenswrapper[4972]: E1121 09:41:25.436805 4972 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 09:41:25 crc kubenswrapper[4972]: E1121 09:41:25.436820 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 09:41:25 crc kubenswrapper[4972]: E1121 09:41:25.436855 4972 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:25 crc kubenswrapper[4972]: E1121 09:41:25.436911 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:33.436888881 +0000 UTC m=+38.546031419 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 09:41:25 crc kubenswrapper[4972]: E1121 09:41:25.436939 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-11-21 09:41:33.436926702 +0000 UTC m=+38.546069230 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.481046 4972 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.526306 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.526372 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.526388 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.526410 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.526427 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:25Z","lastTransitionTime":"2025-11-21T09:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.630581 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.630652 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.630671 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.630700 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.630719 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:25Z","lastTransitionTime":"2025-11-21T09:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.734304 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.734371 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.734384 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.734413 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.734436 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:25Z","lastTransitionTime":"2025-11-21T09:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.759001 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.759047 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:25 crc kubenswrapper[4972]: E1121 09:41:25.759185 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.759298 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:25 crc kubenswrapper[4972]: E1121 09:41:25.759449 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:25 crc kubenswrapper[4972]: E1121 09:41:25.759531 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.783621 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.794643 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.795987 4972 scope.go:117] "RemoveContainer" containerID="b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d" Nov 21 09:41:25 crc kubenswrapper[4972]: E1121 09:41:25.796273 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.804196 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.832420 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.838022 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.838117 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.838133 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.838159 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.838174 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:25Z","lastTransitionTime":"2025-11-21T09:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.851486 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.868966 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.884412 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.906113 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.923129 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.941156 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.941212 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.941228 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.941279 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.941299 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:25Z","lastTransitionTime":"2025-11-21T09:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.947810 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 
09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.970963 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:25 crc kubenswrapper[4972]: I1121 09:41:25.994088 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.008580 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:26Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.013776 4972 generic.go:334] "Generic (PLEG): container finished" podID="4d5971d3-55cc-43d2-a604-149eeb23f1e2" containerID="30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4" exitCode=0 Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.013894 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" event={"ID":"4d5971d3-55cc-43d2-a604-149eeb23f1e2","Type":"ContainerDied","Data":"30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4"} Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.019779 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerStarted","Data":"d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c"} Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.037455 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\
\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:26Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.055375 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:26Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.058261 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.058296 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.058308 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.058328 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.058342 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:26Z","lastTransitionTime":"2025-11-21T09:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.072180 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:26Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.091611 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:26Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.103634 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:26Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.118899 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:26Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.134449 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:26Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.152654 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:26Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.160584 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.160639 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.160657 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.160681 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.160700 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:26Z","lastTransitionTime":"2025-11-21T09:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.168322 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:26Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.182219 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:26Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.197165 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:26Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.213231 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:26Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.226760 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\"
:\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:26Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.243066 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 
UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:26Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.260817 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:26Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.263581 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.263640 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.263652 4972 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.263671 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.263687 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:26Z","lastTransitionTime":"2025-11-21T09:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.277643 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:26Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.366254 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.366301 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.366311 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.366326 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.366337 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:26Z","lastTransitionTime":"2025-11-21T09:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.470901 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.471058 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.471086 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.471109 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.471123 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:26Z","lastTransitionTime":"2025-11-21T09:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.574668 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.574714 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.574725 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.574743 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.574754 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:26Z","lastTransitionTime":"2025-11-21T09:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.677587 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.677629 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.677641 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.677662 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.677682 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:26Z","lastTransitionTime":"2025-11-21T09:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.779923 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.779976 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.779990 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.780015 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.780031 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:26Z","lastTransitionTime":"2025-11-21T09:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.882866 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.882927 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.882945 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.882972 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.882989 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:26Z","lastTransitionTime":"2025-11-21T09:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.986354 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.986398 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.986414 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.986433 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:26 crc kubenswrapper[4972]: I1121 09:41:26.986448 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:26Z","lastTransitionTime":"2025-11-21T09:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.025710 4972 generic.go:334] "Generic (PLEG): container finished" podID="4d5971d3-55cc-43d2-a604-149eeb23f1e2" containerID="8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17" exitCode=0 Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.025756 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" event={"ID":"4d5971d3-55cc-43d2-a604-149eeb23f1e2","Type":"ContainerDied","Data":"8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17"} Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.039939 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 
09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.057250 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.072734 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.086645 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.088592 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.088623 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.088635 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.088651 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.088664 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:27Z","lastTransitionTime":"2025-11-21T09:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.107101 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.119325 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.132527 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.144855 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.156392 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.170302 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.183965 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.190510 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.190567 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.190582 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.190600 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.190612 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:27Z","lastTransitionTime":"2025-11-21T09:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.202472 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.214637 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.227047 4972 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.294306 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.294375 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.294394 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.294420 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.294435 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:27Z","lastTransitionTime":"2025-11-21T09:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.398263 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.398319 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.398329 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.398350 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.398365 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:27Z","lastTransitionTime":"2025-11-21T09:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.500802 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.500859 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.500871 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.500886 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.500895 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:27Z","lastTransitionTime":"2025-11-21T09:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.604910 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.604986 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.605004 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.605034 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.605051 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:27Z","lastTransitionTime":"2025-11-21T09:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.709075 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.709264 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.709288 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.709313 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.709331 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:27Z","lastTransitionTime":"2025-11-21T09:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.758652 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.758662 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.758784 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:27 crc kubenswrapper[4972]: E1121 09:41:27.759038 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:27 crc kubenswrapper[4972]: E1121 09:41:27.759142 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:27 crc kubenswrapper[4972]: E1121 09:41:27.759323 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.812271 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.812342 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.812362 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.812387 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.812405 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:27Z","lastTransitionTime":"2025-11-21T09:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.914731 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.914785 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.914799 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.914820 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:27 crc kubenswrapper[4972]: I1121 09:41:27.914854 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:27Z","lastTransitionTime":"2025-11-21T09:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.017465 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.017520 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.017534 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.017551 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.017566 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:28Z","lastTransitionTime":"2025-11-21T09:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.035498 4972 generic.go:334] "Generic (PLEG): container finished" podID="4d5971d3-55cc-43d2-a604-149eeb23f1e2" containerID="f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a" exitCode=0 Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.035589 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" event={"ID":"4d5971d3-55cc-43d2-a604-149eeb23f1e2","Type":"ContainerDied","Data":"f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a"} Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.040578 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerStarted","Data":"58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd"} Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.050703 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-21T09:41:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.071626 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d7
9d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.086277 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.101280 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.113632 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.120089 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.120137 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.120149 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.120166 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.120177 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:28Z","lastTransitionTime":"2025-11-21T09:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.123911 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.135199 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.147426 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.158757 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.171352 4972 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.184753 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bina
ry-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.195803 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.210178 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.220887 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 
09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.222300 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.222339 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.222347 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.222362 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.222372 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:28Z","lastTransitionTime":"2025-11-21T09:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.324523 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.324599 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.324622 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.324650 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.324670 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:28Z","lastTransitionTime":"2025-11-21T09:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.426951 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.426988 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.427002 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.427020 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.427031 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:28Z","lastTransitionTime":"2025-11-21T09:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.530145 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.530209 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.530227 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.530252 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.530271 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:28Z","lastTransitionTime":"2025-11-21T09:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.633248 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.633333 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.633358 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.633391 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.633414 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:28Z","lastTransitionTime":"2025-11-21T09:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.737377 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.737432 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.737450 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.737474 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.737491 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:28Z","lastTransitionTime":"2025-11-21T09:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.841183 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.841247 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.841273 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.841306 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.841329 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:28Z","lastTransitionTime":"2025-11-21T09:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.944579 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.944641 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.944658 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.944685 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:28 crc kubenswrapper[4972]: I1121 09:41:28.944702 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:28Z","lastTransitionTime":"2025-11-21T09:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.047164 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.047238 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.047264 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.047293 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.047316 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:29Z","lastTransitionTime":"2025-11-21T09:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.048625 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" event={"ID":"4d5971d3-55cc-43d2-a604-149eeb23f1e2","Type":"ContainerStarted","Data":"e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66"} Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.065353 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.086463 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.100959 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.113197 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fa
c117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.126756 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.138694 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.149464 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.149513 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.149532 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.149556 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.149573 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:29Z","lastTransitionTime":"2025-11-21T09:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.152693 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.163648 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.177076 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.190883 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.201682 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.213207 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 
09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.224423 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.237028 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.251604 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.251653 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.251665 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.251683 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.251695 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:29Z","lastTransitionTime":"2025-11-21T09:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.355504 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.355563 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.355576 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.355599 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.355634 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:29Z","lastTransitionTime":"2025-11-21T09:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.459629 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.459701 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.459721 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.459750 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.459770 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:29Z","lastTransitionTime":"2025-11-21T09:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.561679 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.561724 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.561736 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.561753 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.561768 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:29Z","lastTransitionTime":"2025-11-21T09:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.664987 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.665051 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.665061 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.665081 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.665096 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:29Z","lastTransitionTime":"2025-11-21T09:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.759353 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.759466 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:29 crc kubenswrapper[4972]: E1121 09:41:29.759561 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:29 crc kubenswrapper[4972]: E1121 09:41:29.759674 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.760030 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:29 crc kubenswrapper[4972]: E1121 09:41:29.760323 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.767747 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.767808 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.767824 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.767866 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.767884 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:29Z","lastTransitionTime":"2025-11-21T09:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.870634 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.870733 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.870749 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.870776 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.870801 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:29Z","lastTransitionTime":"2025-11-21T09:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.974898 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.975225 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.975235 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.975257 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:29 crc kubenswrapper[4972]: I1121 09:41:29.975271 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:29Z","lastTransitionTime":"2025-11-21T09:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.063591 4972 generic.go:334] "Generic (PLEG): container finished" podID="4d5971d3-55cc-43d2-a604-149eeb23f1e2" containerID="e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66" exitCode=0 Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.063714 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" event={"ID":"4d5971d3-55cc-43d2-a604-149eeb23f1e2","Type":"ContainerDied","Data":"e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66"} Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.080658 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.080700 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.080715 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.080742 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.080755 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:30Z","lastTransitionTime":"2025-11-21T09:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.089397 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:30Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.103576 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:30Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.117494 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:30Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.130111 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:30Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.141080 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:30Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.161762 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:30Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.172919 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:30Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.183114 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.183157 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.183169 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.183185 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.183196 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:30Z","lastTransitionTime":"2025-11-21T09:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.185130 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:30Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.198951 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:30Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.213147 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:30Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.228739 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:30Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.244253 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:30Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.259035 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:30Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.271608 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:30Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.285439 4972 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.285507 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.285536 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.285567 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.285593 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:30Z","lastTransitionTime":"2025-11-21T09:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.387612 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.387658 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.387673 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.387690 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.387703 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:30Z","lastTransitionTime":"2025-11-21T09:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.490394 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.490453 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.490470 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.490488 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.490502 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:30Z","lastTransitionTime":"2025-11-21T09:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.592916 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.592947 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.592959 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.592974 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.592985 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:30Z","lastTransitionTime":"2025-11-21T09:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.695289 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.695332 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.695341 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.695355 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.695365 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:30Z","lastTransitionTime":"2025-11-21T09:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.798226 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.798269 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.798279 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.798312 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.798323 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:30Z","lastTransitionTime":"2025-11-21T09:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.901561 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.901641 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.901669 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.901701 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:30 crc kubenswrapper[4972]: I1121 09:41:30.901725 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:30Z","lastTransitionTime":"2025-11-21T09:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.004873 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.004912 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.004926 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.004943 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.004954 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:31Z","lastTransitionTime":"2025-11-21T09:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.085206 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerStarted","Data":"8a5f1216ced2fbf547b3da553f52fa3efa0c5c1c0d55f7e101e43695636baab6"} Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.085567 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.085582 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.093865 4972 generic.go:334] "Generic (PLEG): container finished" podID="4d5971d3-55cc-43d2-a604-149eeb23f1e2" containerID="33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d" exitCode=0 Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.093914 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" event={"ID":"4d5971d3-55cc-43d2-a604-149eeb23f1e2","Type":"ContainerDied","Data":"33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d"} Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.100722 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.110705 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.110762 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.110875 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.110908 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.110923 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:31Z","lastTransitionTime":"2025-11-21T09:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.121597 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.126181 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.133569 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a5f1216ced2fbf547b3da553f52fa3efa0c5c1c
0d55f7e101e43695636baab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.134219 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl"] Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.135143 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.136643 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.136717 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.152489 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.164403 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.175521 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.186443 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.198598 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.213132 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.213174 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.213186 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.213208 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.213223 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:31Z","lastTransitionTime":"2025-11-21T09:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.213277 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.226874 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.240218 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.252725 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.266545 4972 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.279716 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.292039 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.295498 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctj6j\" (UniqueName: \"kubernetes.io/projected/0a546c25-18b2-417a-a58b-4017476895fe-kube-api-access-ctj6j\") pod \"ovnkube-control-plane-749d76644c-jcfcl\" (UID: \"0a546c25-18b2-417a-a58b-4017476895fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.295701 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0a546c25-18b2-417a-a58b-4017476895fe-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jcfcl\" (UID: \"0a546c25-18b2-417a-a58b-4017476895fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.295867 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0a546c25-18b2-417a-a58b-4017476895fe-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jcfcl\" (UID: \"0a546c25-18b2-417a-a58b-4017476895fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.295973 4972 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0a546c25-18b2-417a-a58b-4017476895fe-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jcfcl\" (UID: \"0a546c25-18b2-417a-a58b-4017476895fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.305150 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\
\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"st
ate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.316467 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.316531 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.316543 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.316561 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.316572 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:31Z","lastTransitionTime":"2025-11-21T09:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.317113 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.332695 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.348815 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 
09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.362157 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.376322 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.386782 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.396905 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0a546c25-18b2-417a-a58b-4017476895fe-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jcfcl\" (UID: \"0a546c25-18b2-417a-a58b-4017476895fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.396953 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0a546c25-18b2-417a-a58b-4017476895fe-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jcfcl\" (UID: \"0a546c25-18b2-417a-a58b-4017476895fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.396979 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctj6j\" (UniqueName: \"kubernetes.io/projected/0a546c25-18b2-417a-a58b-4017476895fe-kube-api-access-ctj6j\") pod \"ovnkube-control-plane-749d76644c-jcfcl\" (UID: \"0a546c25-18b2-417a-a58b-4017476895fe\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.397022 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0a546c25-18b2-417a-a58b-4017476895fe-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jcfcl\" (UID: \"0a546c25-18b2-417a-a58b-4017476895fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.397791 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0a546c25-18b2-417a-a58b-4017476895fe-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jcfcl\" (UID: \"0a546c25-18b2-417a-a58b-4017476895fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.397974 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0a546c25-18b2-417a-a58b-4017476895fe-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jcfcl\" (UID: \"0a546c25-18b2-417a-a58b-4017476895fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.398146 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.402236 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0a546c25-18b2-417a-a58b-4017476895fe-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jcfcl\" (UID: \"0a546c25-18b2-417a-a58b-4017476895fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.413561 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctj6j\" (UniqueName: \"kubernetes.io/projected/0a546c25-18b2-417a-a58b-4017476895fe-kube-api-access-ctj6j\") pod \"ovnkube-control-plane-749d76644c-jcfcl\" (UID: \"0a546c25-18b2-417a-a58b-4017476895fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.418545 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a5f1216ced2fbf547b3da553f52fa3efa0c5c1c
0d55f7e101e43695636baab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.420247 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.420282 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.420311 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.420329 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.420339 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:31Z","lastTransitionTime":"2025-11-21T09:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.430218 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.442919 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.455421 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.460422 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.466537 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: W1121 09:41:31.473582 4972 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a546c25_18b2_417a_a58b_4017476895fe.slice/crio-24d6a41576dfcf53bfef3104d56bde46ffbea3cdbd3e7246b1e9a1fe3c4bec0c WatchSource:0}: Error finding container 24d6a41576dfcf53bfef3104d56bde46ffbea3cdbd3e7246b1e9a1fe3c4bec0c: Status 404 returned error can't find the container with id 24d6a41576dfcf53bfef3104d56bde46ffbea3cdbd3e7246b1e9a1fe3c4bec0c Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.479113 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.499197 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.523211 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.523247 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.523256 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.523272 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.523283 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:31Z","lastTransitionTime":"2025-11-21T09:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.625009 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.625051 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.625064 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.625087 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.625101 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:31Z","lastTransitionTime":"2025-11-21T09:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.728182 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.728228 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.728239 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.728255 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.728266 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:31Z","lastTransitionTime":"2025-11-21T09:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.758625 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.758692 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.758649 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:31 crc kubenswrapper[4972]: E1121 09:41:31.758811 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:31 crc kubenswrapper[4972]: E1121 09:41:31.758902 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:31 crc kubenswrapper[4972]: E1121 09:41:31.759038 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.830802 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.830856 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.830868 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.830887 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.830898 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:31Z","lastTransitionTime":"2025-11-21T09:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.863667 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-k9mnh"] Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.864174 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:31 crc kubenswrapper[4972]: E1121 09:41:31.864238 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.891406 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a5f1216ced2fbf547b3da553f52fa3efa0c5c1c0d55f7e101e43695636baab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\
\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.907001 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.917953 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.929060 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.933108 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.933155 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.933172 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.933193 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.933207 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:31Z","lastTransitionTime":"2025-11-21T09:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.942974 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.954899 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.969620 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.980862 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:31 crc kubenswrapper[4972]: I1121 09:41:31.996888 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:31Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.004975 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs\") pod \"network-metrics-daemon-k9mnh\" (UID: \"df5e96f4-727c-44c1-8e2f-e624c912430b\") " pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.005067 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n9vt\" (UniqueName: \"kubernetes.io/projected/df5e96f4-727c-44c1-8e2f-e624c912430b-kube-api-access-8n9vt\") pod \"network-metrics-daemon-k9mnh\" (UID: \"df5e96f4-727c-44c1-8e2f-e624c912430b\") " pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.013491 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:32Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.030589 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3
e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:32Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.035356 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.035395 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.035407 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.035442 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.035454 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:32Z","lastTransitionTime":"2025-11-21T09:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.043977 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:32Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.064930 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:32Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.077551 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:32Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.093050 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:32Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.100643 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" event={"ID":"0a546c25-18b2-417a-a58b-4017476895fe","Type":"ContainerStarted","Data":"24d6a41576dfcf53bfef3104d56bde46ffbea3cdbd3e7246b1e9a1fe3c4bec0c"} Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.100790 4972 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.106538 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs\") pod \"network-metrics-daemon-k9mnh\" (UID: \"df5e96f4-727c-44c1-8e2f-e624c912430b\") " pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:32 crc kubenswrapper[4972]: E1121 09:41:32.106669 4972 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 09:41:32 crc kubenswrapper[4972]: E1121 09:41:32.106732 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs podName:df5e96f4-727c-44c1-8e2f-e624c912430b nodeName:}" failed. No retries permitted until 2025-11-21 09:41:32.606715187 +0000 UTC m=+37.715857675 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs") pod "network-metrics-daemon-k9mnh" (UID: "df5e96f4-727c-44c1-8e2f-e624c912430b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.106725 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n9vt\" (UniqueName: \"kubernetes.io/projected/df5e96f4-727c-44c1-8e2f-e624c912430b-kube-api-access-8n9vt\") pod \"network-metrics-daemon-k9mnh\" (UID: \"df5e96f4-727c-44c1-8e2f-e624c912430b\") " pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.110235 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:32Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.131721 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n9vt\" (UniqueName: \"kubernetes.io/projected/df5e96f4-727c-44c1-8e2f-e624c912430b-kube-api-access-8n9vt\") pod \"network-metrics-daemon-k9mnh\" (UID: \"df5e96f4-727c-44c1-8e2f-e624c912430b\") " pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.138763 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.138819 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.138854 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.138873 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.138885 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:32Z","lastTransitionTime":"2025-11-21T09:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.242286 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.242357 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.242382 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.242412 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.242437 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:32Z","lastTransitionTime":"2025-11-21T09:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.345208 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.345279 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.345296 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.345321 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.345339 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:32Z","lastTransitionTime":"2025-11-21T09:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.447733 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.447807 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.447867 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.447894 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.447915 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:32Z","lastTransitionTime":"2025-11-21T09:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.552533 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.552587 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.552612 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.552635 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.552649 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:32Z","lastTransitionTime":"2025-11-21T09:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.612399 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs\") pod \"network-metrics-daemon-k9mnh\" (UID: \"df5e96f4-727c-44c1-8e2f-e624c912430b\") " pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:32 crc kubenswrapper[4972]: E1121 09:41:32.612701 4972 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 09:41:32 crc kubenswrapper[4972]: E1121 09:41:32.613004 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs podName:df5e96f4-727c-44c1-8e2f-e624c912430b nodeName:}" failed. No retries permitted until 2025-11-21 09:41:33.612983416 +0000 UTC m=+38.722125914 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs") pod "network-metrics-daemon-k9mnh" (UID: "df5e96f4-727c-44c1-8e2f-e624c912430b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.656819 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.656913 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.656931 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.656956 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.656969 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:32Z","lastTransitionTime":"2025-11-21T09:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.759066 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.759100 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.759112 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.759128 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.759156 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:32Z","lastTransitionTime":"2025-11-21T09:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.862443 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.862501 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.862513 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.862533 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.862544 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:32Z","lastTransitionTime":"2025-11-21T09:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.964905 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.964957 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.964972 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.964989 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:32 crc kubenswrapper[4972]: I1121 09:41:32.965001 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:32Z","lastTransitionTime":"2025-11-21T09:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.068105 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.068155 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.068168 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.068187 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.068200 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:33Z","lastTransitionTime":"2025-11-21T09:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.107323 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" event={"ID":"0a546c25-18b2-417a-a58b-4017476895fe","Type":"ContainerStarted","Data":"a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1"} Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.107370 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" event={"ID":"0a546c25-18b2-417a-a58b-4017476895fe","Type":"ContainerStarted","Data":"b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe"} Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.113537 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" event={"ID":"4d5971d3-55cc-43d2-a604-149eeb23f1e2","Type":"ContainerStarted","Data":"1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b"} Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.113627 4972 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.123051 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.139528 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.155553 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.170962 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.171036 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.171051 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.171073 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.171108 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:33Z","lastTransitionTime":"2025-11-21T09:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.173135 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.191558 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.209245 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.225996 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.243995 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.258558 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.273030 4972 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.273086 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.273130 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.273151 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.273162 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:33Z","lastTransitionTime":"2025-11-21T09:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.276386 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 
09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.289666 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.305097 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.317887 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 
09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.330970 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.352542 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a5f1216ced2fbf547b3da553f52fa3efa0c5c1c0d55f7e101e43695636baab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.366123 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.375572 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.375626 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.375641 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.375662 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.375675 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:33Z","lastTransitionTime":"2025-11-21T09:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.380571 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.398704 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f
567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.411537 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.420244 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:41:33 crc kubenswrapper[4972]: E1121 09:41:33.420511 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:41:49.420476904 +0000 UTC m=+54.529619402 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.429511 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 
09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.450989 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.465445 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.477882 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.477926 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.477937 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.477957 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.477970 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:33Z","lastTransitionTime":"2025-11-21T09:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.480946 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.494730 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.513280 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a5f1216ced2fbf547b3da553f52fa3efa0c5c1c0d55f7e101e43695636baab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.521316 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.521352 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.521401 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.521427 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:33 crc kubenswrapper[4972]: E1121 09:41:33.521510 4972 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 09:41:33 crc kubenswrapper[4972]: E1121 09:41:33.521548 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 09:41:33 crc kubenswrapper[4972]: E1121 09:41:33.521565 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 09:41:33 crc kubenswrapper[4972]: E1121 09:41:33.521569 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 09:41:33 crc kubenswrapper[4972]: E1121 09:41:33.521582 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 09:41:33 crc kubenswrapper[4972]: E1121 09:41:33.521588 4972 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:33 crc kubenswrapper[4972]: E1121 09:41:33.521593 4972 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:33 crc kubenswrapper[4972]: E1121 09:41:33.521608 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:49.521586254 +0000 UTC m=+54.630728782 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 09:41:33 crc kubenswrapper[4972]: E1121 09:41:33.521523 4972 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 09:41:33 crc kubenswrapper[4972]: E1121 09:41:33.521629 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:49.521620555 +0000 UTC m=+54.630763053 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:33 crc kubenswrapper[4972]: E1121 09:41:33.521650 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:49.521642446 +0000 UTC m=+54.630785034 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:33 crc kubenswrapper[4972]: E1121 09:41:33.521662 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 09:41:49.521657186 +0000 UTC m=+54.630799684 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.523197 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.532599 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.544201 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.557618 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.569622 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.580595 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.580635 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.580648 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.580665 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.580678 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:33Z","lastTransitionTime":"2025-11-21T09:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.580846 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.592919 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:33Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.622366 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs\") pod \"network-metrics-daemon-k9mnh\" (UID: \"df5e96f4-727c-44c1-8e2f-e624c912430b\") " pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:33 crc kubenswrapper[4972]: E1121 09:41:33.622562 4972 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 09:41:33 crc kubenswrapper[4972]: E1121 09:41:33.622637 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs podName:df5e96f4-727c-44c1-8e2f-e624c912430b nodeName:}" failed. No retries permitted until 2025-11-21 09:41:35.622618702 +0000 UTC m=+40.731761200 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs") pod "network-metrics-daemon-k9mnh" (UID: "df5e96f4-727c-44c1-8e2f-e624c912430b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.683808 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.683906 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.683924 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.683947 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.683965 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:33Z","lastTransitionTime":"2025-11-21T09:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.758581 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.758648 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.758704 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.758735 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:33 crc kubenswrapper[4972]: E1121 09:41:33.758938 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:33 crc kubenswrapper[4972]: E1121 09:41:33.759104 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:41:33 crc kubenswrapper[4972]: E1121 09:41:33.759349 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:33 crc kubenswrapper[4972]: E1121 09:41:33.759422 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.786909 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.786974 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.786992 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.787018 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.787037 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:33Z","lastTransitionTime":"2025-11-21T09:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.890505 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.890575 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.890593 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.890619 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.890639 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:33Z","lastTransitionTime":"2025-11-21T09:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.993688 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.993771 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.993805 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.993897 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:33 crc kubenswrapper[4972]: I1121 09:41:33.993924 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:33Z","lastTransitionTime":"2025-11-21T09:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.097325 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.097394 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.097404 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.097420 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.097431 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:34Z","lastTransitionTime":"2025-11-21T09:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.201200 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.201280 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.201306 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.201338 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.201363 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:34Z","lastTransitionTime":"2025-11-21T09:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.305261 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.305341 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.305363 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.305396 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.305422 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:34Z","lastTransitionTime":"2025-11-21T09:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.408292 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.408349 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.408361 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.408378 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.408395 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:34Z","lastTransitionTime":"2025-11-21T09:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.510679 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.510714 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.510723 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.510736 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.510746 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:34Z","lastTransitionTime":"2025-11-21T09:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.613445 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.613514 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.613534 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.613561 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.613581 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:34Z","lastTransitionTime":"2025-11-21T09:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.716102 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.716156 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.716167 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.716187 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.716199 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:34Z","lastTransitionTime":"2025-11-21T09:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.802814 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.802883 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.802899 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.802920 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.802938 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:34Z","lastTransitionTime":"2025-11-21T09:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:34 crc kubenswrapper[4972]: E1121 09:41:34.816478 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:34Z is after 
2025-08-24T17:21:41Z" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.820097 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.820158 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.820173 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.820196 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.820211 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:34Z","lastTransitionTime":"2025-11-21T09:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:34 crc kubenswrapper[4972]: E1121 09:41:34.832412 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:34Z is after 
2025-08-24T17:21:41Z" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.836516 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.836566 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.836579 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.836596 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.836638 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:34Z","lastTransitionTime":"2025-11-21T09:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:34 crc kubenswrapper[4972]: E1121 09:41:34.849247 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:34Z is after 
2025-08-24T17:21:41Z" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.853855 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.853892 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.853904 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.853919 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.853929 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:34Z","lastTransitionTime":"2025-11-21T09:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:34 crc kubenswrapper[4972]: E1121 09:41:34.867644 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:34Z is after 
2025-08-24T17:21:41Z" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.872460 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.872511 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.872526 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.872544 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.872556 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:34Z","lastTransitionTime":"2025-11-21T09:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:34 crc kubenswrapper[4972]: E1121 09:41:34.884700 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:34Z is after 
2025-08-24T17:21:41Z" Nov 21 09:41:34 crc kubenswrapper[4972]: E1121 09:41:34.884820 4972 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.886312 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.886342 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.886352 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.886368 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.886378 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:34Z","lastTransitionTime":"2025-11-21T09:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.989298 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.989352 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.989364 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.989382 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:34 crc kubenswrapper[4972]: I1121 09:41:34.989395 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:34Z","lastTransitionTime":"2025-11-21T09:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.093069 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.093136 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.093161 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.093193 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.093216 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:35Z","lastTransitionTime":"2025-11-21T09:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.124218 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxwhb_c159725e-4c82-4474-96d9-211f7d8db47f/ovnkube-controller/0.log" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.127397 4972 generic.go:334] "Generic (PLEG): container finished" podID="c159725e-4c82-4474-96d9-211f7d8db47f" containerID="8a5f1216ced2fbf547b3da553f52fa3efa0c5c1c0d55f7e101e43695636baab6" exitCode=1 Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.127459 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerDied","Data":"8a5f1216ced2fbf547b3da553f52fa3efa0c5c1c0d55f7e101e43695636baab6"} Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.128803 4972 scope.go:117] "RemoveContainer" containerID="8a5f1216ced2fbf547b3da553f52fa3efa0c5c1c0d55f7e101e43695636baab6" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.150544 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.173853 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a5f1216ced2fbf547b3da553f52fa3efa0c5c1c0d55f7e101e43695636baab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5f1216ced2fbf547b3da553f52fa3efa0c5c1c0d55f7e101e43695636baab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:41:35Z\\\",\\\"message\\\":\\\"/client-go/informers/factory.go:160\\\\nI1121 09:41:34.995565 6242 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 09:41:34.995716 6242 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 09:41:34.995951 6242 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 09:41:34.996341 6242 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:41:34.996383 6242 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:41:34.996422 6242 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:41:34.996484 6242 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:41:34.996505 6242 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:41:34.996549 6242 factory.go:656] Stopping watch factory\\\\nI1121 09:41:34.996579 6242 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:41:34.996620 6242 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:41:34.996647 6242 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:41:34.996670 6242 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:41:34.996693 6242 handler.go:208] Removed *v1.Node event handler 
2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.187800 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192
.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.195155 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.195223 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.195232 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.195246 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.195256 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:35Z","lastTransitionTime":"2025-11-21T09:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.202674 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.220115 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.239892 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.258127 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.274393 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.294563 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.297352 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.297393 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.297405 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.297423 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.297435 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:35Z","lastTransitionTime":"2025-11-21T09:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.309798 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.328328 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.347685 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11
-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.364188 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont
/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.377888 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.392698 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.400267 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.400340 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.400353 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.400377 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.400393 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:35Z","lastTransitionTime":"2025-11-21T09:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.406892 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.502703 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.502767 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.502784 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.502814 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.502861 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:35Z","lastTransitionTime":"2025-11-21T09:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.605860 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.605960 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.605976 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.605998 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.606007 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:35Z","lastTransitionTime":"2025-11-21T09:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.645057 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs\") pod \"network-metrics-daemon-k9mnh\" (UID: \"df5e96f4-727c-44c1-8e2f-e624c912430b\") " pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:35 crc kubenswrapper[4972]: E1121 09:41:35.645250 4972 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 09:41:35 crc kubenswrapper[4972]: E1121 09:41:35.645337 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs podName:df5e96f4-727c-44c1-8e2f-e624c912430b nodeName:}" failed. 
No retries permitted until 2025-11-21 09:41:39.645314216 +0000 UTC m=+44.754456714 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs") pod "network-metrics-daemon-k9mnh" (UID: "df5e96f4-727c-44c1-8e2f-e624c912430b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.709307 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.709355 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.709366 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.709385 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.709400 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:35Z","lastTransitionTime":"2025-11-21T09:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.758423 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.758480 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.758572 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:35 crc kubenswrapper[4972]: E1121 09:41:35.758734 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.758793 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:35 crc kubenswrapper[4972]: E1121 09:41:35.758944 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:41:35 crc kubenswrapper[4972]: E1121 09:41:35.759092 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:35 crc kubenswrapper[4972]: E1121 09:41:35.759193 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.776409 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.793441 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.806706 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.811874 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.811913 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.811925 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.811941 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.811951 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:35Z","lastTransitionTime":"2025-11-21T09:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.831429 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.851366 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.861815 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.873235 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da
456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.884700 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1772
25c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.899491 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"}
,{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.914391 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 
09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.914811 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.914862 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.914874 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.914891 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.914901 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:35Z","lastTransitionTime":"2025-11-21T09:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.927012 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.937806 4972 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.949669 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 
09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.961781 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.979028 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a5f1216ced2fbf547b3da553f52fa3efa0c5c1c0d55f7e101e43695636baab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5f1216ced2fbf547b3da553f52fa3efa0c5c1c0d55f7e101e43695636baab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:41:35Z\\\",\\\"message\\\":\\\"/client-go/informers/factory.go:160\\\\nI1121 09:41:34.995565 6242 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 09:41:34.995716 6242 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 09:41:34.995951 6242 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 09:41:34.996341 6242 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:41:34.996383 6242 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:41:34.996422 6242 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:41:34.996484 6242 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:41:34.996505 6242 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:41:34.996549 6242 factory.go:656] Stopping watch factory\\\\nI1121 09:41:34.996579 6242 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:41:34.996620 6242 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:41:34.996647 6242 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:41:34.996670 6242 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:41:34.996693 6242 handler.go:208] Removed *v1.Node event handler 
2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:35 crc kubenswrapper[4972]: I1121 09:41:35.991425 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192
.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.017463 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.017498 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.017508 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.017521 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.017530 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:36Z","lastTransitionTime":"2025-11-21T09:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.121282 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.121320 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.121330 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.121348 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.121358 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:36Z","lastTransitionTime":"2025-11-21T09:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.131161 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxwhb_c159725e-4c82-4474-96d9-211f7d8db47f/ovnkube-controller/0.log" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.133719 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerStarted","Data":"8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a"} Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.133877 4972 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.147047 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\
\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:36Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.159906 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"rea
son\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:36Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.170113 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:36Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.181585 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:36Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.191167 4972 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-21T09:41:36Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.202395 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:36Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.214274 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:36Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.223598 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.223644 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.223654 4972 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.223672 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.223686 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:36Z","lastTransitionTime":"2025-11-21T09:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.267204 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f7f54295494552443c6694bd9a9c1ffaf360a01
8baf2e4475f409304f37005a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5f1216ced2fbf547b3da553f52fa3efa0c5c1c0d55f7e101e43695636baab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:41:35Z\\\",\\\"message\\\":\\\"/client-go/informers/factory.go:160\\\\nI1121 09:41:34.995565 6242 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 09:41:34.995716 6242 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 09:41:34.995951 6242 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 09:41:34.996341 6242 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:41:34.996383 6242 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:41:34.996422 6242 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:41:34.996484 6242 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:41:34.996505 6242 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:41:34.996549 6242 factory.go:656] Stopping watch factory\\\\nI1121 09:41:34.996579 6242 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:41:34.996620 6242 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:41:34.996647 6242 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:41:34.996670 6242 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:41:34.996693 6242 handler.go:208] Removed *v1.Node event handler 
2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:36Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.285349 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:36Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.293345 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:36Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.303775 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:36Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.313292 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:36Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.321385 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:36Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.326066 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.326093 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.326104 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.326118 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.326128 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:36Z","lastTransitionTime":"2025-11-21T09:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.331582 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:36Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.341061 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:36Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.351590 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:36Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.428412 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.428650 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.428667 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.428682 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.428701 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:36Z","lastTransitionTime":"2025-11-21T09:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.532202 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.532259 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.532276 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.532296 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.532313 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:36Z","lastTransitionTime":"2025-11-21T09:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.635785 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.635869 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.635886 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.635914 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.635931 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:36Z","lastTransitionTime":"2025-11-21T09:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.739219 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.739261 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.739272 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.739290 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.739301 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:36Z","lastTransitionTime":"2025-11-21T09:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.842859 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.842939 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.842956 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.842984 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.843000 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:36Z","lastTransitionTime":"2025-11-21T09:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.945474 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.945529 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.945541 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.945556 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:36 crc kubenswrapper[4972]: I1121 09:41:36.945570 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:36Z","lastTransitionTime":"2025-11-21T09:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.048449 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.048523 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.048547 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.048578 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.048604 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:37Z","lastTransitionTime":"2025-11-21T09:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.139466 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxwhb_c159725e-4c82-4474-96d9-211f7d8db47f/ovnkube-controller/1.log" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.140282 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxwhb_c159725e-4c82-4474-96d9-211f7d8db47f/ovnkube-controller/0.log" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.143921 4972 generic.go:334] "Generic (PLEG): container finished" podID="c159725e-4c82-4474-96d9-211f7d8db47f" containerID="8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a" exitCode=1 Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.143958 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerDied","Data":"8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a"} Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.143990 4972 scope.go:117] "RemoveContainer" containerID="8a5f1216ced2fbf547b3da553f52fa3efa0c5c1c0d55f7e101e43695636baab6" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.144643 4972 scope.go:117] "RemoveContainer" containerID="8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a" Nov 21 09:41:37 crc kubenswrapper[4972]: E1121 09:41:37.144861 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bxwhb_openshift-ovn-kubernetes(c159725e-4c82-4474-96d9-211f7d8db47f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.152410 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.152467 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.152489 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.152516 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.152537 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:37Z","lastTransitionTime":"2025-11-21T09:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.172130 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:37Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.193764 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:37Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.209398 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:37Z is after 2025-08-24T17:21:41Z" Nov 21 
09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.227759 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:37Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.242627 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:37Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.255584 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.255620 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.255629 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.255660 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.255672 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:37Z","lastTransitionTime":"2025-11-21T09:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.267942 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5f1216ced2fbf547b3da553f52fa3efa0c5c1c0d55f7e101e43695636baab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:41:35Z\\\",\\\"message\\\":\\\"/client-go/informers/factory.go:160\\\\nI1121 09:41:34.995565 6242 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 09:41:34.995716 6242 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 09:41:34.995951 6242 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 09:41:34.996341 6242 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:41:34.996383 6242 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:41:34.996422 6242 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:41:34.996484 6242 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:41:34.996505 6242 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:41:34.996549 6242 factory.go:656] Stopping watch factory\\\\nI1121 09:41:34.996579 6242 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:41:34.996620 6242 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:41:34.996647 6242 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:41:34.996670 6242 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:41:34.996693 6242 handler.go:208] Removed *v1.Node event handler 
2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:41:36Z\\\",\\\"message\\\":\\\" 6446 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 09:41:36.090780 6446 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 09:41:36.090819 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:41:36.090824 6446 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:41:36.090877 6446 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:41:36.090900 6446 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 09:41:36.090912 6446 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:41:36.090916 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:41:36.090947 6446 factory.go:656] Stopping watch factory\\\\nI1121 09:41:36.090964 6446 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:41:36.091000 6446 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 09:41:36.091006 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:41:36.091012 6446 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:41:36.091018 6446 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:41:36.091022 6446 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 09:41:36.091028 6446 handler.go:208] Removed *v1.Node event handler 
7\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:37Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.281308 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192
.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:37Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.294210 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:37Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.308233 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:37Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.323105 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:37Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.338853 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:37Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.355771 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:37Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.358070 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.358108 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.358119 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.358136 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.358148 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:37Z","lastTransitionTime":"2025-11-21T09:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.373769 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:37Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.389431 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:37Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.406329 4972 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:37Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.422543 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386
851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:37Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.465285 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.465335 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.465355 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.465398 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.465412 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:37Z","lastTransitionTime":"2025-11-21T09:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.568458 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.568526 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.568549 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.568580 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.568602 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:37Z","lastTransitionTime":"2025-11-21T09:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.671514 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.671787 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.672001 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.672159 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.672328 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:37Z","lastTransitionTime":"2025-11-21T09:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.759290 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.759365 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.759447 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:37 crc kubenswrapper[4972]: E1121 09:41:37.759942 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.759513 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:37 crc kubenswrapper[4972]: E1121 09:41:37.760082 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:37 crc kubenswrapper[4972]: E1121 09:41:37.760039 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:41:37 crc kubenswrapper[4972]: E1121 09:41:37.760245 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.774768 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.775331 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.775358 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.775385 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.775407 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:37Z","lastTransitionTime":"2025-11-21T09:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.879030 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.879083 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.879099 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.879123 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.879140 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:37Z","lastTransitionTime":"2025-11-21T09:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.982934 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.982997 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.983009 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.983031 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:37 crc kubenswrapper[4972]: I1121 09:41:37.983043 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:37Z","lastTransitionTime":"2025-11-21T09:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.087363 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.087407 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.087422 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.087443 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.087459 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:38Z","lastTransitionTime":"2025-11-21T09:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.149623 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxwhb_c159725e-4c82-4474-96d9-211f7d8db47f/ovnkube-controller/1.log" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.189592 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.189644 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.189663 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.189685 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.189703 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:38Z","lastTransitionTime":"2025-11-21T09:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.293200 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.293247 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.293297 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.293321 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.293339 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:38Z","lastTransitionTime":"2025-11-21T09:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.397016 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.397069 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.397087 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.397110 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.397128 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:38Z","lastTransitionTime":"2025-11-21T09:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.500109 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.500150 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.500160 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.500175 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.500185 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:38Z","lastTransitionTime":"2025-11-21T09:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.603212 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.603273 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.603287 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.603305 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.603318 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:38Z","lastTransitionTime":"2025-11-21T09:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.706322 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.706419 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.706439 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.706461 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.706517 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:38Z","lastTransitionTime":"2025-11-21T09:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.809772 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.809818 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.809860 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.809883 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.809899 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:38Z","lastTransitionTime":"2025-11-21T09:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.912624 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.912715 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.912738 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.912761 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:38 crc kubenswrapper[4972]: I1121 09:41:38.912776 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:38Z","lastTransitionTime":"2025-11-21T09:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.015774 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.016426 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.016661 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.016953 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.017196 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:39Z","lastTransitionTime":"2025-11-21T09:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.120008 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.120089 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.120114 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.120145 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.120167 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:39Z","lastTransitionTime":"2025-11-21T09:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.223588 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.223671 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.223690 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.223721 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.223742 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:39Z","lastTransitionTime":"2025-11-21T09:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.327911 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.328025 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.328045 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.328077 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.328102 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:39Z","lastTransitionTime":"2025-11-21T09:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.431732 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.431790 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.431807 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.431871 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.431988 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:39Z","lastTransitionTime":"2025-11-21T09:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.535397 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.535568 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.535602 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.535633 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.535657 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:39Z","lastTransitionTime":"2025-11-21T09:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.638821 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.638904 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.638921 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.638945 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.638962 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:39Z","lastTransitionTime":"2025-11-21T09:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.691812 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs\") pod \"network-metrics-daemon-k9mnh\" (UID: \"df5e96f4-727c-44c1-8e2f-e624c912430b\") " pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:39 crc kubenswrapper[4972]: E1121 09:41:39.692064 4972 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 09:41:39 crc kubenswrapper[4972]: E1121 09:41:39.692196 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs podName:df5e96f4-727c-44c1-8e2f-e624c912430b nodeName:}" failed. No retries permitted until 2025-11-21 09:41:47.692163446 +0000 UTC m=+52.801305974 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs") pod "network-metrics-daemon-k9mnh" (UID: "df5e96f4-727c-44c1-8e2f-e624c912430b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.742315 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.742379 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.742392 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.742410 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.742421 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:39Z","lastTransitionTime":"2025-11-21T09:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.758985 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.759135 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.759225 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:39 crc kubenswrapper[4972]: E1121 09:41:39.759227 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.759261 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:39 crc kubenswrapper[4972]: E1121 09:41:39.759384 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:41:39 crc kubenswrapper[4972]: E1121 09:41:39.759949 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.760474 4972 scope.go:117] "RemoveContainer" containerID="b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d" Nov 21 09:41:39 crc kubenswrapper[4972]: E1121 09:41:39.760913 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.845196 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.845254 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.845273 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.845296 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.845313 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:39Z","lastTransitionTime":"2025-11-21T09:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.947611 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.947642 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.947654 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.947692 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:39 crc kubenswrapper[4972]: I1121 09:41:39.947704 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:39Z","lastTransitionTime":"2025-11-21T09:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.050997 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.051055 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.051072 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.051098 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.051119 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:40Z","lastTransitionTime":"2025-11-21T09:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.154233 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.154554 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.154658 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.154743 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.154824 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:40Z","lastTransitionTime":"2025-11-21T09:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.163714 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.166023 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2d577a086d8771d84ba9eef30f60dfda5f3e3b3973d2f2f5d106da15297313fc"} Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.166367 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.186935 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:40Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.205453 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:40Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.220568 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:40Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.240632 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:40Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.257920 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.258193 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.258344 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.258471 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.258771 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:40Z","lastTransitionTime":"2025-11-21T09:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.258707 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:40Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.277805 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:40Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.298986 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da
456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:40Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.311548 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1772
25c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:40Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.325155 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"}
,{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:40Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.337896 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d577a086d8771d84ba9eef30f60dfda5f3e3b3973d2f2f5d106da15297313fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 
09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:40Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.350010 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:40Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.360973 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.361024 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.361038 4972 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.361057 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.361072 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:40Z","lastTransitionTime":"2025-11-21T09:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.362048 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:40Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.372217 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:40Z is after 2025-08-24T17:21:41Z" Nov 21 
09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.384240 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:40Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.405465 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5f1216ced2fbf547b3da553f52fa3efa0c5c1c0d55f7e101e43695636baab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:41:35Z\\\",\\\"message\\\":\\\"/client-go/informers/factory.go:160\\\\nI1121 09:41:34.995565 6242 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 09:41:34.995716 6242 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 09:41:34.995951 6242 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 09:41:34.996341 6242 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:41:34.996383 6242 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:41:34.996422 6242 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:41:34.996484 6242 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:41:34.996505 6242 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:41:34.996549 6242 factory.go:656] Stopping watch factory\\\\nI1121 09:41:34.996579 6242 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:41:34.996620 6242 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:41:34.996647 6242 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:41:34.996670 6242 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:41:34.996693 6242 handler.go:208] Removed *v1.Node event handler 2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:41:36Z\\\",\\\"message\\\":\\\" 6446 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 09:41:36.090780 6446 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 
09:41:36.090819 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:41:36.090824 6446 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:41:36.090877 6446 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:41:36.090900 6446 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 09:41:36.090912 6446 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:41:36.090916 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:41:36.090947 6446 factory.go:656] Stopping watch factory\\\\nI1121 09:41:36.090964 6446 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:41:36.091000 6446 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 09:41:36.091006 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:41:36.091012 6446 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:41:36.091018 6446 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:41:36.091022 6446 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 09:41:36.091028 6446 handler.go:208] Removed *v1.Node event handler 7\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:40Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.414400 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:40Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.463860 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.463892 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.463901 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.463915 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.463923 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:40Z","lastTransitionTime":"2025-11-21T09:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.566706 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.566772 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.566788 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.566809 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.566848 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:40Z","lastTransitionTime":"2025-11-21T09:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.670148 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.670222 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.670236 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.670254 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.670266 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:40Z","lastTransitionTime":"2025-11-21T09:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.773427 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.773486 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.773503 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.773526 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.773545 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:40Z","lastTransitionTime":"2025-11-21T09:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.876597 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.876676 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.876695 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.876721 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.876740 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:40Z","lastTransitionTime":"2025-11-21T09:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.979047 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.979108 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.979125 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.979154 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:40 crc kubenswrapper[4972]: I1121 09:41:40.979176 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:40Z","lastTransitionTime":"2025-11-21T09:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.081819 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.081914 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.081938 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.081967 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.081990 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:41Z","lastTransitionTime":"2025-11-21T09:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.185044 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.185117 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.185132 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.185153 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.185166 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:41Z","lastTransitionTime":"2025-11-21T09:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.288114 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.288171 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.288195 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.288216 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.288229 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:41Z","lastTransitionTime":"2025-11-21T09:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.390941 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.390991 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.391006 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.391025 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.391037 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:41Z","lastTransitionTime":"2025-11-21T09:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.493067 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.493101 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.493110 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.493124 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.493132 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:41Z","lastTransitionTime":"2025-11-21T09:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.595590 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.595654 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.595672 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.595699 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.595717 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:41Z","lastTransitionTime":"2025-11-21T09:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.698667 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.698728 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.698742 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.698762 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.698775 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:41Z","lastTransitionTime":"2025-11-21T09:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.759406 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.759542 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.759608 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:41 crc kubenswrapper[4972]: E1121 09:41:41.759561 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.759542 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:41 crc kubenswrapper[4972]: E1121 09:41:41.759784 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:41 crc kubenswrapper[4972]: E1121 09:41:41.760051 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:41:41 crc kubenswrapper[4972]: E1121 09:41:41.760161 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.801781 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.801910 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.801935 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.801964 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.801987 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:41Z","lastTransitionTime":"2025-11-21T09:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.904440 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.904483 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.904496 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.904513 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:41 crc kubenswrapper[4972]: I1121 09:41:41.904525 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:41Z","lastTransitionTime":"2025-11-21T09:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.007327 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.007394 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.007414 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.007439 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.007458 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:42Z","lastTransitionTime":"2025-11-21T09:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.111362 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.111451 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.111492 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.111530 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.111554 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:42Z","lastTransitionTime":"2025-11-21T09:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.214208 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.214310 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.214321 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.214338 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.214350 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:42Z","lastTransitionTime":"2025-11-21T09:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.317316 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.318152 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.318193 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.318226 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.318246 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:42Z","lastTransitionTime":"2025-11-21T09:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.420845 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.420900 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.420913 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.420937 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.420951 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:42Z","lastTransitionTime":"2025-11-21T09:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.524147 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.524217 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.524236 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.524272 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.524291 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:42Z","lastTransitionTime":"2025-11-21T09:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.627422 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.627476 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.627492 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.627517 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.627530 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:42Z","lastTransitionTime":"2025-11-21T09:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.730283 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.730315 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.730324 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.730337 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.730346 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:42Z","lastTransitionTime":"2025-11-21T09:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.832906 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.832981 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.833008 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.833038 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.833063 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:42Z","lastTransitionTime":"2025-11-21T09:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.936346 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.936404 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.936424 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.936457 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:42 crc kubenswrapper[4972]: I1121 09:41:42.936475 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:42Z","lastTransitionTime":"2025-11-21T09:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.040007 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.040072 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.040096 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.040122 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.040140 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:43Z","lastTransitionTime":"2025-11-21T09:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.054449 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.056146 4972 scope.go:117] "RemoveContainer" containerID="8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a" Nov 21 09:41:43 crc kubenswrapper[4972]: E1121 09:41:43.056482 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bxwhb_openshift-ovn-kubernetes(c159725e-4c82-4474-96d9-211f7d8db47f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.080442 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d577a086d8771d84ba9eef30f60dfda5f3e3b3973d2f2f5d106da15297313fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 
09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:43Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.099755 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:43Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.120758 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:43Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.138282 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:43Z is after 2025-08-24T17:21:41Z" Nov 21 
09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.144748 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.144789 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.144803 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.144842 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.144855 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:43Z","lastTransitionTime":"2025-11-21T09:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.154588 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:43Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.179008 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:41:36Z\\\",\\\"message\\\":\\\" 6446 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 09:41:36.090780 6446 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 09:41:36.090819 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:41:36.090824 6446 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:41:36.090877 6446 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:41:36.090900 6446 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 09:41:36.090912 6446 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:41:36.090916 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:41:36.090947 6446 factory.go:656] Stopping watch factory\\\\nI1121 09:41:36.090964 6446 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:41:36.091000 6446 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 09:41:36.091006 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:41:36.091012 6446 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:41:36.091018 6446 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:41:36.091022 6446 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 09:41:36.091028 6446 handler.go:208] Removed *v1.Node event handler 
7\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bxwhb_openshift-ovn-kubernetes(c159725e-4c82-4474-96d9-211f7d8db47f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:43Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.193245 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:43Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.207673 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:43Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.223742 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:43Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.244416 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:43Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.248089 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.248136 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.248151 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.248170 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.248183 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:43Z","lastTransitionTime":"2025-11-21T09:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.260972 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:43Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.273548 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:43Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.289318 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:43Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.304346 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da
456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:43Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.316149 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1772
25c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:43Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.333974 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"}
,{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:43Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.351122 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.351193 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.351206 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.351235 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.351250 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:43Z","lastTransitionTime":"2025-11-21T09:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.453697 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.453749 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.453765 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.453787 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.453802 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:43Z","lastTransitionTime":"2025-11-21T09:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.556592 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.557006 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.557057 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.557109 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.557165 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:43Z","lastTransitionTime":"2025-11-21T09:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.661394 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.661684 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.661758 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.661854 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.661970 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:43Z","lastTransitionTime":"2025-11-21T09:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.759205 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.759275 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.759350 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.759388 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:43 crc kubenswrapper[4972]: E1121 09:41:43.759713 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:41:43 crc kubenswrapper[4972]: E1121 09:41:43.760168 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:43 crc kubenswrapper[4972]: E1121 09:41:43.760495 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:43 crc kubenswrapper[4972]: E1121 09:41:43.760652 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.764990 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.765175 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.765275 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.765405 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.765491 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:43Z","lastTransitionTime":"2025-11-21T09:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.868728 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.868762 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.868771 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.868783 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.868792 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:43Z","lastTransitionTime":"2025-11-21T09:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.972244 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.972312 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.972335 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.972366 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:43 crc kubenswrapper[4972]: I1121 09:41:43.972390 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:43Z","lastTransitionTime":"2025-11-21T09:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.074561 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.074608 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.074624 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.074645 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.074659 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:44Z","lastTransitionTime":"2025-11-21T09:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.178442 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.178506 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.178532 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.178558 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.178575 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:44Z","lastTransitionTime":"2025-11-21T09:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.282101 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.282136 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.282145 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.282159 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.282169 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:44Z","lastTransitionTime":"2025-11-21T09:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.385186 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.385245 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.385267 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.385298 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.385321 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:44Z","lastTransitionTime":"2025-11-21T09:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.488744 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.488815 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.488894 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.488930 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.488952 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:44Z","lastTransitionTime":"2025-11-21T09:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.591559 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.591643 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.591661 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.591684 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.591701 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:44Z","lastTransitionTime":"2025-11-21T09:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.695003 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.695071 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.695095 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.695130 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.695156 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:44Z","lastTransitionTime":"2025-11-21T09:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.798239 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.798307 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.798328 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.798355 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.798376 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:44Z","lastTransitionTime":"2025-11-21T09:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.901720 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.901768 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.901782 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.901802 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:44 crc kubenswrapper[4972]: I1121 09:41:44.901814 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:44Z","lastTransitionTime":"2025-11-21T09:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.005249 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.005284 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.005292 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.005305 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.005315 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:45Z","lastTransitionTime":"2025-11-21T09:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.108491 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.108574 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.108599 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.108627 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.108644 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:45Z","lastTransitionTime":"2025-11-21T09:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.210874 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.210930 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.210943 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.210970 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.210984 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:45Z","lastTransitionTime":"2025-11-21T09:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.238533 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.238582 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.238594 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.238610 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.238622 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:45Z","lastTransitionTime":"2025-11-21T09:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:45 crc kubenswrapper[4972]: E1121 09:41:45.255609 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:45Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.260499 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.260538 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.260552 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.260573 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.260584 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:45Z","lastTransitionTime":"2025-11-21T09:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:45 crc kubenswrapper[4972]: E1121 09:41:45.277686 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:45Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.282100 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.282148 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.282158 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.282177 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.282189 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:45Z","lastTransitionTime":"2025-11-21T09:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:45 crc kubenswrapper[4972]: E1121 09:41:45.307645 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:45Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.312880 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.312930 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.312947 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.312974 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.312991 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:45Z","lastTransitionTime":"2025-11-21T09:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:45 crc kubenswrapper[4972]: E1121 09:41:45.332713 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:45Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.336883 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.337039 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.337124 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.337224 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.337323 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:45Z","lastTransitionTime":"2025-11-21T09:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:45 crc kubenswrapper[4972]: E1121 09:41:45.356088 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:45Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:45 crc kubenswrapper[4972]: E1121 09:41:45.356466 4972 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.358186 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.358234 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.358246 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.358447 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.358460 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:45Z","lastTransitionTime":"2025-11-21T09:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.460422 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.460466 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.460480 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.460509 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.460523 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:45Z","lastTransitionTime":"2025-11-21T09:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.562616 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.562666 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.562683 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.562705 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.562722 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:45Z","lastTransitionTime":"2025-11-21T09:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.666101 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.666159 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.666188 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.666216 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.666234 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:45Z","lastTransitionTime":"2025-11-21T09:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.758517 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.758524 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.758580 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:45 crc kubenswrapper[4972]: E1121 09:41:45.759155 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:45 crc kubenswrapper[4972]: E1121 09:41:45.758959 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.758688 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:45 crc kubenswrapper[4972]: E1121 09:41:45.759323 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:45 crc kubenswrapper[4972]: E1121 09:41:45.759505 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.768804 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.768893 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.768921 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.768947 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.768966 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:45Z","lastTransitionTime":"2025-11-21T09:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.782135 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:45Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.804455 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-21T09:41:45Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.819322 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:45Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.843231 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:45Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.855686 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:45Z is after 2025-08-24T17:21:41Z" Nov 21 
09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.872014 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.872053 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.872065 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.872083 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.872094 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:45Z","lastTransitionTime":"2025-11-21T09:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.872299 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d577a086d8771d84ba9eef30f60dfda5f3e3b3973d2f2f5d106da15297313fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 
09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:45Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.885708 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:45Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.909774 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:41:36Z\\\",\\\"message\\\":\\\" 6446 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 09:41:36.090780 6446 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 09:41:36.090819 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:41:36.090824 6446 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:41:36.090877 6446 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:41:36.090900 6446 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 09:41:36.090912 6446 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:41:36.090916 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:41:36.090947 6446 factory.go:656] Stopping watch factory\\\\nI1121 09:41:36.090964 6446 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:41:36.091000 6446 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 09:41:36.091006 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:41:36.091012 6446 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:41:36.091018 6446 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:41:36.091022 6446 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 09:41:36.091028 6446 handler.go:208] Removed *v1.Node event handler 7\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxwhb_openshift-ovn-kubernetes(c159725e-4c82-4474-96d9-211f7d8db47f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:45Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.921131 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:45Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.932012 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:45Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.945684 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:45Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.963747 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:45Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.973965 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:45Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.976176 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.976217 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.976235 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.976261 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.976280 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:45Z","lastTransitionTime":"2025-11-21T09:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:45 crc kubenswrapper[4972]: I1121 09:41:45.985408 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:45Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.001881 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:45Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.019963 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:46Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.079487 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.079536 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.079553 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.079573 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.079588 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:46Z","lastTransitionTime":"2025-11-21T09:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.182930 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.182994 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.183011 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.183037 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.183054 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:46Z","lastTransitionTime":"2025-11-21T09:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.286365 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.286408 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.286420 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.286437 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.286450 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:46Z","lastTransitionTime":"2025-11-21T09:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.389896 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.389981 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.390006 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.390044 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.390069 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:46Z","lastTransitionTime":"2025-11-21T09:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.492790 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.492867 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.492880 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.492899 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.492911 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:46Z","lastTransitionTime":"2025-11-21T09:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.595613 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.595661 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.595671 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.595686 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.595698 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:46Z","lastTransitionTime":"2025-11-21T09:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.699072 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.699150 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.699184 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.699218 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.699240 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:46Z","lastTransitionTime":"2025-11-21T09:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.802317 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.802364 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.802374 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.802394 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.802405 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:46Z","lastTransitionTime":"2025-11-21T09:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.904925 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.904986 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.905005 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.905031 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:46 crc kubenswrapper[4972]: I1121 09:41:46.905052 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:46Z","lastTransitionTime":"2025-11-21T09:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.008284 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.008339 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.008351 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.008371 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.008383 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:47Z","lastTransitionTime":"2025-11-21T09:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.111072 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.111143 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.111162 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.111179 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.111189 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:47Z","lastTransitionTime":"2025-11-21T09:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.214454 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.214505 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.214514 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.214527 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.214538 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:47Z","lastTransitionTime":"2025-11-21T09:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.317795 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.317919 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.317944 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.317976 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.317999 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:47Z","lastTransitionTime":"2025-11-21T09:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.420771 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.420860 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.420880 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.420903 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.420919 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:47Z","lastTransitionTime":"2025-11-21T09:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.524461 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.524504 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.524515 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.524533 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.524546 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:47Z","lastTransitionTime":"2025-11-21T09:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.628734 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.628809 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.628861 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.628893 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.628922 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:47Z","lastTransitionTime":"2025-11-21T09:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.731914 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.731995 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.732024 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.732060 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.732084 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:47Z","lastTransitionTime":"2025-11-21T09:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.759284 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:47 crc kubenswrapper[4972]: E1121 09:41:47.759469 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.759627 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:47 crc kubenswrapper[4972]: E1121 09:41:47.759863 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.759923 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:47 crc kubenswrapper[4972]: E1121 09:41:47.760057 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.760150 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:47 crc kubenswrapper[4972]: E1121 09:41:47.760192 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.778983 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs\") pod \"network-metrics-daemon-k9mnh\" (UID: \"df5e96f4-727c-44c1-8e2f-e624c912430b\") " pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:47 crc kubenswrapper[4972]: E1121 09:41:47.779145 4972 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 09:41:47 crc kubenswrapper[4972]: E1121 09:41:47.779209 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs podName:df5e96f4-727c-44c1-8e2f-e624c912430b nodeName:}" failed. No retries permitted until 2025-11-21 09:42:03.779190283 +0000 UTC m=+68.888332781 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs") pod "network-metrics-daemon-k9mnh" (UID: "df5e96f4-727c-44c1-8e2f-e624c912430b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.834162 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.834212 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.834228 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.834248 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.834264 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:47Z","lastTransitionTime":"2025-11-21T09:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.937291 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.937785 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.938224 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.938587 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:47 crc kubenswrapper[4972]: I1121 09:41:47.939015 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:47Z","lastTransitionTime":"2025-11-21T09:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.041900 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.041973 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.041987 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.042042 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.042058 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:48Z","lastTransitionTime":"2025-11-21T09:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.145421 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.145500 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.145523 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.145555 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.145576 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:48Z","lastTransitionTime":"2025-11-21T09:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.247588 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.247650 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.247665 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.247690 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.248203 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:48Z","lastTransitionTime":"2025-11-21T09:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.350825 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.350897 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.350911 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.350935 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.350952 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:48Z","lastTransitionTime":"2025-11-21T09:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.453944 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.453998 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.454013 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.454032 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.454045 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:48Z","lastTransitionTime":"2025-11-21T09:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.557008 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.557080 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.557098 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.557123 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.557142 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:48Z","lastTransitionTime":"2025-11-21T09:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.660879 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.660929 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.660940 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.660960 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.660976 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:48Z","lastTransitionTime":"2025-11-21T09:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.764439 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.764947 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.765159 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.765384 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.765587 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:48Z","lastTransitionTime":"2025-11-21T09:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.869293 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.869377 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.869420 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.869455 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.869484 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:48Z","lastTransitionTime":"2025-11-21T09:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.972780 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.972880 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.972914 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.972948 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:48 crc kubenswrapper[4972]: I1121 09:41:48.972968 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:48Z","lastTransitionTime":"2025-11-21T09:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.075758 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.075880 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.075907 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.075939 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.075963 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:49Z","lastTransitionTime":"2025-11-21T09:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.179402 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.179469 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.179488 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.179515 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.179541 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:49Z","lastTransitionTime":"2025-11-21T09:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.283102 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.283197 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.283215 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.283240 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.283264 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:49Z","lastTransitionTime":"2025-11-21T09:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.386545 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.386601 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.386612 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.386628 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.386642 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:49Z","lastTransitionTime":"2025-11-21T09:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.489884 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.489961 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.489984 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.490016 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.490040 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:49Z","lastTransitionTime":"2025-11-21T09:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.496208 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:41:49 crc kubenswrapper[4972]: E1121 09:41:49.496381 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:42:21.496339577 +0000 UTC m=+86.605482105 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.593239 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.593305 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.593328 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.593357 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.593378 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:49Z","lastTransitionTime":"2025-11-21T09:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.597817 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.597977 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.598035 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.598087 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:49 crc kubenswrapper[4972]: E1121 09:41:49.598122 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 09:41:49 crc kubenswrapper[4972]: E1121 09:41:49.598165 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 09:41:49 crc kubenswrapper[4972]: E1121 09:41:49.598189 4972 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:49 crc kubenswrapper[4972]: E1121 09:41:49.598245 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 09:41:49 crc kubenswrapper[4972]: E1121 09:41:49.598261 4972 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 09:41:49 crc kubenswrapper[4972]: E1121 09:41:49.598279 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 09:41:49 crc kubenswrapper[4972]: E1121 09:41:49.598367 4972 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:49 crc kubenswrapper[4972]: E1121 09:41:49.598365 4972 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 09:41:49 crc kubenswrapper[4972]: E1121 09:41:49.598342 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 09:42:21.59831265 +0000 UTC m=+86.707455188 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:49 crc kubenswrapper[4972]: E1121 09:41:49.598442 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 09:42:21.598418773 +0000 UTC m=+86.707561261 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 09:41:49 crc kubenswrapper[4972]: E1121 09:41:49.598458 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 09:42:21.598452234 +0000 UTC m=+86.707594852 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:41:49 crc kubenswrapper[4972]: E1121 09:41:49.598477 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 09:42:21.598466695 +0000 UTC m=+86.707609193 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.696190 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.696250 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.696274 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.696308 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.696331 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:49Z","lastTransitionTime":"2025-11-21T09:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.759086 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:49 crc kubenswrapper[4972]: E1121 09:41:49.759250 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.759745 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:49 crc kubenswrapper[4972]: E1121 09:41:49.759863 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.760016 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:49 crc kubenswrapper[4972]: E1121 09:41:49.760198 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.760012 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:49 crc kubenswrapper[4972]: E1121 09:41:49.760556 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.799509 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.799606 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.799622 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.799647 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.799663 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:49Z","lastTransitionTime":"2025-11-21T09:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.902886 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.902947 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.902959 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.902981 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:49 crc kubenswrapper[4972]: I1121 09:41:49.902994 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:49Z","lastTransitionTime":"2025-11-21T09:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.011811 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.011899 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.011915 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.011942 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.011963 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:50Z","lastTransitionTime":"2025-11-21T09:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.114727 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.115219 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.115229 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.115242 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.115250 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:50Z","lastTransitionTime":"2025-11-21T09:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.218542 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.218590 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.218603 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.218621 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.218635 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:50Z","lastTransitionTime":"2025-11-21T09:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.321389 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.321449 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.321464 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.321487 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.321499 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:50Z","lastTransitionTime":"2025-11-21T09:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.424864 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.424933 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.424944 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.424965 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.424978 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:50Z","lastTransitionTime":"2025-11-21T09:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.528373 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.528415 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.528426 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.528443 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.528452 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:50Z","lastTransitionTime":"2025-11-21T09:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.632475 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.632571 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.632599 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.632636 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.632656 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:50Z","lastTransitionTime":"2025-11-21T09:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.735769 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.735809 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.735821 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.735853 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.735865 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:50Z","lastTransitionTime":"2025-11-21T09:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.799885 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.815488 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.823758 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:50Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.839288 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.839350 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.839373 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.839403 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.839425 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:50Z","lastTransitionTime":"2025-11-21T09:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.844602 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:50Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.865607 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:50Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.882868 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:50Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.901155 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:50Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.921918 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:50Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.934617 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:50Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.942440 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.942485 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.942501 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.942522 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.942536 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:50Z","lastTransitionTime":"2025-11-21T09:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.956119 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:50Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.976988 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:50Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:50 crc kubenswrapper[4972]: I1121 09:41:50.992482 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-21T09:41:50Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.010515 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:51Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.023972 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:51Z is after 2025-08-24T17:21:41Z" Nov 21 
09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.045959 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.046054 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.046073 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.046099 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.046115 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:51Z","lastTransitionTime":"2025-11-21T09:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.055394 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d577a086d8771d84ba9eef30f60dfda5f3e3b3973d2f2f5d106da15297313fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 
09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:51Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.072431 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:51Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.110782 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:41:36Z\\\",\\\"message\\\":\\\" 6446 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 09:41:36.090780 6446 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 09:41:36.090819 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:41:36.090824 6446 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:41:36.090877 6446 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:41:36.090900 6446 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 09:41:36.090912 6446 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:41:36.090916 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:41:36.090947 6446 factory.go:656] Stopping watch factory\\\\nI1121 09:41:36.090964 6446 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:41:36.091000 6446 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 09:41:36.091006 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:41:36.091012 6446 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:41:36.091018 6446 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:41:36.091022 6446 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 09:41:36.091028 6446 handler.go:208] Removed *v1.Node event handler 7\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxwhb_openshift-ovn-kubernetes(c159725e-4c82-4474-96d9-211f7d8db47f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:51Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.124869 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:51Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.149356 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.149414 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.149432 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.149457 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.149476 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:51Z","lastTransitionTime":"2025-11-21T09:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.253314 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.253356 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.253366 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.253384 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.253395 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:51Z","lastTransitionTime":"2025-11-21T09:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.356200 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.356266 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.356288 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.356318 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.356336 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:51Z","lastTransitionTime":"2025-11-21T09:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.459449 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.459486 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.459498 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.459515 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.459528 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:51Z","lastTransitionTime":"2025-11-21T09:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.561783 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.561872 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.561891 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.561913 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.561931 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:51Z","lastTransitionTime":"2025-11-21T09:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.667626 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.667704 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.667733 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.667782 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.667808 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:51Z","lastTransitionTime":"2025-11-21T09:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.758562 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.758695 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:51 crc kubenswrapper[4972]: E1121 09:41:51.758752 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:41:51 crc kubenswrapper[4972]: E1121 09:41:51.758959 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.759070 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:51 crc kubenswrapper[4972]: E1121 09:41:51.759139 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.759201 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:51 crc kubenswrapper[4972]: E1121 09:41:51.759329 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.769953 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.770003 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.770020 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.770043 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.770060 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:51Z","lastTransitionTime":"2025-11-21T09:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.873468 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.873528 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.873542 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.873562 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.873576 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:51Z","lastTransitionTime":"2025-11-21T09:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.976602 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.976669 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.976695 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.976726 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:51 crc kubenswrapper[4972]: I1121 09:41:51.976747 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:51Z","lastTransitionTime":"2025-11-21T09:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.080319 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.080379 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.080399 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.080423 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.080440 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:52Z","lastTransitionTime":"2025-11-21T09:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.183110 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.183417 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.183486 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.183562 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.183646 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:52Z","lastTransitionTime":"2025-11-21T09:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.286647 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.286687 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.286700 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.286716 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.286728 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:52Z","lastTransitionTime":"2025-11-21T09:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.390027 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.390359 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.390447 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.390553 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.390642 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:52Z","lastTransitionTime":"2025-11-21T09:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.494099 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.494148 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.494163 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.494181 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.494198 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:52Z","lastTransitionTime":"2025-11-21T09:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.596377 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.596446 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.596456 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.596470 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.596481 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:52Z","lastTransitionTime":"2025-11-21T09:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.700343 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.700404 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.700422 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.700447 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.700465 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:52Z","lastTransitionTime":"2025-11-21T09:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.803427 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.803482 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.803502 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.803526 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.803543 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:52Z","lastTransitionTime":"2025-11-21T09:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.905530 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.905598 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.905613 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.905628 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:52 crc kubenswrapper[4972]: I1121 09:41:52.905637 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:52Z","lastTransitionTime":"2025-11-21T09:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.009301 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.009378 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.009399 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.009424 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.009442 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:53Z","lastTransitionTime":"2025-11-21T09:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.111994 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.112028 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.112037 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.112054 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.112065 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:53Z","lastTransitionTime":"2025-11-21T09:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.214056 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.214112 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.214122 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.214138 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.214150 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:53Z","lastTransitionTime":"2025-11-21T09:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.316926 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.316997 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.317009 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.317047 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.317059 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:53Z","lastTransitionTime":"2025-11-21T09:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.419546 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.419596 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.419605 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.419621 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.419631 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:53Z","lastTransitionTime":"2025-11-21T09:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.523234 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.523308 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.523328 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.523355 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.523376 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:53Z","lastTransitionTime":"2025-11-21T09:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.627103 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.627174 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.627186 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.627205 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.627223 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:53Z","lastTransitionTime":"2025-11-21T09:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.730247 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.730322 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.730333 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.730349 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.730359 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:53Z","lastTransitionTime":"2025-11-21T09:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.758766 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.758823 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:53 crc kubenswrapper[4972]: E1121 09:41:53.758999 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.759021 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:53 crc kubenswrapper[4972]: E1121 09:41:53.759103 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:53 crc kubenswrapper[4972]: E1121 09:41:53.759262 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.759444 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:53 crc kubenswrapper[4972]: E1121 09:41:53.759636 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.832365 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.832396 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.832404 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.832416 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.832427 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:53Z","lastTransitionTime":"2025-11-21T09:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.935454 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.935510 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.935530 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.935572 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:53 crc kubenswrapper[4972]: I1121 09:41:53.935591 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:53Z","lastTransitionTime":"2025-11-21T09:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.039874 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.039913 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.039922 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.039938 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.039948 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:54Z","lastTransitionTime":"2025-11-21T09:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.143552 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.143600 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.143614 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.143634 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.143648 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:54Z","lastTransitionTime":"2025-11-21T09:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.246421 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.246461 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.246473 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.246488 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.246501 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:54Z","lastTransitionTime":"2025-11-21T09:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.349348 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.349412 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.349428 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.349450 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.349465 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:54Z","lastTransitionTime":"2025-11-21T09:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.452008 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.452058 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.452072 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.452088 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.452097 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:54Z","lastTransitionTime":"2025-11-21T09:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.554473 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.554507 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.554518 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.554535 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.554547 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:54Z","lastTransitionTime":"2025-11-21T09:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.657040 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.657105 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.657117 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.657156 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.657168 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:54Z","lastTransitionTime":"2025-11-21T09:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.760527 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.760574 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.760587 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.760602 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.760614 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:54Z","lastTransitionTime":"2025-11-21T09:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.863736 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.863869 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.863899 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.863935 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.863961 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:54Z","lastTransitionTime":"2025-11-21T09:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.967497 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.967557 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.967575 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.967602 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:54 crc kubenswrapper[4972]: I1121 09:41:54.967621 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:54Z","lastTransitionTime":"2025-11-21T09:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.070341 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.070383 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.070396 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.070412 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.070424 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:55Z","lastTransitionTime":"2025-11-21T09:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.172945 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.173003 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.173018 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.173038 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.173057 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:55Z","lastTransitionTime":"2025-11-21T09:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.275928 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.275990 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.276014 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.276046 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.276068 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:55Z","lastTransitionTime":"2025-11-21T09:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.379085 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.379166 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.379190 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.379222 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.379249 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:55Z","lastTransitionTime":"2025-11-21T09:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.482036 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.482080 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.482088 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.482103 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.482112 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:55Z","lastTransitionTime":"2025-11-21T09:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.560125 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.560183 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.560195 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.560215 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.560243 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:55Z","lastTransitionTime":"2025-11-21T09:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:55 crc kubenswrapper[4972]: E1121 09:41:55.574707 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:55Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.578586 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.578626 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.578642 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.578662 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.578676 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:55Z","lastTransitionTime":"2025-11-21T09:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:55 crc kubenswrapper[4972]: E1121 09:41:55.593011 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:55Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.597518 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.597593 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.597606 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.597624 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.597635 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:55Z","lastTransitionTime":"2025-11-21T09:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:55 crc kubenswrapper[4972]: E1121 09:41:55.612502 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:55Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.616922 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.616965 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.616979 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.617000 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.617012 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:55Z","lastTransitionTime":"2025-11-21T09:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:55 crc kubenswrapper[4972]: E1121 09:41:55.639998 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:55Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.646168 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.646205 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.646216 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.646233 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.646247 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:55Z","lastTransitionTime":"2025-11-21T09:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:55 crc kubenswrapper[4972]: E1121 09:41:55.665731 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:55Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:55 crc kubenswrapper[4972]: E1121 09:41:55.665929 4972 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.667889 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.667937 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.667951 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.667970 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.667983 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:55Z","lastTransitionTime":"2025-11-21T09:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.758472 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.758561 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:55 crc kubenswrapper[4972]: E1121 09:41:55.758591 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.758631 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.758476 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:55 crc kubenswrapper[4972]: E1121 09:41:55.758930 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:55 crc kubenswrapper[4972]: E1121 09:41:55.759062 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:55 crc kubenswrapper[4972]: E1121 09:41:55.759216 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.770572 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.770633 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.770659 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.770688 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.770712 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:55Z","lastTransitionTime":"2025-11-21T09:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.779458 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:55Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.791997 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:55Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.808041 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:55Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.820219 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:55Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.833591 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:55Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.848045 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:55Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.865658 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:55Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.873534 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.873605 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.873630 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.873662 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.873686 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:55Z","lastTransitionTime":"2025-11-21T09:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.889300 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:
41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:55Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.902726 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:55Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.921904 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d577a086d8771d84ba9eef30f60dfda5f3e3b3973d2f2f5d106da15297313fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 
09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:55Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.937132 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:55Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.957535 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:55Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.969191 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:55Z is after 2025-08-24T17:21:41Z" Nov 21 
09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.976546 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.976574 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.976582 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.976595 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.976605 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:55Z","lastTransitionTime":"2025-11-21T09:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.981554 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3201535-914d-45a5-bd2d-2d9e3d1b89ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2569e939b8254ed8f0c255ea14a65d7c4cfa4491a1d00722abd9e4412e29334c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e3dabfde6cfa4ac43cf07090dd319e83e402676216af847178710306ab8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9915311e4e9cae479e53ac0cf1243560d110dcfe1abc366ce37281d49e294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:55Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:55 crc kubenswrapper[4972]: I1121 09:41:55.990859 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:55Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.007099 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:41:36Z\\\",\\\"message\\\":\\\" 6446 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 09:41:36.090780 6446 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 09:41:36.090819 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:41:36.090824 6446 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:41:36.090877 6446 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:41:36.090900 6446 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 09:41:36.090912 6446 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:41:36.090916 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:41:36.090947 6446 factory.go:656] Stopping watch factory\\\\nI1121 09:41:36.090964 6446 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:41:36.091000 6446 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 09:41:36.091006 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:41:36.091012 6446 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:41:36.091018 6446 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:41:36.091022 6446 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 09:41:36.091028 6446 handler.go:208] Removed *v1.Node event handler 7\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxwhb_openshift-ovn-kubernetes(c159725e-4c82-4474-96d9-211f7d8db47f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:56Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.019322 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:56Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.078969 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.079003 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.079011 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.079026 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.079035 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:56Z","lastTransitionTime":"2025-11-21T09:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.182029 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.182084 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.182100 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.182122 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.182137 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:56Z","lastTransitionTime":"2025-11-21T09:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.285106 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.285143 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.285155 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.285173 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.285184 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:56Z","lastTransitionTime":"2025-11-21T09:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.388105 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.388173 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.388197 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.388228 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.388250 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:56Z","lastTransitionTime":"2025-11-21T09:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.491314 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.491374 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.491392 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.491417 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.491437 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:56Z","lastTransitionTime":"2025-11-21T09:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.593801 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.593902 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.593920 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.593945 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.593965 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:56Z","lastTransitionTime":"2025-11-21T09:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.697306 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.697354 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.697367 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.697386 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.697400 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:56Z","lastTransitionTime":"2025-11-21T09:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.800720 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.800778 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.800789 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.800808 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.800821 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:56Z","lastTransitionTime":"2025-11-21T09:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.903516 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.903564 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.903574 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.903595 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:56 crc kubenswrapper[4972]: I1121 09:41:56.903608 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:56Z","lastTransitionTime":"2025-11-21T09:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.006143 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.006192 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.006204 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.006224 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.006237 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:57Z","lastTransitionTime":"2025-11-21T09:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.109105 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.109143 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.109153 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.109172 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.109183 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:57Z","lastTransitionTime":"2025-11-21T09:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.211823 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.211923 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.211949 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.211988 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.212025 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:57Z","lastTransitionTime":"2025-11-21T09:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.315351 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.315397 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.315408 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.315428 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.315440 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:57Z","lastTransitionTime":"2025-11-21T09:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.417852 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.417890 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.417900 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.417918 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.417930 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:57Z","lastTransitionTime":"2025-11-21T09:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.519874 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.519901 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.519909 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.519922 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.519930 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:57Z","lastTransitionTime":"2025-11-21T09:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.529465 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.542103 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\
\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:57Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.556551 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:57Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.568036 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:57Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.582072 4972 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d577a086d8771d84ba9eef30f60dfda5f3e3b3973d
2f2f5d106da15297313fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:57Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.596510 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:57Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.609448 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:57Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.622374 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.622486 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.622514 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.622541 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.622556 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:57Z","lastTransitionTime":"2025-11-21T09:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.623081 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:57Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.635897 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3201535-914d-45a5-bd2d-2d9e3d1b89ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2569e939b8254ed8f0c255ea14a65d7c4cfa4491a1d00722abd9e4412e29334c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e3dabfde6cfa4ac43cf07090dd319e83e402676216af847178710306ab8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9915311e4e9cae479e53ac0cf1243560d110dcfe1abc366ce37281d49e294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:57Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.646383 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:57Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.666196 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:41:36Z\\\",\\\"message\\\":\\\" 6446 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 09:41:36.090780 6446 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 09:41:36.090819 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:41:36.090824 6446 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:41:36.090877 6446 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:41:36.090900 6446 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 09:41:36.090912 6446 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:41:36.090916 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:41:36.090947 6446 factory.go:656] Stopping watch factory\\\\nI1121 09:41:36.090964 6446 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:41:36.091000 6446 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 09:41:36.091006 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:41:36.091012 6446 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:41:36.091018 6446 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:41:36.091022 6446 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 09:41:36.091028 6446 handler.go:208] Removed *v1.Node event handler 7\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxwhb_openshift-ovn-kubernetes(c159725e-4c82-4474-96d9-211f7d8db47f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:57Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.678312 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:57Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.693688 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:57Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.707091 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:57Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.721031 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:57Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.724418 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.724617 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.724731 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.724869 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.724995 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:57Z","lastTransitionTime":"2025-11-21T09:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.733954 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:57Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.746628 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:57Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.758458 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.758522 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:57 crc kubenswrapper[4972]: E1121 09:41:57.758566 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:41:57 crc kubenswrapper[4972]: E1121 09:41:57.758638 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.758686 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.758747 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:57 crc kubenswrapper[4972]: E1121 09:41:57.758783 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:57 crc kubenswrapper[4972]: E1121 09:41:57.758931 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.759036 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:41:57Z is after 2025-08-24T17:21:41Z" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.827690 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.827766 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.827777 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.827798 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.827815 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:57Z","lastTransitionTime":"2025-11-21T09:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.930950 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.930991 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.931003 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.931026 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:57 crc kubenswrapper[4972]: I1121 09:41:57.931036 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:57Z","lastTransitionTime":"2025-11-21T09:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.034440 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.034540 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.034565 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.034600 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.034623 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:58Z","lastTransitionTime":"2025-11-21T09:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.138047 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.138112 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.138130 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.138170 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.138188 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:58Z","lastTransitionTime":"2025-11-21T09:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.241304 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.241671 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.241923 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.242157 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.242333 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:58Z","lastTransitionTime":"2025-11-21T09:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.345795 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.345894 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.345911 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.345938 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.345954 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:58Z","lastTransitionTime":"2025-11-21T09:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.449707 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.449773 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.449997 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.450064 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.450090 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:58Z","lastTransitionTime":"2025-11-21T09:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.553670 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.554182 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.554200 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.554225 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.554240 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:58Z","lastTransitionTime":"2025-11-21T09:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.657287 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.657353 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.657378 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.657409 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.657433 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:58Z","lastTransitionTime":"2025-11-21T09:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.759682 4972 scope.go:117] "RemoveContainer" containerID="8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.762786 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.763064 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.763205 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.763336 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.763467 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:58Z","lastTransitionTime":"2025-11-21T09:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.870997 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.871075 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.871100 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.871131 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.871157 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:58Z","lastTransitionTime":"2025-11-21T09:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.974872 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.974917 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.974931 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.974950 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:58 crc kubenswrapper[4972]: I1121 09:41:58.974964 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:58Z","lastTransitionTime":"2025-11-21T09:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.077915 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.077980 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.078004 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.078039 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.078061 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:59Z","lastTransitionTime":"2025-11-21T09:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.182174 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.182243 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.182258 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.182276 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.182292 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:59Z","lastTransitionTime":"2025-11-21T09:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.285319 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.285365 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.285378 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.285399 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.285411 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:59Z","lastTransitionTime":"2025-11-21T09:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.388793 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.388887 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.388905 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.388928 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.388946 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:59Z","lastTransitionTime":"2025-11-21T09:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.491252 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.491312 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.491330 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.491356 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.491375 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:59Z","lastTransitionTime":"2025-11-21T09:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.594386 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.594428 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.594440 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.594459 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.594469 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:59Z","lastTransitionTime":"2025-11-21T09:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.697380 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.697442 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.697461 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.697488 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.697515 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:59Z","lastTransitionTime":"2025-11-21T09:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.759408 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:41:59 crc kubenswrapper[4972]: E1121 09:41:59.759654 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.760050 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.760188 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:41:59 crc kubenswrapper[4972]: E1121 09:41:59.760196 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.760763 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:41:59 crc kubenswrapper[4972]: E1121 09:41:59.761073 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:41:59 crc kubenswrapper[4972]: E1121 09:41:59.761517 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.800585 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.800630 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.800642 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.800660 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.800674 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:59Z","lastTransitionTime":"2025-11-21T09:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.903020 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.903089 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.903105 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.903130 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:41:59 crc kubenswrapper[4972]: I1121 09:41:59.903147 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:41:59Z","lastTransitionTime":"2025-11-21T09:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.010100 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.010129 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.010138 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.010151 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.010160 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:00Z","lastTransitionTime":"2025-11-21T09:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.112563 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.112617 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.112630 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.112654 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.112668 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:00Z","lastTransitionTime":"2025-11-21T09:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.215146 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.215181 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.215191 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.215206 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.215216 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:00Z","lastTransitionTime":"2025-11-21T09:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.240514 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxwhb_c159725e-4c82-4474-96d9-211f7d8db47f/ovnkube-controller/1.log" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.242313 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerStarted","Data":"96a16c98413be1d1a5e757aa73e242a6e7d28e1b2fc3f032c98105d928949d6b"} Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.243087 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.260150 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":
\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:00Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.281015 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b
5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"nam
e\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:00Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.295536 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225
c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:00Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.321774 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.321863 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.321884 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.321921 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.321939 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:00Z","lastTransitionTime":"2025-11-21T09:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.322059 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:00Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.343085 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:00Z is after 2025-08-24T17:21:41Z" Nov 21 
09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.365096 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d577a086d8771d84ba9eef30f60dfda5f3e3b3973d2f2f5d106da15297313fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:00Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.384005 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:00Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.405927 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a16c98413be1d1a5e757aa73e242a6e7d28e1b2fc3f032c98105d928949d6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:41:36Z\\\",\\\"message\\\":\\\" 6446 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 09:41:36.090780 6446 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 09:41:36.090819 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:41:36.090824 6446 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:41:36.090877 6446 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:41:36.090900 6446 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 09:41:36.090912 6446 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:41:36.090916 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:41:36.090947 6446 factory.go:656] Stopping watch factory\\\\nI1121 09:41:36.090964 6446 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:41:36.091000 6446 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 09:41:36.091006 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:41:36.091012 6446 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:41:36.091018 6446 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:41:36.091022 6446 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 09:41:36.091028 6446 handler.go:208] Removed *v1.Node event handler 
7\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:00Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.425112 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.425175 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.425192 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.425217 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.425233 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:00Z","lastTransitionTime":"2025-11-21T09:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.427980 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:00Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.451687 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3201535-914d-45a5-bd2d-2d9e3d1b89ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2569e939b8254ed8f0c255ea14a65d7c4cfa4491a1d00722abd9e4412e29334c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e3dabfde6cfa4ac43cf07090dd319e83e402676216af847178710306ab8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9915311e4e9cae479e53ac0cf1243560d110dcfe1abc366ce37281d49e294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:00Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.469911 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:00Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.485196 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:00Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.499261 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:00Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.509964 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:00Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.526326 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:00Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.527045 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.527156 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.527259 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.527350 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.527427 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:00Z","lastTransitionTime":"2025-11-21T09:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.539768 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:00Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.551649 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:00Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.633463 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.633502 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.633511 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.633529 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.633539 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:00Z","lastTransitionTime":"2025-11-21T09:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.738425 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.738463 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.738474 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.738491 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.738501 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:00Z","lastTransitionTime":"2025-11-21T09:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.841294 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.841335 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.841345 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.841361 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.841370 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:00Z","lastTransitionTime":"2025-11-21T09:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.944482 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.944949 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.944966 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.944989 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:00 crc kubenswrapper[4972]: I1121 09:42:00.945005 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:00Z","lastTransitionTime":"2025-11-21T09:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.046928 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.047281 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.047483 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.047696 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.047990 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:01Z","lastTransitionTime":"2025-11-21T09:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.150914 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.151356 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.151580 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.151785 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.152035 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:01Z","lastTransitionTime":"2025-11-21T09:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.254867 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.255252 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.255467 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.255625 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.255746 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:01Z","lastTransitionTime":"2025-11-21T09:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.358593 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.358621 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.358630 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.358644 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.358652 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:01Z","lastTransitionTime":"2025-11-21T09:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.461892 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.461937 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.461953 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.461970 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.461980 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:01Z","lastTransitionTime":"2025-11-21T09:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.565205 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.565245 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.565281 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.565304 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.565315 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:01Z","lastTransitionTime":"2025-11-21T09:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.668062 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.668102 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.668114 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.668129 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.668139 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:01Z","lastTransitionTime":"2025-11-21T09:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.758908 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.758821 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:01 crc kubenswrapper[4972]: E1121 09:42:01.759100 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.759717 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:01 crc kubenswrapper[4972]: E1121 09:42:01.759906 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.759997 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:01 crc kubenswrapper[4972]: E1121 09:42:01.760026 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:01 crc kubenswrapper[4972]: E1121 09:42:01.760184 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.771493 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.771544 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.771557 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.771579 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.771596 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:01Z","lastTransitionTime":"2025-11-21T09:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.875039 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.875091 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.875105 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.875128 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.875141 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:01Z","lastTransitionTime":"2025-11-21T09:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.978447 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.978507 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.978520 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.978537 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:01 crc kubenswrapper[4972]: I1121 09:42:01.978552 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:01Z","lastTransitionTime":"2025-11-21T09:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.081262 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.081312 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.081333 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.081363 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.081381 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:02Z","lastTransitionTime":"2025-11-21T09:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.183604 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.183653 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.183666 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.183684 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.183697 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:02Z","lastTransitionTime":"2025-11-21T09:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.249875 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxwhb_c159725e-4c82-4474-96d9-211f7d8db47f/ovnkube-controller/2.log" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.250504 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxwhb_c159725e-4c82-4474-96d9-211f7d8db47f/ovnkube-controller/1.log" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.253146 4972 generic.go:334] "Generic (PLEG): container finished" podID="c159725e-4c82-4474-96d9-211f7d8db47f" containerID="96a16c98413be1d1a5e757aa73e242a6e7d28e1b2fc3f032c98105d928949d6b" exitCode=1 Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.253192 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerDied","Data":"96a16c98413be1d1a5e757aa73e242a6e7d28e1b2fc3f032c98105d928949d6b"} Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.253237 4972 scope.go:117] "RemoveContainer" containerID="8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.256984 4972 scope.go:117] "RemoveContainer" containerID="96a16c98413be1d1a5e757aa73e242a6e7d28e1b2fc3f032c98105d928949d6b" Nov 21 09:42:02 crc kubenswrapper[4972]: E1121 09:42:02.257166 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bxwhb_openshift-ovn-kubernetes(c159725e-4c82-4474-96d9-211f7d8db47f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.269129 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:02Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.280862 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:02Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.285797 4972 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.285864 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.285885 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.285905 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.285918 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:02Z","lastTransitionTime":"2025-11-21T09:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.295182 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:02Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.309841 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff44
85ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d577a086d8771d84ba9eef30f60dfda5f3e3b3973d2f2f5d106da15297313fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2025-11-21T09:42:02Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.325322 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:02Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.342454 4972 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:02Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.355532 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:02Z is after 2025-08-24T17:21:41Z" Nov 21 
09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.368239 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3201535-914d-45a5-bd2d-2d9e3d1b89ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2569e939b8254ed8f0c255ea14a65d7c4cfa4491a1d00722abd9e4412e29334c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e3dabfde6cfa4ac43cf07090dd319e83e402676216af847178710306ab8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9915311e4e9cae479e53ac0cf1243560d110dcfe1abc366ce37281d49e294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:02Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.380161 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:02Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.388620 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.388647 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.388656 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.388669 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.388679 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:02Z","lastTransitionTime":"2025-11-21T09:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.399619 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a16c98413be1d1a5e757aa73e242a6e7d28e1b
2fc3f032c98105d928949d6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:41:36Z\\\",\\\"message\\\":\\\" 6446 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 09:41:36.090780 6446 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 09:41:36.090819 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:41:36.090824 6446 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:41:36.090877 6446 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:41:36.090900 6446 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 09:41:36.090912 6446 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:41:36.090916 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:41:36.090947 6446 factory.go:656] Stopping watch factory\\\\nI1121 09:41:36.090964 6446 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:41:36.091000 6446 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 09:41:36.091006 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:41:36.091012 6446 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:41:36.091018 6446 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:41:36.091022 6446 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 09:41:36.091028 6446 handler.go:208] Removed *v1.Node event handler 7\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a16c98413be1d1a5e757aa73e242a6e7d28e1b2fc3f032c98105d928949d6b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:42:01Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 09:42:00.997792 6728 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:42:00.997813 6728 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:42:00.997844 6728 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:42:00.997850 6728 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:42:00.997875 6728 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 09:42:00.997876 6728 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:42:00.997907 6728 handler.go:208] Removed *v1.Node event handler 2\\\\nI1121 09:42:00.997914 6728 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:42:00.997926 6728 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 09:42:00.997938 6728 handler.go:208] Removed *v1.Node event handler 7\\\\nI1121 09:42:00.997949 6728 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:42:00.997957 6728 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 09:42:00.997978 6728 factory.go:656] Stopping watch 
factory\\\\nI1121 09:42:00.997999 6728 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:42:00.998024 6728 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:42:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\
\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:02Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.409650 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:02Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.420946 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:02Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.431418 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:02Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.442507 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:02Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.455402 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:02Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.466392 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:02Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.481025 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:02Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.491027 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.491051 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.491060 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.491100 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.491110 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:02Z","lastTransitionTime":"2025-11-21T09:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.595455 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.595516 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.595529 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.595548 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.595985 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:02Z","lastTransitionTime":"2025-11-21T09:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.699122 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.699173 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.699183 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.699199 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.699209 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:02Z","lastTransitionTime":"2025-11-21T09:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.774030 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.803440 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.803507 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.803532 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.803565 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.803594 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:02Z","lastTransitionTime":"2025-11-21T09:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.907361 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.907401 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.907411 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.907427 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:02 crc kubenswrapper[4972]: I1121 09:42:02.907438 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:02Z","lastTransitionTime":"2025-11-21T09:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.010590 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.010648 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.010669 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.010700 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.010717 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:03Z","lastTransitionTime":"2025-11-21T09:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.113422 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.113484 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.113497 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.113520 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.113537 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:03Z","lastTransitionTime":"2025-11-21T09:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.216656 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.216697 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.216708 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.216723 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.216733 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:03Z","lastTransitionTime":"2025-11-21T09:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.258320 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxwhb_c159725e-4c82-4474-96d9-211f7d8db47f/ovnkube-controller/2.log" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.319128 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.319173 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.319184 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.319202 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.319217 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:03Z","lastTransitionTime":"2025-11-21T09:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.422165 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.422207 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.422217 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.422234 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.422244 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:03Z","lastTransitionTime":"2025-11-21T09:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.525298 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.525342 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.525352 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.525367 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.525377 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:03Z","lastTransitionTime":"2025-11-21T09:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.627515 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.627557 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.627566 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.627583 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.627596 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:03Z","lastTransitionTime":"2025-11-21T09:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.730762 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.730816 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.730845 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.730865 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.730878 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:03Z","lastTransitionTime":"2025-11-21T09:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.759082 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.759091 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.759150 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.759175 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:03 crc kubenswrapper[4972]: E1121 09:42:03.759296 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:03 crc kubenswrapper[4972]: E1121 09:42:03.759479 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:03 crc kubenswrapper[4972]: E1121 09:42:03.759544 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:03 crc kubenswrapper[4972]: E1121 09:42:03.759658 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.785176 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs\") pod \"network-metrics-daemon-k9mnh\" (UID: \"df5e96f4-727c-44c1-8e2f-e624c912430b\") " pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:03 crc kubenswrapper[4972]: E1121 09:42:03.785392 4972 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 09:42:03 crc kubenswrapper[4972]: E1121 09:42:03.785483 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs podName:df5e96f4-727c-44c1-8e2f-e624c912430b nodeName:}" failed. No retries permitted until 2025-11-21 09:42:35.785460316 +0000 UTC m=+100.894602864 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs") pod "network-metrics-daemon-k9mnh" (UID: "df5e96f4-727c-44c1-8e2f-e624c912430b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.832874 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.832934 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.832951 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.832972 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.832988 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:03Z","lastTransitionTime":"2025-11-21T09:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.936547 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.936594 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.936610 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.936628 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:03 crc kubenswrapper[4972]: I1121 09:42:03.936641 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:03Z","lastTransitionTime":"2025-11-21T09:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.039434 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.039484 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.039495 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.039516 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.039527 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:04Z","lastTransitionTime":"2025-11-21T09:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.145709 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.145751 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.145767 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.145785 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.145876 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:04Z","lastTransitionTime":"2025-11-21T09:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.248676 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.248718 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.248727 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.248741 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.248750 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:04Z","lastTransitionTime":"2025-11-21T09:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.351193 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.351238 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.351252 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.351271 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.351286 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:04Z","lastTransitionTime":"2025-11-21T09:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.453915 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.453966 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.453976 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.453993 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.454001 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:04Z","lastTransitionTime":"2025-11-21T09:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.556721 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.556790 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.556803 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.556866 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.556884 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:04Z","lastTransitionTime":"2025-11-21T09:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.659231 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.659281 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.659296 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.659315 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.659327 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:04Z","lastTransitionTime":"2025-11-21T09:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.761437 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.761479 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.761489 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.761504 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.761518 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:04Z","lastTransitionTime":"2025-11-21T09:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.863791 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.863850 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.863862 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.863875 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.863884 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:04Z","lastTransitionTime":"2025-11-21T09:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.966417 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.966457 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.966468 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.966484 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:04 crc kubenswrapper[4972]: I1121 09:42:04.966495 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:04Z","lastTransitionTime":"2025-11-21T09:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.069797 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.069878 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.069893 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.069912 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.069926 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:05Z","lastTransitionTime":"2025-11-21T09:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.171960 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.172007 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.172018 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.172033 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.172044 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:05Z","lastTransitionTime":"2025-11-21T09:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.274969 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.275011 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.275027 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.275051 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.275068 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:05Z","lastTransitionTime":"2025-11-21T09:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.377711 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.377752 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.377764 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.377785 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.377797 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:05Z","lastTransitionTime":"2025-11-21T09:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.479921 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.479961 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.479973 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.479993 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.480005 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:05Z","lastTransitionTime":"2025-11-21T09:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.581515 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.581552 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.581566 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.581580 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.581590 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:05Z","lastTransitionTime":"2025-11-21T09:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.683476 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.683517 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.683526 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.683540 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.683550 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:05Z","lastTransitionTime":"2025-11-21T09:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.759261 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.759354 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.759509 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:05 crc kubenswrapper[4972]: E1121 09:42:05.759507 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.759540 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:05 crc kubenswrapper[4972]: E1121 09:42:05.759637 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:05 crc kubenswrapper[4972]: E1121 09:42:05.759704 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:05 crc kubenswrapper[4972]: E1121 09:42:05.760023 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.776177 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d577a086d8771d84ba9eef30f60dfda5f3e3b3973d2f2f5d106da15297313fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:05Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.786361 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.786436 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.786448 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.786467 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.786480 4972 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:05Z","lastTransitionTime":"2025-11-21T09:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.788908 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:05Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.806801 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:05Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.822329 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:05Z is after 2025-08-24T17:21:41Z" Nov 21 
09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.832636 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3201535-914d-45a5-bd2d-2d9e3d1b89ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2569e939b8254ed8f0c255ea14a65d7c4cfa4491a1d00722abd9e4412e29334c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e3dabfde6cfa4ac43cf07090dd319e83e402676216af847178710306ab8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9915311e4e9cae479e53ac0cf1243560d110dcfe1abc366ce37281d49e294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:05Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.840632 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:05Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.857915 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a16c98413be1d1a5e757aa73e242a6e7d28e1b
2fc3f032c98105d928949d6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:41:36Z\\\",\\\"message\\\":\\\" 6446 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 09:41:36.090780 6446 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 09:41:36.090819 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:41:36.090824 6446 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:41:36.090877 6446 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:41:36.090900 6446 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 09:41:36.090912 6446 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:41:36.090916 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:41:36.090947 6446 factory.go:656] Stopping watch factory\\\\nI1121 09:41:36.090964 6446 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:41:36.091000 6446 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 09:41:36.091006 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:41:36.091012 6446 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:41:36.091018 6446 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:41:36.091022 6446 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 09:41:36.091028 6446 handler.go:208] Removed *v1.Node event handler 7\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a16c98413be1d1a5e757aa73e242a6e7d28e1b2fc3f032c98105d928949d6b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:42:01Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 09:42:00.997792 6728 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:42:00.997813 6728 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:42:00.997844 6728 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:42:00.997850 6728 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:42:00.997875 6728 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 09:42:00.997876 6728 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:42:00.997907 6728 handler.go:208] Removed *v1.Node event handler 2\\\\nI1121 09:42:00.997914 6728 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:42:00.997926 6728 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 09:42:00.997938 6728 handler.go:208] Removed *v1.Node event handler 7\\\\nI1121 09:42:00.997949 6728 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:42:00.997957 6728 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 09:42:00.997978 6728 factory.go:656] Stopping watch 
factory\\\\nI1121 09:42:00.997999 6728 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:42:00.998024 6728 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:42:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\
\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:05Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.868654 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:05Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.878128 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30325c44-ba7b-46ae-8a97-6b61aa169366\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e8457b59ef21238dc544bad22e50462262f2a8dccb77f227e3b71c0e42a00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:05Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.889316 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.889354 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.889366 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.889384 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.889400 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:05Z","lastTransitionTime":"2025-11-21T09:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.895778 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:05Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.906330 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:05Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.916633 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:05Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.929124 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:05Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.938183 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:05Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.954267 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:05Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.961767 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.961796 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.961805 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.961821 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.961874 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:05Z","lastTransitionTime":"2025-11-21T09:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.967186 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:05Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.978439 4972 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:05Z is after 2025-08-24T17:21:41Z" Nov 21 
09:42:05 crc kubenswrapper[4972]: E1121 09:42:05.986073 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:05Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.989627 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.989707 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:05Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.989747 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.989788 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.989805 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:05 crc kubenswrapper[4972]: I1121 09:42:05.989815 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:05Z","lastTransitionTime":"2025-11-21T09:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:06 crc kubenswrapper[4972]: E1121 09:42:06.006815 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:06Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.010713 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.010804 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.010819 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.010852 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.010867 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:06Z","lastTransitionTime":"2025-11-21T09:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:06 crc kubenswrapper[4972]: E1121 09:42:06.024454 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:06Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.027902 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.027938 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.027949 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.027967 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.027979 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:06Z","lastTransitionTime":"2025-11-21T09:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:06 crc kubenswrapper[4972]: E1121 09:42:06.041022 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:06Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.045043 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.045300 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.045373 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.045464 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.045552 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:06Z","lastTransitionTime":"2025-11-21T09:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:06 crc kubenswrapper[4972]: E1121 09:42:06.059050 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:06Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:06 crc kubenswrapper[4972]: E1121 09:42:06.059164 4972 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.060702 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.060734 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.060743 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.060759 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.060769 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:06Z","lastTransitionTime":"2025-11-21T09:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.162903 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.163504 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.163600 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.163718 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.163931 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:06Z","lastTransitionTime":"2025-11-21T09:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.266708 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.266783 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.266799 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.266818 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.266867 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:06Z","lastTransitionTime":"2025-11-21T09:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.369685 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.370002 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.370390 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.370433 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.370457 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:06Z","lastTransitionTime":"2025-11-21T09:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.473608 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.473667 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.473716 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.473764 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.473784 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:06Z","lastTransitionTime":"2025-11-21T09:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.575803 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.575845 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.575854 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.575867 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.575875 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:06Z","lastTransitionTime":"2025-11-21T09:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.679133 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.679173 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.679185 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.679202 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.679214 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:06Z","lastTransitionTime":"2025-11-21T09:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.781890 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.781923 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.781931 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.781944 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.781953 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:06Z","lastTransitionTime":"2025-11-21T09:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.884320 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.884363 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.884375 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.884392 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.884404 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:06Z","lastTransitionTime":"2025-11-21T09:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.987301 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.987344 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.987356 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.987373 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:06 crc kubenswrapper[4972]: I1121 09:42:06.987385 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:06Z","lastTransitionTime":"2025-11-21T09:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.089740 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.089798 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.089810 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.089825 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.089847 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:07Z","lastTransitionTime":"2025-11-21T09:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.192500 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.192556 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.192565 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.192584 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.192599 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:07Z","lastTransitionTime":"2025-11-21T09:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.294739 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.294795 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.294815 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.294873 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.294901 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:07Z","lastTransitionTime":"2025-11-21T09:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.397771 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.397865 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.397879 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.397897 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.397910 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:07Z","lastTransitionTime":"2025-11-21T09:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.500691 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.500734 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.500743 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.500757 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.500767 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:07Z","lastTransitionTime":"2025-11-21T09:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.602652 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.602691 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.602703 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.602719 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.602731 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:07Z","lastTransitionTime":"2025-11-21T09:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.705316 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.705369 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.705381 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.705415 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.705438 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:07Z","lastTransitionTime":"2025-11-21T09:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.758677 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:07 crc kubenswrapper[4972]: E1121 09:42:07.758842 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.759020 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.759067 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.759088 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:07 crc kubenswrapper[4972]: E1121 09:42:07.759177 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:07 crc kubenswrapper[4972]: E1121 09:42:07.759279 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:07 crc kubenswrapper[4972]: E1121 09:42:07.759328 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.808497 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.808785 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.808890 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.808985 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.809087 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:07Z","lastTransitionTime":"2025-11-21T09:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.912652 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.912996 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.913102 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.913188 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:07 crc kubenswrapper[4972]: I1121 09:42:07.913262 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:07Z","lastTransitionTime":"2025-11-21T09:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.016172 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.016207 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.016219 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.016236 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.016246 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:08Z","lastTransitionTime":"2025-11-21T09:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.118546 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.118578 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.118586 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.118600 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.118611 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:08Z","lastTransitionTime":"2025-11-21T09:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.221727 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.221775 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.221790 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.221809 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.221822 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:08Z","lastTransitionTime":"2025-11-21T09:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.324384 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.324439 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.324451 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.324470 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.324482 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:08Z","lastTransitionTime":"2025-11-21T09:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.426784 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.426879 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.426900 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.426926 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.426942 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:08Z","lastTransitionTime":"2025-11-21T09:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.533606 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.534416 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.534435 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.534464 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.534483 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:08Z","lastTransitionTime":"2025-11-21T09:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.636259 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.636301 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.636311 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.636324 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.636333 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:08Z","lastTransitionTime":"2025-11-21T09:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.738535 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.738580 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.738591 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.738606 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.738617 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:08Z","lastTransitionTime":"2025-11-21T09:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.841315 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.841379 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.841396 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.841425 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.841447 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:08Z","lastTransitionTime":"2025-11-21T09:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.945043 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.945097 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.945113 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.945136 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:08 crc kubenswrapper[4972]: I1121 09:42:08.945155 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:08Z","lastTransitionTime":"2025-11-21T09:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.047289 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.047359 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.047381 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.047409 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.047427 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:09Z","lastTransitionTime":"2025-11-21T09:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.149984 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.150050 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.150070 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.150095 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.150115 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:09Z","lastTransitionTime":"2025-11-21T09:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.252778 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.252820 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.252856 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.252876 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.252887 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:09Z","lastTransitionTime":"2025-11-21T09:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.356140 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.356202 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.356220 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.356245 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.356264 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:09Z","lastTransitionTime":"2025-11-21T09:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.458678 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.458718 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.458730 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.458747 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.458759 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:09Z","lastTransitionTime":"2025-11-21T09:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.561082 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.561128 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.561144 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.561165 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.561181 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:09Z","lastTransitionTime":"2025-11-21T09:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.663860 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.663892 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.663901 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.663914 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.663924 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:09Z","lastTransitionTime":"2025-11-21T09:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.759263 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.759447 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.759515 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:09 crc kubenswrapper[4972]: E1121 09:42:09.759509 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.759578 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:09 crc kubenswrapper[4972]: E1121 09:42:09.759714 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:09 crc kubenswrapper[4972]: E1121 09:42:09.759819 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:09 crc kubenswrapper[4972]: E1121 09:42:09.760013 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.766403 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.766427 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.766437 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.766451 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.766459 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:09Z","lastTransitionTime":"2025-11-21T09:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.869992 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.870025 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.870035 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.870049 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.870059 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:09Z","lastTransitionTime":"2025-11-21T09:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.972236 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.972277 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.972286 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.972301 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:09 crc kubenswrapper[4972]: I1121 09:42:09.972311 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:09Z","lastTransitionTime":"2025-11-21T09:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.074776 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.074823 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.074846 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.074862 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.074873 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:10Z","lastTransitionTime":"2025-11-21T09:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.177639 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.177689 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.177705 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.177728 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.177745 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:10Z","lastTransitionTime":"2025-11-21T09:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.279726 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.279767 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.279778 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.279795 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.279808 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:10Z","lastTransitionTime":"2025-11-21T09:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.382359 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.382425 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.382444 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.382477 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.382501 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:10Z","lastTransitionTime":"2025-11-21T09:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.484441 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.484520 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.484543 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.484574 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.484595 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:10Z","lastTransitionTime":"2025-11-21T09:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.586112 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.586142 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.586151 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.586164 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.586195 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:10Z","lastTransitionTime":"2025-11-21T09:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.688432 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.688468 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.688477 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.688494 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.688503 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:10Z","lastTransitionTime":"2025-11-21T09:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.790967 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.791020 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.791033 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.791051 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.791062 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:10Z","lastTransitionTime":"2025-11-21T09:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.893541 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.893618 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.893632 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.893650 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.893661 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:10Z","lastTransitionTime":"2025-11-21T09:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.996232 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.996289 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.996300 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.996315 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:10 crc kubenswrapper[4972]: I1121 09:42:10.996327 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:10Z","lastTransitionTime":"2025-11-21T09:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.099112 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.099167 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.099182 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.099203 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.099216 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:11Z","lastTransitionTime":"2025-11-21T09:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.201820 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.201881 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.201893 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.201910 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.201921 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:11Z","lastTransitionTime":"2025-11-21T09:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.284774 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bgtmb_ff4929f7-ed2f-4332-af3c-31b2333bda3d/kube-multus/0.log" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.284819 4972 generic.go:334] "Generic (PLEG): container finished" podID="ff4929f7-ed2f-4332-af3c-31b2333bda3d" containerID="a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc" exitCode=1 Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.284868 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bgtmb" event={"ID":"ff4929f7-ed2f-4332-af3c-31b2333bda3d","Type":"ContainerDied","Data":"a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc"} Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.285221 4972 scope.go:117] "RemoveContainer" containerID="a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.298247 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:42:11Z\\\",\\\"message\\\":\\\"2025-11-21T09:41:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4753cca9-ce1b-4d13-8580-1b908ffbc7a9\\\\n2025-11-21T09:41:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4753cca9-ce1b-4d13-8580-1b908ffbc7a9 to /host/opt/cni/bin/\\\\n2025-11-21T09:41:25Z [verbose] multus-daemon started\\\\n2025-11-21T09:41:25Z [verbose] Readiness Indicator file check\\\\n2025-11-21T09:42:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:11Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.303549 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.303574 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.303583 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.303597 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.303606 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:11Z","lastTransitionTime":"2025-11-21T09:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.312730 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11
-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bd
bc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:11Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.323952 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf
108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:11Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.337585 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\"
,\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d577a086d8771d84ba9eef30f60dfda5f3e3b3973d2f2f5d106da15297313fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 
09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:11Z is after 2025-08-24T17:21:41Z" Nov 21 
09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.353171 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:11Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.367159 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:11Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.379318 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:11Z is after 2025-08-24T17:21:41Z" Nov 21 
09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.389559 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30325c44-ba7b-46ae-8a97-6b61aa169366\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e8457b59ef21238dc544bad22e50462262f2a8dccb77f227e3b71c0e42a00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:11Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.401290 4972 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3201535-914d-45a5-bd2d-2d9e3d1b89ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2569e939b8254ed8f0c255ea14a65d7c4cfa4491a1d00722abd9e4412e29334c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e3dabfde6cfa4ac43cf07090dd319e83e402676216af847178710306ab8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9915311e4e9cae479e53ac0cf1243560d110dcfe1abc366ce37281d49e294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\
\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:11Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.405616 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.405653 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.405664 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.405682 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.405694 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:11Z","lastTransitionTime":"2025-11-21T09:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.413801 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:11Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.432495 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a16c98413be1d1a5e757aa73e242a6e7d28e1b2fc3f032c98105d928949d6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:41:36Z\\\",\\\"message\\\":\\\" 6446 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 09:41:36.090780 6446 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 09:41:36.090819 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:41:36.090824 6446 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:41:36.090877 6446 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:41:36.090900 6446 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 09:41:36.090912 6446 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:41:36.090916 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:41:36.090947 6446 factory.go:656] Stopping watch factory\\\\nI1121 09:41:36.090964 6446 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:41:36.091000 6446 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 09:41:36.091006 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:41:36.091012 6446 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:41:36.091018 6446 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:41:36.091022 6446 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 09:41:36.091028 6446 handler.go:208] Removed *v1.Node event handler 7\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a16c98413be1d1a5e757aa73e242a6e7d28e1b2fc3f032c98105d928949d6b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:42:01Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 09:42:00.997792 6728 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:42:00.997813 
6728 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:42:00.997844 6728 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:42:00.997850 6728 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:42:00.997875 6728 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 09:42:00.997876 6728 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:42:00.997907 6728 handler.go:208] Removed *v1.Node event handler 2\\\\nI1121 09:42:00.997914 6728 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:42:00.997926 6728 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 09:42:00.997938 6728 handler.go:208] Removed *v1.Node event handler 7\\\\nI1121 09:42:00.997949 6728 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:42:00.997957 6728 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 09:42:00.997978 6728 factory.go:656] Stopping watch factory\\\\nI1121 09:42:00.997999 6728 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:42:00.998024 6728 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:42:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:11Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.444495 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:11Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.457712 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:11Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.470618 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:11Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.487508 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:11Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.501695 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:11Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.508096 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.508135 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.508147 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.508163 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.508176 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:11Z","lastTransitionTime":"2025-11-21T09:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.519022 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:11Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.533151 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:11Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.610972 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.611044 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.611055 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.611072 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.611083 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:11Z","lastTransitionTime":"2025-11-21T09:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.714444 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.714485 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.714497 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.714515 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.714524 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:11Z","lastTransitionTime":"2025-11-21T09:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.758866 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.759134 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:11 crc kubenswrapper[4972]: E1121 09:42:11.759322 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.759371 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:11 crc kubenswrapper[4972]: E1121 09:42:11.759542 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.759648 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:11 crc kubenswrapper[4972]: E1121 09:42:11.759755 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:11 crc kubenswrapper[4972]: E1121 09:42:11.759870 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.817571 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.817614 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.817629 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.817649 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.817661 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:11Z","lastTransitionTime":"2025-11-21T09:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.921187 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.921248 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.921262 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.921284 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:11 crc kubenswrapper[4972]: I1121 09:42:11.921297 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:11Z","lastTransitionTime":"2025-11-21T09:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.024158 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.024217 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.024232 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.024250 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.024265 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:12Z","lastTransitionTime":"2025-11-21T09:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.127128 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.127171 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.127181 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.127195 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.127204 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:12Z","lastTransitionTime":"2025-11-21T09:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.230547 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.230630 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.230649 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.230676 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.230696 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:12Z","lastTransitionTime":"2025-11-21T09:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.289811 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bgtmb_ff4929f7-ed2f-4332-af3c-31b2333bda3d/kube-multus/0.log" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.289920 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bgtmb" event={"ID":"ff4929f7-ed2f-4332-af3c-31b2333bda3d","Type":"ContainerStarted","Data":"cb23c96662a648e35c4f92c6c695ad3b57dc5fb40f72efdad7a6a2910907a9ce"} Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.305536 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.315930 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.328759 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.333620 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.333675 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.333693 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.333715 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.333732 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:12Z","lastTransitionTime":"2025-11-21T09:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.341946 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.356576 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.373534 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.389488 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb23c96662a648e35c4f92c6c695ad3b57dc5fb40f72efdad7a6a2910907a9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:42:11Z\\\",\\\"message\\\":\\\"2025-11-21T09:41:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4753cca9-ce1b-4d13-8580-1b908ffbc7a9\\\\n2025-11-21T09:41:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4753cca9-ce1b-4d13-8580-1b908ffbc7a9 to /host/opt/cni/bin/\\\\n2025-11-21T09:41:25Z [verbose] multus-daemon started\\\\n2025-11-21T09:41:25Z [verbose] Readiness Indicator file check\\\\n2025-11-21T09:42:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.406531 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.418172 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.429377 4972 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.439975 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.440005 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.440013 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.440028 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.440039 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:12Z","lastTransitionTime":"2025-11-21T09:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.442249 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d577a086d8771d84ba9eef30f60dfda5f3e3b3973d2f2f5d106da15297313fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] 
\\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.456037 4972 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.469400 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.479534 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.489863 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30325c44-ba7b-46ae-8a97-6b61aa169366\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e8457b59ef21238dc544bad22e50462262f2a8dccb77f227e3b71c0e42a00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.502970 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3201535-914d-45a5-bd2d-2d9e3d1b89ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2569e939b8254ed8f0c255ea14a65d7c4cfa4491a1d00722abd9e4412e29334c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e3dabfde6cfa4ac43cf07090dd319e83e402676216af847178710306ab8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9915311e4e9cae479e53ac0cf1243560d110dcfe1abc366ce37281d49e294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.515636 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.537529 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics
-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a16c98413be1d1a5e757aa73e242a6e7d28e1b2fc3f032c98105d928949d6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f7f54295494552443c6694bd9a9c1ffaf360a018baf2e4475f409304f37005a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:41:36Z\\\",\\\"message\\\":\\\" 6446 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 09:41:36.090780 6446 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 09:41:36.090819 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:41:36.090824 6446 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:41:36.090877 6446 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:41:36.090900 6446 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 09:41:36.090912 6446 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:41:36.090916 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:41:36.090947 6446 factory.go:656] Stopping watch factory\\\\nI1121 09:41:36.090964 6446 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:41:36.091000 6446 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 09:41:36.091006 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:41:36.091012 6446 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:41:36.091018 6446 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:41:36.091022 6446 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 09:41:36.091028 6446 handler.go:208] Removed *v1.Node event handler 
7\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a16c98413be1d1a5e757aa73e242a6e7d28e1b2fc3f032c98105d928949d6b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:42:01Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 09:42:00.997792 6728 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:42:00.997813 6728 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:42:00.997844 6728 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:42:00.997850 6728 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:42:00.997875 6728 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 09:42:00.997876 6728 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:42:00.997907 6728 handler.go:208] Removed *v1.Node event handler 2\\\\nI1121 09:42:00.997914 6728 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:42:00.997926 6728 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 09:42:00.997938 6728 handler.go:208] Removed *v1.Node event handler 7\\\\nI1121 09:42:00.997949 6728 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:42:00.997957 6728 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 09:42:00.997978 6728 factory.go:656] Stopping watch factory\\\\nI1121 09:42:00.997999 6728 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:42:00.998024 6728 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 
09:42:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.542380 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.542414 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.542424 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.542484 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.542501 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:12Z","lastTransitionTime":"2025-11-21T09:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.645155 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.645229 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.645250 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.645277 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.645294 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:12Z","lastTransitionTime":"2025-11-21T09:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.749163 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.749251 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.749268 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.749292 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.749308 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:12Z","lastTransitionTime":"2025-11-21T09:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.759549 4972 scope.go:117] "RemoveContainer" containerID="96a16c98413be1d1a5e757aa73e242a6e7d28e1b2fc3f032c98105d928949d6b" Nov 21 09:42:12 crc kubenswrapper[4972]: E1121 09:42:12.759819 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bxwhb_openshift-ovn-kubernetes(c159725e-4c82-4474-96d9-211f7d8db47f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.777364 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.795784 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb23c96662a648e35c4f92c6c695ad3b57dc5fb40f72efdad7a6a2910907a9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:42:11Z\\\",\\\"message\\\":\\\"2025-11-21T09:41:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4753cca9-ce1b-4d13-8580-1b908ffbc7a9\\\\n2025-11-21T09:41:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4753cca9-ce1b-4d13-8580-1b908ffbc7a9 to /host/opt/cni/bin/\\\\n2025-11-21T09:41:25Z [verbose] multus-daemon started\\\\n2025-11-21T09:41:25Z [verbose] Readiness Indicator file check\\\\n2025-11-21T09:42:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.818264 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.835380 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.849796 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.851300 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.851336 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.851347 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.851369 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.851381 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:12Z","lastTransitionTime":"2025-11-21T09:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.864781 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.888506 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d577a086d8771d84ba9eef30f60dfda5f3e3b3973d2f2f5d106da15297313fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.903076 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.923329 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a16c98413be1d1a5e757aa73e242a6e7d28e1b2fc3f032c98105d928949d6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a16c98413be1d1a5e757aa73e242a6e7d28e1b2fc3f032c98105d928949d6b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:42:01Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 09:42:00.997792 6728 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:42:00.997813 6728 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:42:00.997844 6728 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:42:00.997850 6728 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:42:00.997875 6728 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 09:42:00.997876 6728 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:42:00.997907 6728 handler.go:208] Removed *v1.Node event handler 2\\\\nI1121 09:42:00.997914 6728 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:42:00.997926 6728 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 09:42:00.997938 6728 handler.go:208] Removed *v1.Node event handler 7\\\\nI1121 09:42:00.997949 6728 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:42:00.997957 6728 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 09:42:00.997978 6728 factory.go:656] Stopping watch factory\\\\nI1121 09:42:00.997999 6728 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:42:00.998024 6728 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:42:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxwhb_openshift-ovn-kubernetes(c159725e-4c82-4474-96d9-211f7d8db47f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.937992 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.947912 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30325c44-ba7b-46ae-8a97-6b61aa169366\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e8457b59ef21238dc544bad22e50462262f2a8dccb77f227e3b71c0e42a00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.955025 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.955297 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.955541 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.955764 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.956028 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:12Z","lastTransitionTime":"2025-11-21T09:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.968695 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3201535-914d-45a5-bd2d-2d9e3d1b89ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2569e939b8254ed8f0c255ea14a65d7c4cfa4491a1d00722abd9e4412e29334c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e3dabfde6cfa4ac43cf07090dd319e83e402676216af847178710306ab8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9915311e4e9cae479e53ac0cf1243560d110dcfe1abc366ce37281d49e294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:12 crc kubenswrapper[4972]: I1121 09:42:12.984478 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:12Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.007756 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:13Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.024900 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:13Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.036624 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:13Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.050600 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:13Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.059456 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.059498 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.059509 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.059526 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.059539 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:13Z","lastTransitionTime":"2025-11-21T09:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.062193 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:13Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.161901 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.162165 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.162269 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.162358 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.162513 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:13Z","lastTransitionTime":"2025-11-21T09:42:13Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.265543 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.265583 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.265595 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.265611 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.265622 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:13Z","lastTransitionTime":"2025-11-21T09:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.367933 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.368876 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.369017 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.369156 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.369275 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:13Z","lastTransitionTime":"2025-11-21T09:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.472439 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.472481 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.472493 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.472509 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.472517 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:13Z","lastTransitionTime":"2025-11-21T09:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.574516 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.574553 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.574560 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.574574 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.574583 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:13Z","lastTransitionTime":"2025-11-21T09:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.676738 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.676789 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.676801 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.676819 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.676849 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:13Z","lastTransitionTime":"2025-11-21T09:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.759118 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.759175 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.759243 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.759116 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:13 crc kubenswrapper[4972]: E1121 09:42:13.759302 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:13 crc kubenswrapper[4972]: E1121 09:42:13.759433 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:13 crc kubenswrapper[4972]: E1121 09:42:13.759650 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:13 crc kubenswrapper[4972]: E1121 09:42:13.759717 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.779313 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.779358 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.779371 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.779389 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.779402 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:13Z","lastTransitionTime":"2025-11-21T09:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.882282 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.882363 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.882373 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.882389 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.882398 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:13Z","lastTransitionTime":"2025-11-21T09:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.985161 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.985220 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.985239 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.985263 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:13 crc kubenswrapper[4972]: I1121 09:42:13.985280 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:13Z","lastTransitionTime":"2025-11-21T09:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.088229 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.088262 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.088271 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.088284 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.088293 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:14Z","lastTransitionTime":"2025-11-21T09:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.190296 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.190337 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.190348 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.190364 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.190372 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:14Z","lastTransitionTime":"2025-11-21T09:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.293459 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.293510 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.293524 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.293542 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.293552 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:14Z","lastTransitionTime":"2025-11-21T09:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.395785 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.395870 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.395885 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.395901 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.395912 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:14Z","lastTransitionTime":"2025-11-21T09:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.498367 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.498418 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.498434 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.498451 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.498464 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:14Z","lastTransitionTime":"2025-11-21T09:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.601063 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.601109 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.601119 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.601133 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.601144 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:14Z","lastTransitionTime":"2025-11-21T09:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.703968 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.704033 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.704050 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.704079 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.704096 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:14Z","lastTransitionTime":"2025-11-21T09:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.807507 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.807565 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.807581 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.807597 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.807608 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:14Z","lastTransitionTime":"2025-11-21T09:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.910027 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.910095 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.910103 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.910118 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:14 crc kubenswrapper[4972]: I1121 09:42:14.910145 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:14Z","lastTransitionTime":"2025-11-21T09:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.013384 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.013429 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.013439 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.013455 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.013467 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:15Z","lastTransitionTime":"2025-11-21T09:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.116490 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.116594 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.116610 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.116627 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.116639 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:15Z","lastTransitionTime":"2025-11-21T09:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.219404 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.219450 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.219459 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.219474 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.219486 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:15Z","lastTransitionTime":"2025-11-21T09:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.322032 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.322074 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.322085 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.322100 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.322109 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:15Z","lastTransitionTime":"2025-11-21T09:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.425137 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.425772 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.425878 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.425992 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.426076 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:15Z","lastTransitionTime":"2025-11-21T09:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.528915 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.529201 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.529275 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.529345 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.529438 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:15Z","lastTransitionTime":"2025-11-21T09:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.631674 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.631975 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.632055 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.632125 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.632190 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:15Z","lastTransitionTime":"2025-11-21T09:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.735127 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.735170 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.735182 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.735197 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.735208 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:15Z","lastTransitionTime":"2025-11-21T09:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.758608 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.758694 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:15 crc kubenswrapper[4972]: E1121 09:42:15.758743 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.758766 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.758863 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:15 crc kubenswrapper[4972]: E1121 09:42:15.758846 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:15 crc kubenswrapper[4972]: E1121 09:42:15.758909 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:15 crc kubenswrapper[4972]: E1121 09:42:15.758967 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.769465 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3201535-914d-45a5-bd2d-2d9e3d1b89ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2569e939b8254ed8f0c255ea14a65d7c4cfa4491a1d00722abd9e4412e29334c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e3dabfde6cfa4ac43cf07090dd319e83e402676216af847178710306ab8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9915311e4e9cae479e53ac0cf1243560d110dcfe1abc366ce37281d49e294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:15Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.781666 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:15Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.800039 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a16c98413be1d1a5e757aa73e242a6e7d28e1b
2fc3f032c98105d928949d6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a16c98413be1d1a5e757aa73e242a6e7d28e1b2fc3f032c98105d928949d6b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:42:01Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 09:42:00.997792 6728 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:42:00.997813 6728 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:42:00.997844 6728 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:42:00.997850 6728 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:42:00.997875 6728 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 09:42:00.997876 6728 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:42:00.997907 6728 handler.go:208] Removed *v1.Node event handler 2\\\\nI1121 09:42:00.997914 6728 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:42:00.997926 6728 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 09:42:00.997938 6728 handler.go:208] Removed *v1.Node event handler 7\\\\nI1121 09:42:00.997949 6728 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:42:00.997957 6728 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 09:42:00.997978 6728 factory.go:656] Stopping watch factory\\\\nI1121 09:42:00.997999 6728 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:42:00.998024 6728 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:42:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxwhb_openshift-ovn-kubernetes(c159725e-4c82-4474-96d9-211f7d8db47f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:15Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.809793 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:15Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.821436 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30325c44-ba7b-46ae-8a97-6b61aa169366\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e8457b59ef21238dc544bad22e50462262f2a8dccb77f227e3b71c0e42a00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:15Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.833621 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:15Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.837200 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.837243 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.837254 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.837271 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.837284 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:15Z","lastTransitionTime":"2025-11-21T09:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.846468 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:15Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.859347 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:15Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.871439 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:15Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.883087 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:15Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.897260 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:15Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.916395 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da
456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:15Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.928266 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1772
25c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:15Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.939599 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.939639 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.939649 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.939685 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.939697 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:15Z","lastTransitionTime":"2025-11-21T09:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.941206 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb23c96662a648e35c4f92c6c695ad3b57dc5fb40f72efdad7a6a2910907a9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:42:11Z\\\",\\\"message\\\":\\\"2025-11-21T09:41:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4753cca9-ce1b-4d13-8580-1b908ffbc7a9\\\\n2025-11-21T09:41:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4753cca9-ce1b-4d13-8580-1b908ffbc7a9 to /host/opt/cni/bin/\\\\n2025-11-21T09:41:25Z [verbose] multus-daemon started\\\\n2025-11-21T09:41:25Z [verbose] Readiness Indicator file check\\\\n2025-11-21T09:42:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:15Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.958040 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d577a086d8771d84ba9eef30f60dfda5f3e3b3973d2f2f5d106da15297313fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:15Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.972275 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:15Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.984727 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:15Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:15 crc kubenswrapper[4972]: I1121 09:42:15.994820 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:15Z is after 2025-08-24T17:21:41Z" Nov 21 
09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.043894 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.043992 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.044014 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.044037 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.044052 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:16Z","lastTransitionTime":"2025-11-21T09:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.147332 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.147380 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.147391 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.147407 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.147418 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:16Z","lastTransitionTime":"2025-11-21T09:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.249789 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.249873 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.249887 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.249903 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.249914 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:16Z","lastTransitionTime":"2025-11-21T09:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.352300 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.352332 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.352341 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.352355 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.352364 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:16Z","lastTransitionTime":"2025-11-21T09:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.453917 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.453990 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.454013 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.454041 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.454063 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:16Z","lastTransitionTime":"2025-11-21T09:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:16 crc kubenswrapper[4972]: E1121 09:42:16.470151 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:16Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.486291 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.486341 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
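The repeated "Error updating node status, will retry" entries embed the full node-status patch as an escaped JSON string; of its four conditions only Ready is unhealthy (status False with reason KubeletNotReady), while the pressure conditions are False in the healthy sense. A minimal Python sketch that pulls the conditions out of such a payload, using a trimmed stand-in for the logged string (the stand-in is an assumption; the full payload parses the same way once the backslash escaping is undone):

import json

# Trimmed stand-in (assumption) for the escaped patch string carried by the
# "failed to patch status" entries; only the conditions block is kept.
patch = (
    '{"status":{"conditions":['
    '{"type":"MemoryPressure","status":"False","reason":"KubeletHasSufficientMemory"},'
    '{"type":"DiskPressure","status":"False","reason":"KubeletHasNoDiskPressure"},'
    '{"type":"PIDPressure","status":"False","reason":"KubeletHasSufficientPID"},'
    '{"type":"Ready","status":"False","reason":"KubeletNotReady"}]}}'
)

# Print each condition the way an operator would scan it: type, status, reason.
for cond in json.loads(patch)["status"]["conditions"]:
    print(f'{cond["type"]}: {cond["status"]} ({cond["reason"]})')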
event="NodeHasNoDiskPressure" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.486353 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.486371 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.486382 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:16Z","lastTransitionTime":"2025-11-21T09:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:16 crc kubenswrapper[4972]: E1121 09:42:16.500229 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:16Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.505029 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.505066 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.505076 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.505093 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.505103 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:16Z","lastTransitionTime":"2025-11-21T09:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:16 crc kubenswrapper[4972]: E1121 09:42:16.519569 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:16Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.523415 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.523465 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
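Every NodeNotReady entry in this window carries the same message: no CNI configuration file in /etc/kubernetes/cni/net.d/. A minimal sketch, run on the node itself, that lists whatever that directory currently contains; the path is taken from the log, and an empty listing is consistent with the NetworkPluginNotReady state reported here:

from pathlib import Path

# Minimal sketch: inspect the CNI configuration directory named in the
# NotReady message. On a healthy node it holds the network plugin's
# .conf/.conflist files; the kubelet here reports it as empty.
cni_dir = Path("/etc/kubernetes/cni/net.d")
entries = sorted(p.name for p in cni_dir.iterdir()) if cni_dir.is_dir() else []
print(f"{cni_dir}: {len(entries)} entry(ies)")
for name in entries:
    print("  " + name)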
event="NodeHasNoDiskPressure" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.523478 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.523495 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.523507 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:16Z","lastTransitionTime":"2025-11-21T09:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:16 crc kubenswrapper[4972]: E1121 09:42:16.538412 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:16Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.542660 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.542707 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.542720 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.542736 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.542748 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:16Z","lastTransitionTime":"2025-11-21T09:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:16 crc kubenswrapper[4972]: E1121 09:42:16.557686 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:16Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:16 crc kubenswrapper[4972]: E1121 09:42:16.557866 4972 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.559647 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
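The status-patch failures above all break at the same point: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate whose notAfter (2025-08-24T17:21:41Z) is well before the current time the kubelet reports (2025-11-21T09:42:16Z), so the node status update is retried and finally abandoned with "update node status exceeds retry count". A minimal Go sketch like the one below (an illustrative check, not part of any component appearing in this log; the address is taken directly from the error text) dials that listener and prints the certificate's validity window, which should reproduce the expiry the kubelet is complaining about:

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Dial the webhook listener named in the kubelet error and print the
        // serving certificate's validity window. InsecureSkipVerify only lets
        // the handshake complete even though normal verification would fail
        // on the expired certificate.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatalf("dial 127.0.0.1:9743: %v", err)
        }
        defer conn.Close()

        state := conn.ConnectionState()
        if len(state.PeerCertificates) == 0 {
            log.Fatal("no peer certificates presented")
        }
        cert := state.PeerCertificates[0]
        fmt.Println("subject:  ", cert.Subject.String())
        fmt.Println("notBefore:", cert.NotBefore.UTC().Format(time.RFC3339))
        fmt.Println("notAfter: ", cert.NotAfter.UTC().Format(time.RFC3339))
        fmt.Println("expired:  ", time.Now().After(cert.NotAfter))
    }

InsecureSkipVerify is used only so the handshake completes for inspection; it does not change what the kubelet's own certificate verification sees.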
event="NodeHasSufficientMemory" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.559695 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.559706 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.559791 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.559816 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:16Z","lastTransitionTime":"2025-11-21T09:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.661671 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.661718 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.661729 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.661744 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.661756 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:16Z","lastTransitionTime":"2025-11-21T09:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.764498 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.764539 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.764551 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.764569 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.764580 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:16Z","lastTransitionTime":"2025-11-21T09:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.866550 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.866603 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.866616 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.866637 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.866650 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:16Z","lastTransitionTime":"2025-11-21T09:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.970142 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.970214 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.970224 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.970239 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:16 crc kubenswrapper[4972]: I1121 09:42:16.970249 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:16Z","lastTransitionTime":"2025-11-21T09:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.074137 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.074177 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.074187 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.074202 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.074212 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:17Z","lastTransitionTime":"2025-11-21T09:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.176933 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.176987 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.177004 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.177027 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.177046 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:17Z","lastTransitionTime":"2025-11-21T09:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.279387 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.279424 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.279432 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.279446 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.279455 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:17Z","lastTransitionTime":"2025-11-21T09:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.382120 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.382151 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.382160 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.382172 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.382202 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:17Z","lastTransitionTime":"2025-11-21T09:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.484469 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.484516 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.484530 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.484548 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.484562 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:17Z","lastTransitionTime":"2025-11-21T09:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.588348 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.588410 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.588419 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.588433 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.588460 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:17Z","lastTransitionTime":"2025-11-21T09:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.691697 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.691736 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.691745 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.691758 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.691767 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:17Z","lastTransitionTime":"2025-11-21T09:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.758969 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.759049 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:17 crc kubenswrapper[4972]: E1121 09:42:17.759091 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.759050 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:17 crc kubenswrapper[4972]: E1121 09:42:17.759185 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.759251 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:17 crc kubenswrapper[4972]: E1121 09:42:17.759331 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:17 crc kubenswrapper[4972]: E1121 09:42:17.759450 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.794675 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.794712 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.794774 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.794792 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.794804 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:17Z","lastTransitionTime":"2025-11-21T09:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.898533 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.898609 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.898629 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.898658 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:17 crc kubenswrapper[4972]: I1121 09:42:17.898679 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:17Z","lastTransitionTime":"2025-11-21T09:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.005017 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.005065 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.005080 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.005101 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.005118 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:18Z","lastTransitionTime":"2025-11-21T09:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.107490 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.107567 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.107590 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.107620 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.107643 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:18Z","lastTransitionTime":"2025-11-21T09:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.210393 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.210433 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.210447 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.210462 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.210471 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:18Z","lastTransitionTime":"2025-11-21T09:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.312414 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.312461 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.312474 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.312490 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.312501 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:18Z","lastTransitionTime":"2025-11-21T09:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.416215 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.416248 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.416258 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.416275 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.416286 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:18Z","lastTransitionTime":"2025-11-21T09:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.518925 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.519268 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.519281 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.519297 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.519308 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:18Z","lastTransitionTime":"2025-11-21T09:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.621526 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.621571 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.621582 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.621599 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.621611 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:18Z","lastTransitionTime":"2025-11-21T09:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.723363 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.723424 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.723435 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.723452 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.723463 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:18Z","lastTransitionTime":"2025-11-21T09:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.826652 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.826692 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.826700 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.826715 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.826725 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:18Z","lastTransitionTime":"2025-11-21T09:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.929161 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.929218 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.929227 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.929243 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:18 crc kubenswrapper[4972]: I1121 09:42:18.929252 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:18Z","lastTransitionTime":"2025-11-21T09:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.031368 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.031414 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.031426 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.031444 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.031456 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:19Z","lastTransitionTime":"2025-11-21T09:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.134066 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.134115 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.134125 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.134142 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.134154 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:19Z","lastTransitionTime":"2025-11-21T09:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.236538 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.236601 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.236618 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.236645 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.236661 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:19Z","lastTransitionTime":"2025-11-21T09:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.339601 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.339650 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.339663 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.339681 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.339694 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:19Z","lastTransitionTime":"2025-11-21T09:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.442778 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.442872 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.442891 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.442915 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.442932 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:19Z","lastTransitionTime":"2025-11-21T09:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.545877 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.545951 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.545974 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.546005 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.546022 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:19Z","lastTransitionTime":"2025-11-21T09:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.648424 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.648458 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.648466 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.648480 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.648489 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:19Z","lastTransitionTime":"2025-11-21T09:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.751781 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.751863 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.751876 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.751893 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.751911 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:19Z","lastTransitionTime":"2025-11-21T09:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.758590 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.758634 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.758737 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:19 crc kubenswrapper[4972]: E1121 09:42:19.758963 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.759012 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:19 crc kubenswrapper[4972]: E1121 09:42:19.759360 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:19 crc kubenswrapper[4972]: E1121 09:42:19.759383 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:19 crc kubenswrapper[4972]: E1121 09:42:19.759492 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.854458 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.854507 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.854519 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.854536 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.854548 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:19Z","lastTransitionTime":"2025-11-21T09:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.957588 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.957652 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.957664 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.957679 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:19 crc kubenswrapper[4972]: I1121 09:42:19.957689 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:19Z","lastTransitionTime":"2025-11-21T09:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.060575 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.060631 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.060647 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.060668 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.060685 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:20Z","lastTransitionTime":"2025-11-21T09:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.164265 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.164338 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.164359 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.164388 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.164406 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:20Z","lastTransitionTime":"2025-11-21T09:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.273012 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.273075 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.273093 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.273118 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.273133 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:20Z","lastTransitionTime":"2025-11-21T09:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.375802 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.375853 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.375864 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.375879 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.375891 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:20Z","lastTransitionTime":"2025-11-21T09:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.478428 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.478482 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.478493 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.478512 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.478525 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:20Z","lastTransitionTime":"2025-11-21T09:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.581238 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.581277 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.581288 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.581306 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.581317 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:20Z","lastTransitionTime":"2025-11-21T09:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.683994 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.684100 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.684124 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.684153 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.684178 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:20Z","lastTransitionTime":"2025-11-21T09:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.786802 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.786867 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.786876 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.786889 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.786897 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:20Z","lastTransitionTime":"2025-11-21T09:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.888962 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.889009 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.889021 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.889037 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.889049 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:20Z","lastTransitionTime":"2025-11-21T09:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.992114 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.992173 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.992192 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.992215 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:20 crc kubenswrapper[4972]: I1121 09:42:20.992232 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:20Z","lastTransitionTime":"2025-11-21T09:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.095162 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.095200 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.095232 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.095249 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.095260 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:21Z","lastTransitionTime":"2025-11-21T09:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.198045 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.198094 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.198139 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.198160 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.198178 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:21Z","lastTransitionTime":"2025-11-21T09:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.301080 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.301124 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.301134 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.301149 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.301161 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:21Z","lastTransitionTime":"2025-11-21T09:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.403905 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.403972 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.403992 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.404020 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.404039 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:21Z","lastTransitionTime":"2025-11-21T09:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.507511 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.507586 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.507631 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.507664 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.507686 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:21Z","lastTransitionTime":"2025-11-21T09:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.578079 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:42:21 crc kubenswrapper[4972]: E1121 09:42:21.578295 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:25.578249228 +0000 UTC m=+150.687391786 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.610556 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.610610 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.610623 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.610640 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.610653 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:21Z","lastTransitionTime":"2025-11-21T09:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.679247 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.679316 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.679357 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:21 crc kubenswrapper[4972]: E1121 09:42:21.679410 4972 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.679441 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:21 crc kubenswrapper[4972]: E1121 09:42:21.679483 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 09:43:25.679462151 +0000 UTC m=+150.788604659 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 09:42:21 crc kubenswrapper[4972]: E1121 09:42:21.679569 4972 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 09:42:21 crc kubenswrapper[4972]: E1121 09:42:21.679595 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 09:42:21 crc kubenswrapper[4972]: E1121 09:42:21.679630 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-21 09:43:25.679613785 +0000 UTC m=+150.788756313 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 09:42:21 crc kubenswrapper[4972]: E1121 09:42:21.679633 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 09:42:21 crc kubenswrapper[4972]: E1121 09:42:21.679661 4972 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:42:21 crc kubenswrapper[4972]: E1121 09:42:21.679702 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 09:43:25.679690467 +0000 UTC m=+150.788833005 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:42:21 crc kubenswrapper[4972]: E1121 09:42:21.679929 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 09:42:21 crc kubenswrapper[4972]: E1121 09:42:21.680064 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 09:42:21 crc kubenswrapper[4972]: E1121 09:42:21.680105 4972 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:42:21 crc kubenswrapper[4972]: E1121 09:42:21.680269 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 09:43:25.680230132 +0000 UTC m=+150.789372670 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.713890 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.713948 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.713960 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.713980 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.713992 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:21Z","lastTransitionTime":"2025-11-21T09:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.759056 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.759127 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.759056 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.759311 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:21 crc kubenswrapper[4972]: E1121 09:42:21.759509 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:21 crc kubenswrapper[4972]: E1121 09:42:21.759653 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:21 crc kubenswrapper[4972]: E1121 09:42:21.759783 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:21 crc kubenswrapper[4972]: E1121 09:42:21.759952 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.817051 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.817193 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.817213 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.817235 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.817251 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:21Z","lastTransitionTime":"2025-11-21T09:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.920463 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.920522 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.920538 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.920560 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:21 crc kubenswrapper[4972]: I1121 09:42:21.920578 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:21Z","lastTransitionTime":"2025-11-21T09:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.024256 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.024381 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.024400 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.024425 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.024442 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:22Z","lastTransitionTime":"2025-11-21T09:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.127036 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.127123 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.127162 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.127186 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.127197 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:22Z","lastTransitionTime":"2025-11-21T09:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.229343 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.229386 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.229394 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.229407 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.229415 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:22Z","lastTransitionTime":"2025-11-21T09:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.331661 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.331754 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.331777 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.331816 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.331866 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:22Z","lastTransitionTime":"2025-11-21T09:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.435200 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.435286 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.435309 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.435339 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.435362 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:22Z","lastTransitionTime":"2025-11-21T09:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.538150 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.538206 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.538220 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.538238 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.538248 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:22Z","lastTransitionTime":"2025-11-21T09:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.640755 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.640804 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.640815 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.640860 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.640871 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:22Z","lastTransitionTime":"2025-11-21T09:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.743594 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.743632 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.743643 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.743666 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.743677 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:22Z","lastTransitionTime":"2025-11-21T09:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.847532 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.847570 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.847584 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.847598 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.847608 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:22Z","lastTransitionTime":"2025-11-21T09:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.950486 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.950551 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.950567 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.950592 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:22 crc kubenswrapper[4972]: I1121 09:42:22.950612 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:22Z","lastTransitionTime":"2025-11-21T09:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.053424 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.053452 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.053463 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.053476 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.053484 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:23Z","lastTransitionTime":"2025-11-21T09:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.156394 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.156443 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.156454 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.156469 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.156498 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:23Z","lastTransitionTime":"2025-11-21T09:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.259811 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.259891 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.259904 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.259925 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.259944 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:23Z","lastTransitionTime":"2025-11-21T09:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.362638 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.362694 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.362710 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.362732 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.362749 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:23Z","lastTransitionTime":"2025-11-21T09:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.465186 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.465256 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.465272 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.465298 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.465320 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:23Z","lastTransitionTime":"2025-11-21T09:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.569583 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.569629 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.569639 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.569657 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.569667 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:23Z","lastTransitionTime":"2025-11-21T09:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.672727 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.672786 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.672797 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.672816 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.672861 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:23Z","lastTransitionTime":"2025-11-21T09:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.758445 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.758573 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:23 crc kubenswrapper[4972]: E1121 09:42:23.759058 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.758684 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:23 crc kubenswrapper[4972]: E1121 09:42:23.759321 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:23 crc kubenswrapper[4972]: E1121 09:42:23.759109 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.758609 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:23 crc kubenswrapper[4972]: E1121 09:42:23.759637 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.775858 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.775896 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.775911 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.775928 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.775940 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:23Z","lastTransitionTime":"2025-11-21T09:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.878962 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.879019 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.879034 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.879060 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.879074 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:23Z","lastTransitionTime":"2025-11-21T09:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.981754 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.981790 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.981801 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.981816 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:23 crc kubenswrapper[4972]: I1121 09:42:23.981853 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:23Z","lastTransitionTime":"2025-11-21T09:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.084147 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.084185 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.084195 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.084209 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.084220 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:24Z","lastTransitionTime":"2025-11-21T09:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.186051 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.186091 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.186101 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.186118 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.186132 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:24Z","lastTransitionTime":"2025-11-21T09:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.289307 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.289343 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.289356 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.289372 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.289384 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:24Z","lastTransitionTime":"2025-11-21T09:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.393859 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.393899 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.393931 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.393950 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.393962 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:24Z","lastTransitionTime":"2025-11-21T09:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.497334 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.497884 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.498039 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.498185 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.498319 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:24Z","lastTransitionTime":"2025-11-21T09:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.601446 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.601528 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.601545 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.601573 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.601591 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:24Z","lastTransitionTime":"2025-11-21T09:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.704988 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.705055 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.705072 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.705101 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.705123 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:24Z","lastTransitionTime":"2025-11-21T09:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.807427 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.807531 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.807554 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.807578 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.807596 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:24Z","lastTransitionTime":"2025-11-21T09:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.911417 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.911463 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.911474 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.911493 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:24 crc kubenswrapper[4972]: I1121 09:42:24.911507 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:24Z","lastTransitionTime":"2025-11-21T09:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.013675 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.013730 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.013747 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.013770 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.013786 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:25Z","lastTransitionTime":"2025-11-21T09:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.117183 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.117312 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.117382 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.117418 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.117444 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:25Z","lastTransitionTime":"2025-11-21T09:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.219867 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.219937 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.219956 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.219980 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.220020 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:25Z","lastTransitionTime":"2025-11-21T09:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.322776 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.322859 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.322877 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.322893 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.322906 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:25Z","lastTransitionTime":"2025-11-21T09:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.425016 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.425106 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.425118 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.425134 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.425167 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:25Z","lastTransitionTime":"2025-11-21T09:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.528359 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.528432 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.528454 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.528484 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.528506 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:25Z","lastTransitionTime":"2025-11-21T09:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.632020 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.632244 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.632333 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.632428 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.632515 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:25Z","lastTransitionTime":"2025-11-21T09:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.735989 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.736055 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.736072 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.736099 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.736119 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:25Z","lastTransitionTime":"2025-11-21T09:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.758622 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.758695 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.758791 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.758862 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:25 crc kubenswrapper[4972]: E1121 09:42:25.758869 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:25 crc kubenswrapper[4972]: E1121 09:42:25.759156 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:25 crc kubenswrapper[4972]: E1121 09:42:25.759389 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:25 crc kubenswrapper[4972]: E1121 09:42:25.760228 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.760749 4972 scope.go:117] "RemoveContainer" containerID="96a16c98413be1d1a5e757aa73e242a6e7d28e1b2fc3f032c98105d928949d6b" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.774101 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.784389 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.800636 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.815640 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.830791 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.838905 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.838934 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.838949 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.838964 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.838974 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:25Z","lastTransitionTime":"2025-11-21T09:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.843895 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.859541 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb23c96662a648e35c4f92c6c695ad3b57dc5fb40f72efdad7a6a2910907a9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:42:11Z\\\",\\\"message\\\":\\\"2025-11-21T09:41:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4753cca9-ce1b-4d13-8580-1b908ffbc7a9\\\\n2025-11-21T09:41:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4753cca9-ce1b-4d13-8580-1b908ffbc7a9 to /host/opt/cni/bin/\\\\n2025-11-21T09:41:25Z [verbose] multus-daemon started\\\\n2025-11-21T09:41:25Z [verbose] Readiness Indicator file check\\\\n2025-11-21T09:42:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.876404 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.888540 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.899805 4972 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.916087 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d577a086d8771d84ba9eef30f60dfda5f3e3b3973d2f2f5d106da15297313fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.935782 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.942099 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.942179 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.942198 4972 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.942218 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.942233 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:25Z","lastTransitionTime":"2025-11-21T09:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.951564 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.962541 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.981399 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30325c44-ba7b-46ae-8a97-6b61aa169366\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e8457b59ef21238dc544bad22e50462262f2a8dccb77f227e3b71c0e42a00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:25 crc kubenswrapper[4972]: I1121 09:42:25.995978 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3201535-914d-45a5-bd2d-2d9e3d1b89ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2569e939b8254ed8f0c255ea14a65d7c4cfa4491a1d00722abd9e4412e29334c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e3dabfde6cfa4ac43cf07090dd319e83e402676216af847178710306ab8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9915311e4e9cae479e53ac0cf1243560d110dcfe1abc366ce37281d49e294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:25Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.004698 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:26Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.030580 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics
-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a16c98413be1d1a5e757aa73e242a6e7d28e1b2fc3f032c98105d928949d6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a16c98413be1d1a5e757aa73e242a6e7d28e1b2fc3f032c98105d928949d6b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:42:01Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 09:42:00.997792 6728 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:42:00.997813 6728 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:42:00.997844 6728 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:42:00.997850 6728 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:42:00.997875 6728 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 09:42:00.997876 6728 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:42:00.997907 6728 handler.go:208] Removed *v1.Node event handler 2\\\\nI1121 09:42:00.997914 6728 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:42:00.997926 6728 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 09:42:00.997938 6728 handler.go:208] Removed *v1.Node event handler 7\\\\nI1121 09:42:00.997949 6728 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:42:00.997957 6728 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 09:42:00.997978 6728 factory.go:656] Stopping watch factory\\\\nI1121 09:42:00.997999 6728 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:42:00.998024 6728 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 
09:42:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bxwhb_openshift-ovn-kubernetes(c159725e-4c82-4474-96d9-211f7d8db47f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:26Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.045408 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.045452 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.045491 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.045513 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.045527 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:26Z","lastTransitionTime":"2025-11-21T09:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.148210 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.148281 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.148310 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.148340 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.148360 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:26Z","lastTransitionTime":"2025-11-21T09:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.252422 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.252462 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.252472 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.252485 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.252493 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:26Z","lastTransitionTime":"2025-11-21T09:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.355482 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.355529 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.355539 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.355554 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.355564 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:26Z","lastTransitionTime":"2025-11-21T09:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.458785 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.458878 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.458892 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.458907 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.458919 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:26Z","lastTransitionTime":"2025-11-21T09:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.566559 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.567060 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.567070 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.567085 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.567095 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:26Z","lastTransitionTime":"2025-11-21T09:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.669706 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.669759 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.669779 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.669802 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.669820 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:26Z","lastTransitionTime":"2025-11-21T09:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.710713 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.710759 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.710767 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.710781 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.710791 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:26Z","lastTransitionTime":"2025-11-21T09:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:26 crc kubenswrapper[4972]: E1121 09:42:26.728331 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:26Z is after 
2025-08-24T17:21:41Z" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.736204 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.736280 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.736303 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.736332 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.736352 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:26Z","lastTransitionTime":"2025-11-21T09:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:26 crc kubenswrapper[4972]: E1121 09:42:26.763942 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:26Z is after 
2025-08-24T17:21:41Z" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.768610 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.768658 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.768723 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.768757 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.768774 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:26Z","lastTransitionTime":"2025-11-21T09:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:26 crc kubenswrapper[4972]: E1121 09:42:26.786508 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:26Z is after 
2025-08-24T17:21:41Z" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.791420 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.791497 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.791522 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.791553 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.791578 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:26Z","lastTransitionTime":"2025-11-21T09:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:26 crc kubenswrapper[4972]: E1121 09:42:26.823625 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:26Z is after 
2025-08-24T17:21:41Z" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.837041 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.837085 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.837098 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.837117 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.837130 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:26Z","lastTransitionTime":"2025-11-21T09:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:26 crc kubenswrapper[4972]: E1121 09:42:26.862520 4972 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a234290f-71bd-4d0a-b5a3-5342e5c9c28a\\\",\\\"systemUUID\\\":\\\"da538fee-18a0-417f-878c-3556afbb76c2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:26Z is after 
2025-08-24T17:21:41Z" Nov 21 09:42:26 crc kubenswrapper[4972]: E1121 09:42:26.863145 4972 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.865447 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.865493 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.865505 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.865523 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.865537 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:26Z","lastTransitionTime":"2025-11-21T09:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.967893 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.968199 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.968359 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.968433 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:26 crc kubenswrapper[4972]: I1121 09:42:26.968494 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:26Z","lastTransitionTime":"2025-11-21T09:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.071249 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.071295 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.071322 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.071341 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.071353 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:27Z","lastTransitionTime":"2025-11-21T09:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.174797 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.175076 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.175164 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.175253 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.175367 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:27Z","lastTransitionTime":"2025-11-21T09:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.277336 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.277371 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.277381 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.277396 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.277410 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:27Z","lastTransitionTime":"2025-11-21T09:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.342988 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxwhb_c159725e-4c82-4474-96d9-211f7d8db47f/ovnkube-controller/2.log" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.346555 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerStarted","Data":"6e23d6219850069f682ce4b9af445532fdaaeb189b232f8e72a0d92b53c755ff"} Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.347120 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.366389 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30325c44-ba7b-46ae-8a97-6b61aa169366\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e8457b59ef21238dc544bad22e50462262f2a8dccb77f227e3b71c0e42a00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.379199 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.379232 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.379241 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.379255 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.379267 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:27Z","lastTransitionTime":"2025-11-21T09:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.382564 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3201535-914d-45a5-bd2d-2d9e3d1b89ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2569e939b8254ed8f0c255ea14a65d7c4cfa4491a1d00722abd9e4412e29334c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e3dabfde6cfa4ac43cf07090dd319e83e402676216af847178710306ab8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9915311e4e9cae479e53ac0cf1243560d110dcfe1abc366ce37281d49e294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.397086 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.416638 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e23d6219850069f682ce4b9af445532fdaaeb18
9b232f8e72a0d92b53c755ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a16c98413be1d1a5e757aa73e242a6e7d28e1b2fc3f032c98105d928949d6b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:42:01Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 09:42:00.997792 6728 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:42:00.997813 6728 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:42:00.997844 6728 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:42:00.997850 6728 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:42:00.997875 6728 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 09:42:00.997876 6728 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:42:00.997907 6728 handler.go:208] Removed *v1.Node event handler 2\\\\nI1121 09:42:00.997914 6728 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:42:00.997926 6728 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 09:42:00.997938 6728 handler.go:208] Removed *v1.Node event handler 7\\\\nI1121 09:42:00.997949 6728 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:42:00.997957 6728 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 09:42:00.997978 6728 factory.go:656] Stopping watch factory\\\\nI1121 09:42:00.997999 6728 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:42:00.998024 6728 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 
09:42:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.428292 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.437745 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.449824 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.461427 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.471861 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.480715 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.482703 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.482758 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.482772 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.482795 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.482811 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:27Z","lastTransitionTime":"2025-11-21T09:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.490547 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.502795 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb23c96662a648e35c4f92c6c695ad3b57dc5fb40f72efdad7a6a2910907a9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:42:11Z\\\",\\\"message\\\":\\\"2025-11-21T09:41:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4753cca9-ce1b-4d13-8580-1b908ffbc7a9\\\\n2025-11-21T09:41:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4753cca9-ce1b-4d13-8580-1b908ffbc7a9 to /host/opt/cni/bin/\\\\n2025-11-21T09:41:25Z [verbose] multus-daemon started\\\\n2025-11-21T09:41:25Z [verbose] Readiness Indicator file check\\\\n2025-11-21T09:42:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.517811 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.528977 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.540685 4972 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d577a086d8771d84ba9eef30f60dfda5f3e3b3973d
2f2f5d106da15297313fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.552017 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.564409 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:27Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.577057 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:27Z is after 2025-08-24T17:21:41Z" Nov 21 
09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.585175 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.585216 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.585233 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.585255 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.585269 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:27Z","lastTransitionTime":"2025-11-21T09:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.688039 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.688098 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.688112 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.688126 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.688137 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:27Z","lastTransitionTime":"2025-11-21T09:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.758605 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.758654 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.758676 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.758629 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:27 crc kubenswrapper[4972]: E1121 09:42:27.758791 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:27 crc kubenswrapper[4972]: E1121 09:42:27.758882 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:27 crc kubenswrapper[4972]: E1121 09:42:27.758991 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:27 crc kubenswrapper[4972]: E1121 09:42:27.759119 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.792990 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.793035 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.793053 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.793076 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.793096 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:27Z","lastTransitionTime":"2025-11-21T09:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.896872 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.896940 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.896965 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.896995 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.897017 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:27Z","lastTransitionTime":"2025-11-21T09:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.999378 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.999438 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.999456 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.999478 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:27 crc kubenswrapper[4972]: I1121 09:42:27.999494 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:27Z","lastTransitionTime":"2025-11-21T09:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.102030 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.102085 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.102101 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.102123 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.102140 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:28Z","lastTransitionTime":"2025-11-21T09:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.205165 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.205232 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.205252 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.205282 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.205300 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:28Z","lastTransitionTime":"2025-11-21T09:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.309977 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.310070 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.310093 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.310125 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.310154 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:28Z","lastTransitionTime":"2025-11-21T09:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.354204 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxwhb_c159725e-4c82-4474-96d9-211f7d8db47f/ovnkube-controller/3.log" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.355502 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxwhb_c159725e-4c82-4474-96d9-211f7d8db47f/ovnkube-controller/2.log" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.360227 4972 generic.go:334] "Generic (PLEG): container finished" podID="c159725e-4c82-4474-96d9-211f7d8db47f" containerID="6e23d6219850069f682ce4b9af445532fdaaeb189b232f8e72a0d92b53c755ff" exitCode=1 Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.360291 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerDied","Data":"6e23d6219850069f682ce4b9af445532fdaaeb189b232f8e72a0d92b53c755ff"} Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.360346 4972 scope.go:117] "RemoveContainer" containerID="96a16c98413be1d1a5e757aa73e242a6e7d28e1b2fc3f032c98105d928949d6b" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.361410 4972 scope.go:117] "RemoveContainer" containerID="6e23d6219850069f682ce4b9af445532fdaaeb189b232f8e72a0d92b53c755ff" Nov 21 09:42:28 crc kubenswrapper[4972]: E1121 09:42:28.361703 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bxwhb_openshift-ovn-kubernetes(c159725e-4c82-4474-96d9-211f7d8db47f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.386023 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.407537 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.413906 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.413961 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.413979 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.414006 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.414026 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:28Z","lastTransitionTime":"2025-11-21T09:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.423941 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.442205 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.459172 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.478688 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.498633 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb23c96662a648e35c4f92c6c695ad3b57dc5fb40f72efdad7a6a2910907a9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:42:11Z\\\",\\\"message\\\":\\\"2025-11-21T09:41:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4753cca9-ce1b-4d13-8580-1b908ffbc7a9\\\\n2025-11-21T09:41:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4753cca9-ce1b-4d13-8580-1b908ffbc7a9 to /host/opt/cni/bin/\\\\n2025-11-21T09:41:25Z [verbose] multus-daemon started\\\\n2025-11-21T09:41:25Z [verbose] Readiness Indicator file check\\\\n2025-11-21T09:42:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.517120 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.517304 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.517421 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.517536 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.517649 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:28Z","lastTransitionTime":"2025-11-21T09:42:28Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.518088 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-21T09:42:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.536255 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.554684 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.569810 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:28Z is after 2025-08-24T17:21:41Z" Nov 21 
09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.587604 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d577a086d8771d84ba9eef30f60dfda5f3e3b3973d2f2f5d106da15297313fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.600941 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.621305 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.621341 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.621350 4972 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.621366 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.621377 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:28Z","lastTransitionTime":"2025-11-21T09:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.621055 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e23d6219850069f682ce4b9af445532fdaaeb18
9b232f8e72a0d92b53c755ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a16c98413be1d1a5e757aa73e242a6e7d28e1b2fc3f032c98105d928949d6b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:42:01Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 09:42:00.997792 6728 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 09:42:00.997813 6728 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 09:42:00.997844 6728 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1121 09:42:00.997850 6728 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1121 09:42:00.997875 6728 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 09:42:00.997876 6728 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 09:42:00.997907 6728 handler.go:208] Removed *v1.Node event handler 2\\\\nI1121 09:42:00.997914 6728 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 09:42:00.997926 6728 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 09:42:00.997938 6728 handler.go:208] Removed *v1.Node event handler 7\\\\nI1121 09:42:00.997949 6728 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 09:42:00.997957 6728 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 09:42:00.997978 6728 factory.go:656] Stopping watch factory\\\\nI1121 09:42:00.997999 6728 ovnkube.go:599] Stopped ovnkube\\\\nI1121 09:42:00.998024 6728 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 09:42:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e23d6219850069f682ce4b9af445532fdaaeb189b232f8e72a0d92b53c755ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:42:27Z\\\",\\\"message\\\":\\\"5] Adding new object: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI1121 09:42:27.387094 7075 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc in node crc\\\\nI1121 09:42:27.387100 7075 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc after 0 failed attempt(s)\\\\nI1121 09:42:27.387245 7075 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI1121 09:42:27.387136 7075 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-bxwhb\\\\nI1121 09:42:27.387298 7075 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-bxwhb\\\\nI1121 09:42:27.387304 7075 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-bxwhb in node crc\\\\nI1121 09:42:27.387309 7075 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-bxwhb after 0 failed attempt(s)\\\\nI1121 09:42:27.387313 7075 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-bxwhb\\\\nF1121 
09:42:27.387142 7075 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f
3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.632903 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.648870 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30325c44-ba7b-46ae-8a97-6b61aa169366\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e8457b59ef21238dc544bad22e50462262f2a8dccb77f227e3b71c0e42a00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.665236 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3201535-914d-45a5-bd2d-2d9e3d1b89ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2569e939b8254ed8f0c255ea14a65d7c4cfa4491a1d00722abd9e4412e29334c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e3dabfde6cfa4ac43cf07090dd319e83e402676216af847178710306ab8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9915311e4e9cae479e53ac0cf1243560d110dcfe1abc366ce37281d49e294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.681280 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:28Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.723264 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.723313 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.723326 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.723346 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.723363 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:28Z","lastTransitionTime":"2025-11-21T09:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.826162 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.826213 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.826229 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.826250 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.826265 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:28Z","lastTransitionTime":"2025-11-21T09:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.929325 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.929375 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.929388 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.929412 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:28 crc kubenswrapper[4972]: I1121 09:42:28.929428 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:28Z","lastTransitionTime":"2025-11-21T09:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.031695 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.032036 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.032212 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.032393 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.032604 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:29Z","lastTransitionTime":"2025-11-21T09:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.135612 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.135892 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.135978 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.136060 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.136180 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:29Z","lastTransitionTime":"2025-11-21T09:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.239312 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.239592 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.239656 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.239723 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.239785 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:29Z","lastTransitionTime":"2025-11-21T09:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.342184 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.342254 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.342287 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.342318 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.342343 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:29Z","lastTransitionTime":"2025-11-21T09:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.365954 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxwhb_c159725e-4c82-4474-96d9-211f7d8db47f/ovnkube-controller/3.log" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.370309 4972 scope.go:117] "RemoveContainer" containerID="6e23d6219850069f682ce4b9af445532fdaaeb189b232f8e72a0d92b53c755ff" Nov 21 09:42:29 crc kubenswrapper[4972]: E1121 09:42:29.370671 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bxwhb_openshift-ovn-kubernetes(c159725e-4c82-4474-96d9-211f7d8db47f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.392137 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d577a086d8771d84ba9eef30f60dfda5f3e3b3973d2f2f5d106da15297313fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.409207 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.422995 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.436503 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:29Z is after 2025-08-24T17:21:41Z" Nov 21 
09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.445284 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.445325 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.445335 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.445350 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.445360 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:29Z","lastTransitionTime":"2025-11-21T09:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.450389 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3201535-914d-45a5-bd2d-2d9e3d1b89ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2569e939b8254ed8f0c255ea14a65d7c4cfa4491a1d00722abd9e4412e29334c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e3dabfde6cfa4ac43cf07090dd319e83e402676216af847178710306ab8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9915311e4e9cae479e53ac0cf1243560d110dcfe1abc366ce37281d49e294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.463241 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.487149 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e23d6219850069f682ce4b9af445532fdaaeb189b232f8e72a0d92b53c755ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e23d6219850069f682ce4b9af445532fdaaeb189b232f8e72a0d92b53c755ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:42:27Z\\\",\\\"message\\\":\\\"5] Adding new object: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI1121 09:42:27.387094 7075 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc in node crc\\\\nI1121 09:42:27.387100 7075 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc after 0 failed attempt(s)\\\\nI1121 09:42:27.387245 7075 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI1121 09:42:27.387136 7075 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-bxwhb\\\\nI1121 09:42:27.387298 7075 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-bxwhb\\\\nI1121 09:42:27.387304 7075 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-bxwhb in node crc\\\\nI1121 09:42:27.387309 7075 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-bxwhb after 0 failed attempt(s)\\\\nI1121 09:42:27.387313 7075 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-bxwhb\\\\nF1121 09:42:27.387142 7075 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:42:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxwhb_openshift-ovn-kubernetes(c159725e-4c82-4474-96d9-211f7d8db47f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.498377 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.509113 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30325c44-ba7b-46ae-8a97-6b61aa169366\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e8457b59ef21238dc544bad22e50462262f2a8dccb77f227e3b71c0e42a00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.522943 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.538165 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.548046 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.548084 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.548096 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.548113 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.548125 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:29Z","lastTransitionTime":"2025-11-21T09:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.551526 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.569464 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.583447 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.596234 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.610343 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da
456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.625011 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1772
25c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.642763 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb23c96662a648e35c4f92c6c695ad3b57dc5fb40f72efdad7a6a2910907a9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:42:11Z\\\",\\\"message\\\":\\\"2025-11-21T09:41:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4753cca9-ce1b-4d13-8580-1b908ffbc7a9\\\\n2025-11-21T09:41:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4753cca9-ce1b-4d13-8580-1b908ffbc7a9 to /host/opt/cni/bin/\\\\n2025-11-21T09:41:25Z [verbose] multus-daemon started\\\\n2025-11-21T09:41:25Z [verbose] Readiness Indicator file check\\\\n2025-11-21T09:42:11Z [error] have you 
checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:29Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.650628 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.650752 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.650848 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.650941 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.651007 4972 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:29Z","lastTransitionTime":"2025-11-21T09:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.754188 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.754224 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.754233 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.754248 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.754268 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:29Z","lastTransitionTime":"2025-11-21T09:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.758731 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.758763 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.758735 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:29 crc kubenswrapper[4972]: E1121 09:42:29.758855 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:29 crc kubenswrapper[4972]: E1121 09:42:29.758920 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.758975 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:29 crc kubenswrapper[4972]: E1121 09:42:29.759069 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:29 crc kubenswrapper[4972]: E1121 09:42:29.759330 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.857453 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.857502 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.857515 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.857532 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.857545 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:29Z","lastTransitionTime":"2025-11-21T09:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.960290 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.960356 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.960375 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.960398 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:29 crc kubenswrapper[4972]: I1121 09:42:29.960416 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:29Z","lastTransitionTime":"2025-11-21T09:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.063544 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.063580 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.063591 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.063606 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.063617 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:30Z","lastTransitionTime":"2025-11-21T09:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.166185 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.166224 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.166232 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.166265 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.166275 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:30Z","lastTransitionTime":"2025-11-21T09:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.269917 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.270014 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.270032 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.270082 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.270100 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:30Z","lastTransitionTime":"2025-11-21T09:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.372683 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.372818 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.372888 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.372921 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.372961 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:30Z","lastTransitionTime":"2025-11-21T09:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.475812 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.475882 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.475896 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.475913 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.475927 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:30Z","lastTransitionTime":"2025-11-21T09:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.578783 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.578860 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.578871 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.578883 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.578893 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:30Z","lastTransitionTime":"2025-11-21T09:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.681591 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.681646 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.681656 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.681670 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.681680 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:30Z","lastTransitionTime":"2025-11-21T09:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.785409 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.785507 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.785524 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.785557 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.785572 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:30Z","lastTransitionTime":"2025-11-21T09:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.889392 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.889502 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.889525 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.889594 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.889615 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:30Z","lastTransitionTime":"2025-11-21T09:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.993503 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.993549 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.993560 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.993578 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:30 crc kubenswrapper[4972]: I1121 09:42:30.993591 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:30Z","lastTransitionTime":"2025-11-21T09:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.097567 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.097635 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.097648 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.097677 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.097692 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:31Z","lastTransitionTime":"2025-11-21T09:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.200342 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.200411 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.200424 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.200450 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.200471 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:31Z","lastTransitionTime":"2025-11-21T09:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.304427 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.304485 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.304503 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.304522 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.304535 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:31Z","lastTransitionTime":"2025-11-21T09:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.407796 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.407882 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.407897 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.407921 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.407938 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:31Z","lastTransitionTime":"2025-11-21T09:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.510820 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.510920 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.510938 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.510967 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.510990 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:31Z","lastTransitionTime":"2025-11-21T09:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.613962 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.614018 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.614035 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.614058 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.614075 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:31Z","lastTransitionTime":"2025-11-21T09:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.716249 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.716332 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.716350 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.716373 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.716390 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:31Z","lastTransitionTime":"2025-11-21T09:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.758591 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.758603 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:31 crc kubenswrapper[4972]: E1121 09:42:31.759100 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.758977 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:31 crc kubenswrapper[4972]: E1121 09:42:31.759346 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.758718 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:31 crc kubenswrapper[4972]: E1121 09:42:31.759574 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:31 crc kubenswrapper[4972]: E1121 09:42:31.759134 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.819982 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.820024 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.820049 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.820065 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.820077 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:31Z","lastTransitionTime":"2025-11-21T09:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.922683 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.922722 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.922734 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.922750 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:31 crc kubenswrapper[4972]: I1121 09:42:31.922761 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:31Z","lastTransitionTime":"2025-11-21T09:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.026990 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.027041 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.027052 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.027070 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.027084 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:32Z","lastTransitionTime":"2025-11-21T09:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.130378 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.130409 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.130419 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.130440 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.130452 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:32Z","lastTransitionTime":"2025-11-21T09:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.234098 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.234144 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.234157 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.234174 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.234183 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:32Z","lastTransitionTime":"2025-11-21T09:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.337693 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.337758 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.337772 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.337797 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.337814 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:32Z","lastTransitionTime":"2025-11-21T09:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.439898 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.440141 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.440206 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.440271 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.440336 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:32Z","lastTransitionTime":"2025-11-21T09:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.544214 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.544535 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.544682 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.544859 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.545029 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:32Z","lastTransitionTime":"2025-11-21T09:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.648402 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.648507 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.648532 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.648562 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.648579 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:32Z","lastTransitionTime":"2025-11-21T09:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.751190 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.751228 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.751240 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.751255 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.751264 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:32Z","lastTransitionTime":"2025-11-21T09:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.854512 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.854568 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.854580 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.854598 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.854611 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:32Z","lastTransitionTime":"2025-11-21T09:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.957044 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.957103 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.957124 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.957147 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:32 crc kubenswrapper[4972]: I1121 09:42:32.957164 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:32Z","lastTransitionTime":"2025-11-21T09:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.060241 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.060288 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.060300 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.060319 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.060331 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:33Z","lastTransitionTime":"2025-11-21T09:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.162429 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.162487 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.162496 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.162512 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.162521 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:33Z","lastTransitionTime":"2025-11-21T09:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.265960 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.266275 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.266292 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.266319 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.266336 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:33Z","lastTransitionTime":"2025-11-21T09:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.369550 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.369612 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.369625 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.369638 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.369649 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:33Z","lastTransitionTime":"2025-11-21T09:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.472145 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.472180 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.472188 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.472201 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.472210 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:33Z","lastTransitionTime":"2025-11-21T09:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.574984 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.575037 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.575055 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.575076 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.575092 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:33Z","lastTransitionTime":"2025-11-21T09:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.677681 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.677729 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.677739 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.677753 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.677762 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:33Z","lastTransitionTime":"2025-11-21T09:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.758739 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.758825 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.758969 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:33 crc kubenswrapper[4972]: E1121 09:42:33.758887 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:33 crc kubenswrapper[4972]: E1121 09:42:33.759033 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:33 crc kubenswrapper[4972]: E1121 09:42:33.759171 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.759442 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:33 crc kubenswrapper[4972]: E1121 09:42:33.759587 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.780020 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.780069 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.780081 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.780100 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.780111 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:33Z","lastTransitionTime":"2025-11-21T09:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.883764 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.883877 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.883898 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.883924 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.883949 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:33Z","lastTransitionTime":"2025-11-21T09:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.986729 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.986764 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.986774 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.986788 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:33 crc kubenswrapper[4972]: I1121 09:42:33.986800 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:33Z","lastTransitionTime":"2025-11-21T09:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.089800 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.089866 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.089877 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.089890 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.089900 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:34Z","lastTransitionTime":"2025-11-21T09:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.192209 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.192254 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.192270 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.192292 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.192309 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:34Z","lastTransitionTime":"2025-11-21T09:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.295389 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.295439 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.295450 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.295467 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.295478 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:34Z","lastTransitionTime":"2025-11-21T09:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.397909 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.397948 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.397960 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.397976 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.397987 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:34Z","lastTransitionTime":"2025-11-21T09:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.501030 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.501065 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.501073 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.501087 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.501096 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:34Z","lastTransitionTime":"2025-11-21T09:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.603391 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.603444 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.603453 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.603467 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.603477 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:34Z","lastTransitionTime":"2025-11-21T09:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.706264 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.706336 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.706357 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.706389 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.706411 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:34Z","lastTransitionTime":"2025-11-21T09:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.808293 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.808335 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.808347 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.808363 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.808375 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:34Z","lastTransitionTime":"2025-11-21T09:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.911244 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.911289 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.911331 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.911351 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:34 crc kubenswrapper[4972]: I1121 09:42:34.911361 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:34Z","lastTransitionTime":"2025-11-21T09:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.013375 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.013450 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.013465 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.013484 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.013497 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:35Z","lastTransitionTime":"2025-11-21T09:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.116725 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.116774 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.116790 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.116814 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.116861 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:35Z","lastTransitionTime":"2025-11-21T09:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.219290 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.219355 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.219378 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.219407 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.219429 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:35Z","lastTransitionTime":"2025-11-21T09:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.323271 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.323345 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.323405 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.323440 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.323462 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:35Z","lastTransitionTime":"2025-11-21T09:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.426158 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.426219 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.426233 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.426254 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.426268 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:35Z","lastTransitionTime":"2025-11-21T09:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.529367 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.529432 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.529445 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.529460 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.529474 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:35Z","lastTransitionTime":"2025-11-21T09:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.632004 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.632079 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.632120 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.632147 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.632159 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:35Z","lastTransitionTime":"2025-11-21T09:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.735668 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.735717 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.735729 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.735747 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.735759 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:35Z","lastTransitionTime":"2025-11-21T09:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.758653 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.758727 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.758855 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:35 crc kubenswrapper[4972]: E1121 09:42:35.758823 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.758875 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:35 crc kubenswrapper[4972]: E1121 09:42:35.758965 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:35 crc kubenswrapper[4972]: E1121 09:42:35.759119 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:35 crc kubenswrapper[4972]: E1121 09:42:35.759194 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.774659 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.783709 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.804148 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.822864 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab387621e56bc83e169d01839eb5cb84f8a9a80982365563d571d509d705f567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.838512 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.838556 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.838570 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.838591 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.838605 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:35Z","lastTransitionTime":"2025-11-21T09:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.841894 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.848143 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs\") pod \"network-metrics-daemon-k9mnh\" (UID: \"df5e96f4-727c-44c1-8e2f-e624c912430b\") " pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:35 crc kubenswrapper[4972]: E1121 09:42:35.848284 4972 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 09:42:35 crc kubenswrapper[4972]: E1121 09:42:35.848343 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs podName:df5e96f4-727c-44c1-8e2f-e624c912430b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:39.848327686 +0000 UTC m=+164.957470194 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs") pod "network-metrics-daemon-k9mnh" (UID: "df5e96f4-727c-44c1-8e2f-e624c912430b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.856038 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df5e96f4-727c-44c1-8e2f-e624c912430b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:35Z is after 
2025-08-24T17:21:41Z" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.876267 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16486305-de51-4df2-87d8-067ec44e2be1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec31e42f5315297235d695bc3addca88a36f1954b0be865d1d85778baddf48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2c7bc4c2de1efcc1678fb37b9689b1b407c0cf5bdbfed608b4828edf84eb67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed0507b0a70ab9ee85c70ce9488aa18ba8b409db687041dcca91d0fd25b334c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bcd41745293ac57de36f07adc200e164ea5e72f0b26f40b164afad58778985\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.900679 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5971d3-55cc-43d2-a604-149eeb23f1e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a86ccbb1baf34cbf5dce16fba61ef5c94975ddca7d6e68690a9974eddf4f04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1436deb04fc6a4f7f3dc8b57a31a55b205c7500c5dfa993ce4da456ad9ba6fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30691231d386a56088e48be10b7d6694c434a0ac8f3a8a3bf6a825b5776603e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6f5bee217bc489c11185847d28c9d8b6d9b5a45fe9a084dce9f07b4ec9d17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f386851f4872e6242cd91744d6c2ef310cc8ee8822c3e0c5dd94a7b4a70bf64a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e372fa46bec16678c6ff5398b089a961e24f9684c580d02668fc1b9620385a66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33af99d260514dd85546f4829ae8e05615c0a2b88a5bf0b7998e01595a03988d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k727p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z4gd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.919579 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d92edf87307c0edef79ffc2c2963b2c9600862340f1d2147867034dfc3e4486e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ztjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9l6cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.936074 4972 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bgtmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4929f7-ed2f-4332-af3c-31b2333bda3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb23c96662a648e35c4f92c6c695ad3b57dc5fb40f72efdad7a6a2910907a9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:42:11Z\\\",\\\"message\\\":\\\"2025-11-21T09:41:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4753cca9-ce1b-4d13-8580-1b908ffbc7a9\\\\n2025-11-21T09:41:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4753cca9-ce1b-4d13-8580-1b908ffbc7a9 to /host/opt/cni/bin/\\\\n2025-11-21T09:41:25Z [verbose] multus-daemon started\\\\n2025-11-21T09:41:25Z [verbose] Readiness Indicator file check\\\\n2025-11-21T09:42:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6tgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bgtmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.940679 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.940712 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.940772 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.940792 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.940822 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:35Z","lastTransitionTime":"2025-11-21T09:42:35Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.955090 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c2408e-823e-428e-bd4d-35ea57c639cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b80c5d93b01961df40dc7a26a30017dd3fe37e2fb2354e898c97ba22b75e13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f7ba3733a9be35b89ff4485ae276889a7c3037c6073eea416e4f047e9fed0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23f6b856b30285a6229ffd579cb2387bd290c3ea5d2e8d3948de3a34f327e90a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-
21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d577a086d8771d84ba9eef30f60dfda5f3e3b3973d2f2f5d106da15297313fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28a8b9470ebcfc8c201873dafd9fabf994e0d7a4be012905213d83d53818f3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"message\\\":\\\"_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 09:41:17.528668 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 09:41:17.528674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 09:41:17.528678 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 09:41:17.531061 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531118 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1121 09:41:17.531141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531170 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1121 09:41:17.531231 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1121 09:41:17.531230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1121 09:41:17.531282 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1121 09:41:17.531363 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1121 09:41:17.532223 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3911362738/tls.crt::/tmp/serving-cert-3911362738/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763718063\\\\\\\\\\\\\\\" (2025-11-21 09:41:02 +0000 UTC to 2025-12-21 09:41:03 +0000 UTC (now=2025-11-21 09:41:17.532197076 +0000 UTC))\\\\\\\"\\\\nF1121 09:41:17.532273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e918ff7abd38d5ac0127feed95e18c8f8c94afa8710510dae4bea6c8cf28f424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade938de71dfe88b1d3ef333da159428ab1e5535303dc6d9ec068157655ee5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.975769 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0254d0d97e8e199da32f3c722fa11dcc40548aa7a01268ff4895441a8e62d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d548d583d4ff257d1b4b3eea0d085db9560bc89d560e607d218fcc656684b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:35 crc kubenswrapper[4972]: I1121 09:42:35.993709 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01a00dfad916f368164c546a658845ab70eca63284c0ddf0ed88256de3a4c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:35Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.009573 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a546c25-18b2-417a-a58b-4017476895fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1764676765dddc3cf90df2685aeb525066d5779640aa40dbff44c19de04eebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86c15b9db2709ce51cb6ad39709d8575abe1e03e8524b8fbff5b65bf44f8aa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jcfcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:36Z is after 2025-08-24T17:21:41Z" Nov 21 
09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.028760 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3201535-914d-45a5-bd2d-2d9e3d1b89ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2569e939b8254ed8f0c255ea14a65d7c4cfa4491a1d00722abd9e4412e29334c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e3dabfde6cfa4ac43cf07090dd319e83e402676216af847178710306ab8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9915311e4e9cae479e53ac0cf1243560d110dcfe1abc366ce37281d49e294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b3107e17d4076715cfb147b70c0caa2f49f5a1a7d744b3b2e38ee58bdcb0654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:36Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.042729 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.042914 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.042932 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.042950 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.042962 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:36Z","lastTransitionTime":"2025-11-21T09:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.045366 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grwbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6baa0cbc-fe21-4bda-8e20-505496c26832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16acc6b123728e10b0903786b5f370dff41f17377c5a1af99d06b7fe18152787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttvmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grwbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:36Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.072318 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c159725e-4c82-4474-96d9-211f7d8db47f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e23d6219850069f682ce4b9af445532fdaaeb189b232f8e72a0d92b53c755ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e23d6219850069f682ce4b9af445532fdaaeb189b232f8e72a0d92b53c755ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T09:42:27Z\\\",\\\"message\\\":\\\"5] Adding new object: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI1121 09:42:27.387094 7075 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc in node crc\\\\nI1121 09:42:27.387100 7075 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc after 0 failed attempt(s)\\\\nI1121 09:42:27.387245 7075 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI1121 09:42:27.387136 7075 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-bxwhb\\\\nI1121 09:42:27.387298 7075 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-bxwhb\\\\nI1121 09:42:27.387304 7075 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-bxwhb in node crc\\\\nI1121 09:42:27.387309 7075 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-bxwhb after 0 failed attempt(s)\\\\nI1121 09:42:27.387313 7075 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-bxwhb\\\\nF1121 09:42:27.387142 7075 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T09:42:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxwhb_openshift-ovn-kubernetes(c159725e-4c82-4474-96d9-211f7d8db47f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:41:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg8k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxwhb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:36Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.086610 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h79hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455db960-eb74-4f4e-b297-b06c4d32009a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe17f7cfb6a8ed867964b6dbc5f783ed5c7edd59e4acd59ae4cf8a8a5e8bf0ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jt9l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:41:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h79hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:36Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.100058 4972 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30325c44-ba7b-46ae-8a97-6b61aa169366\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T09:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e8457b59ef21238dc544bad22e50462262f2a8dccb77f227e3b71c0e42a00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T09:40:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938a1fee241c93da248aac18b241cd7b41fcfbf591796cc6e9e46231d5638f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T09:40:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T09:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T09:40:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T09:42:36Z is after 2025-08-24T17:21:41Z" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.146537 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.146591 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.146610 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.146634 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.146652 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:36Z","lastTransitionTime":"2025-11-21T09:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.249263 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.249299 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.249309 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.249322 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.249330 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:36Z","lastTransitionTime":"2025-11-21T09:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.351441 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.351483 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.351491 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.351508 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.351517 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:36Z","lastTransitionTime":"2025-11-21T09:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.454554 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.454618 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.454632 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.454652 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.454668 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:36Z","lastTransitionTime":"2025-11-21T09:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.557213 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.557262 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.557275 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.557290 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.557300 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:36Z","lastTransitionTime":"2025-11-21T09:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.660229 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.660289 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.660309 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.660335 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.660352 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:36Z","lastTransitionTime":"2025-11-21T09:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.764398 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.764479 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.764503 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.764534 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.764631 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:36Z","lastTransitionTime":"2025-11-21T09:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.867943 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.868003 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.868019 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.868049 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.868067 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:36Z","lastTransitionTime":"2025-11-21T09:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.971433 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.971534 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.971562 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.971590 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:36 crc kubenswrapper[4972]: I1121 09:42:36.971607 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:36Z","lastTransitionTime":"2025-11-21T09:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.074336 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.074394 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.074408 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.074428 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.074444 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:37Z","lastTransitionTime":"2025-11-21T09:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.177677 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.177738 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.177754 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.177775 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.177794 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:37Z","lastTransitionTime":"2025-11-21T09:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.261789 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.261888 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.261906 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.261931 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.261953 4972 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T09:42:37Z","lastTransitionTime":"2025-11-21T09:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.325704 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lxjp"] Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.326748 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lxjp" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.329428 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.329474 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.329510 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.329814 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.358570 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.358552166 podStartE2EDuration="2.358552166s" podCreationTimestamp="2025-11-21 09:42:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:42:37.356086768 +0000 UTC m=+102.465229336" watchObservedRunningTime="2025-11-21 09:42:37.358552166 +0000 UTC m=+102.467694684" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.386465 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=80.386449423 podStartE2EDuration="1m20.386449423s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:42:37.372088388 +0000 UTC m=+102.481230986" watchObservedRunningTime="2025-11-21 09:42:37.386449423 +0000 UTC m=+102.495591921" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 
09:42:37.430826 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jcfcl" podStartSLOduration=79.430807013 podStartE2EDuration="1m19.430807013s" podCreationTimestamp="2025-11-21 09:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:42:37.416476919 +0000 UTC m=+102.525619437" watchObservedRunningTime="2025-11-21 09:42:37.430807013 +0000 UTC m=+102.539949511" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.431124 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=35.431119331 podStartE2EDuration="35.431119331s" podCreationTimestamp="2025-11-21 09:42:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:42:37.429761794 +0000 UTC m=+102.538904302" watchObservedRunningTime="2025-11-21 09:42:37.431119331 +0000 UTC m=+102.540261829" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.442348 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=47.442330949 podStartE2EDuration="47.442330949s" podCreationTimestamp="2025-11-21 09:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:42:37.442251777 +0000 UTC m=+102.551394295" watchObservedRunningTime="2025-11-21 09:42:37.442330949 +0000 UTC m=+102.551473447" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.453812 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-grwbs" podStartSLOduration=80.453794625 podStartE2EDuration="1m20.453794625s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:42:37.453592609 +0000 UTC m=+102.562735117" watchObservedRunningTime="2025-11-21 09:42:37.453794625 +0000 UTC m=+102.562937123" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.468819 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3ec53856-e488-4632-9c10-59ed336a1911-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5lxjp\" (UID: \"3ec53856-e488-4632-9c10-59ed336a1911\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lxjp" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.468902 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ec53856-e488-4632-9c10-59ed336a1911-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5lxjp\" (UID: \"3ec53856-e488-4632-9c10-59ed336a1911\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lxjp" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.468963 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3ec53856-e488-4632-9c10-59ed336a1911-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5lxjp\" (UID: 
\"3ec53856-e488-4632-9c10-59ed336a1911\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lxjp" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.469034 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ec53856-e488-4632-9c10-59ed336a1911-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5lxjp\" (UID: \"3ec53856-e488-4632-9c10-59ed336a1911\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lxjp" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.469082 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3ec53856-e488-4632-9c10-59ed336a1911-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5lxjp\" (UID: \"3ec53856-e488-4632-9c10-59ed336a1911\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lxjp" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.495161 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-h79hr" podStartSLOduration=80.495141621 podStartE2EDuration="1m20.495141621s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:42:37.483928963 +0000 UTC m=+102.593071471" watchObservedRunningTime="2025-11-21 09:42:37.495141621 +0000 UTC m=+102.604284109" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.495505 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=80.495500021 podStartE2EDuration="1m20.495500021s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:42:37.495342077 +0000 UTC m=+102.604484575" watchObservedRunningTime="2025-11-21 09:42:37.495500021 +0000 UTC m=+102.604642519" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.570536 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3ec53856-e488-4632-9c10-59ed336a1911-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5lxjp\" (UID: \"3ec53856-e488-4632-9c10-59ed336a1911\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lxjp" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.570601 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ec53856-e488-4632-9c10-59ed336a1911-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5lxjp\" (UID: \"3ec53856-e488-4632-9c10-59ed336a1911\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lxjp" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.570664 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3ec53856-e488-4632-9c10-59ed336a1911-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5lxjp\" (UID: \"3ec53856-e488-4632-9c10-59ed336a1911\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lxjp" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.570714 4972 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ec53856-e488-4632-9c10-59ed336a1911-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5lxjp\" (UID: \"3ec53856-e488-4632-9c10-59ed336a1911\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lxjp" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.570762 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3ec53856-e488-4632-9c10-59ed336a1911-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5lxjp\" (UID: \"3ec53856-e488-4632-9c10-59ed336a1911\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lxjp" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.570869 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3ec53856-e488-4632-9c10-59ed336a1911-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5lxjp\" (UID: \"3ec53856-e488-4632-9c10-59ed336a1911\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lxjp" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.570910 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3ec53856-e488-4632-9c10-59ed336a1911-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5lxjp\" (UID: \"3ec53856-e488-4632-9c10-59ed336a1911\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lxjp" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.571404 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3ec53856-e488-4632-9c10-59ed336a1911-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5lxjp\" (UID: \"3ec53856-e488-4632-9c10-59ed336a1911\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lxjp" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.589556 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ec53856-e488-4632-9c10-59ed336a1911-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5lxjp\" (UID: \"3ec53856-e488-4632-9c10-59ed336a1911\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lxjp" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.603099 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ec53856-e488-4632-9c10-59ed336a1911-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5lxjp\" (UID: \"3ec53856-e488-4632-9c10-59ed336a1911\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lxjp" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.606329 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-bgtmb" podStartSLOduration=80.606309158 podStartE2EDuration="1m20.606309158s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:42:37.58966582 +0000 UTC m=+102.698808358" watchObservedRunningTime="2025-11-21 09:42:37.606309158 +0000 UTC m=+102.715451666" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.621347 4972 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-z4gd8" podStartSLOduration=80.62131593 podStartE2EDuration="1m20.62131593s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:42:37.60602982 +0000 UTC m=+102.715172338" watchObservedRunningTime="2025-11-21 09:42:37.62131593 +0000 UTC m=+102.730458458" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.621614 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podStartSLOduration=80.621607368 podStartE2EDuration="1m20.621607368s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:42:37.619820579 +0000 UTC m=+102.728963097" watchObservedRunningTime="2025-11-21 09:42:37.621607368 +0000 UTC m=+102.730749906" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.648128 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lxjp" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.759290 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.759316 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:37 crc kubenswrapper[4972]: E1121 09:42:37.759457 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.759316 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:37 crc kubenswrapper[4972]: E1121 09:42:37.759549 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:37 crc kubenswrapper[4972]: E1121 09:42:37.759633 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:37 crc kubenswrapper[4972]: I1121 09:42:37.759769 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:37 crc kubenswrapper[4972]: E1121 09:42:37.759933 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:38 crc kubenswrapper[4972]: I1121 09:42:38.400665 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lxjp" event={"ID":"3ec53856-e488-4632-9c10-59ed336a1911","Type":"ContainerStarted","Data":"c962966bcd6577ea307907d584e7c3e70aa46fd55fa415e2d197f51fe3ca4b66"} Nov 21 09:42:38 crc kubenswrapper[4972]: I1121 09:42:38.400742 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lxjp" event={"ID":"3ec53856-e488-4632-9c10-59ed336a1911","Type":"ContainerStarted","Data":"f2b213f74fbbd6b83af7bfec546ff5cadde4ee360a8f81ea0431c655b6c910ca"} Nov 21 09:42:38 crc kubenswrapper[4972]: I1121 09:42:38.418977 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5lxjp" podStartSLOduration=81.41895581 podStartE2EDuration="1m21.41895581s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:42:38.41786814 +0000 UTC m=+103.527010638" watchObservedRunningTime="2025-11-21 09:42:38.41895581 +0000 UTC m=+103.528098308" Nov 21 09:42:39 crc kubenswrapper[4972]: I1121 09:42:39.759552 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:39 crc kubenswrapper[4972]: I1121 09:42:39.759706 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:39 crc kubenswrapper[4972]: E1121 09:42:39.760003 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:39 crc kubenswrapper[4972]: I1121 09:42:39.760043 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:39 crc kubenswrapper[4972]: I1121 09:42:39.760036 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:39 crc kubenswrapper[4972]: I1121 09:42:39.761326 4972 scope.go:117] "RemoveContainer" containerID="6e23d6219850069f682ce4b9af445532fdaaeb189b232f8e72a0d92b53c755ff" Nov 21 09:42:39 crc kubenswrapper[4972]: E1121 09:42:39.761715 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bxwhb_openshift-ovn-kubernetes(c159725e-4c82-4474-96d9-211f7d8db47f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" Nov 21 09:42:39 crc kubenswrapper[4972]: E1121 09:42:39.762358 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:39 crc kubenswrapper[4972]: E1121 09:42:39.762546 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:39 crc kubenswrapper[4972]: E1121 09:42:39.762785 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:41 crc kubenswrapper[4972]: I1121 09:42:41.759087 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:41 crc kubenswrapper[4972]: I1121 09:42:41.759133 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:41 crc kubenswrapper[4972]: I1121 09:42:41.759189 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:41 crc kubenswrapper[4972]: I1121 09:42:41.759101 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:41 crc kubenswrapper[4972]: E1121 09:42:41.759244 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:41 crc kubenswrapper[4972]: E1121 09:42:41.759334 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:41 crc kubenswrapper[4972]: E1121 09:42:41.759426 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:41 crc kubenswrapper[4972]: E1121 09:42:41.759514 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:43 crc kubenswrapper[4972]: I1121 09:42:43.759171 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:43 crc kubenswrapper[4972]: I1121 09:42:43.759205 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:43 crc kubenswrapper[4972]: I1121 09:42:43.759210 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:43 crc kubenswrapper[4972]: I1121 09:42:43.759258 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:43 crc kubenswrapper[4972]: E1121 09:42:43.760073 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:43 crc kubenswrapper[4972]: E1121 09:42:43.760236 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:43 crc kubenswrapper[4972]: E1121 09:42:43.760336 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:43 crc kubenswrapper[4972]: E1121 09:42:43.760543 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:45 crc kubenswrapper[4972]: I1121 09:42:45.759308 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:45 crc kubenswrapper[4972]: I1121 09:42:45.759313 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:45 crc kubenswrapper[4972]: I1121 09:42:45.759334 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:45 crc kubenswrapper[4972]: I1121 09:42:45.759393 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:45 crc kubenswrapper[4972]: E1121 09:42:45.761257 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:45 crc kubenswrapper[4972]: E1121 09:42:45.761400 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:45 crc kubenswrapper[4972]: E1121 09:42:45.761464 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:45 crc kubenswrapper[4972]: E1121 09:42:45.761513 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:47 crc kubenswrapper[4972]: I1121 09:42:47.758811 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:47 crc kubenswrapper[4972]: I1121 09:42:47.758915 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:47 crc kubenswrapper[4972]: I1121 09:42:47.758930 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:47 crc kubenswrapper[4972]: I1121 09:42:47.759002 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:47 crc kubenswrapper[4972]: E1121 09:42:47.763640 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:47 crc kubenswrapper[4972]: E1121 09:42:47.764557 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:47 crc kubenswrapper[4972]: E1121 09:42:47.764960 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:47 crc kubenswrapper[4972]: E1121 09:42:47.765070 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:49 crc kubenswrapper[4972]: I1121 09:42:49.759176 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:49 crc kubenswrapper[4972]: E1121 09:42:49.759392 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:49 crc kubenswrapper[4972]: I1121 09:42:49.759482 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:49 crc kubenswrapper[4972]: I1121 09:42:49.759545 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:49 crc kubenswrapper[4972]: I1121 09:42:49.759531 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:49 crc kubenswrapper[4972]: E1121 09:42:49.759678 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:49 crc kubenswrapper[4972]: E1121 09:42:49.759775 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:49 crc kubenswrapper[4972]: E1121 09:42:49.759904 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:51 crc kubenswrapper[4972]: I1121 09:42:51.758725 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:51 crc kubenswrapper[4972]: I1121 09:42:51.758914 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:51 crc kubenswrapper[4972]: E1121 09:42:51.758929 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:51 crc kubenswrapper[4972]: I1121 09:42:51.758987 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:51 crc kubenswrapper[4972]: I1121 09:42:51.759025 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:51 crc kubenswrapper[4972]: E1121 09:42:51.759178 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:51 crc kubenswrapper[4972]: E1121 09:42:51.760055 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:51 crc kubenswrapper[4972]: E1121 09:42:51.760244 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:51 crc kubenswrapper[4972]: I1121 09:42:51.760962 4972 scope.go:117] "RemoveContainer" containerID="6e23d6219850069f682ce4b9af445532fdaaeb189b232f8e72a0d92b53c755ff" Nov 21 09:42:51 crc kubenswrapper[4972]: E1121 09:42:51.761304 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bxwhb_openshift-ovn-kubernetes(c159725e-4c82-4474-96d9-211f7d8db47f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" Nov 21 09:42:53 crc kubenswrapper[4972]: I1121 09:42:53.759077 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:53 crc kubenswrapper[4972]: E1121 09:42:53.759214 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:53 crc kubenswrapper[4972]: I1121 09:42:53.759378 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:53 crc kubenswrapper[4972]: I1121 09:42:53.759434 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:53 crc kubenswrapper[4972]: E1121 09:42:53.759543 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:53 crc kubenswrapper[4972]: E1121 09:42:53.759809 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:53 crc kubenswrapper[4972]: I1121 09:42:53.759954 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:53 crc kubenswrapper[4972]: E1121 09:42:53.760030 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:55 crc kubenswrapper[4972]: E1121 09:42:55.737089 4972 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 21 09:42:55 crc kubenswrapper[4972]: I1121 09:42:55.758987 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:55 crc kubenswrapper[4972]: I1121 09:42:55.759278 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:55 crc kubenswrapper[4972]: I1121 09:42:55.759337 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:55 crc kubenswrapper[4972]: I1121 09:42:55.759614 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:55 crc kubenswrapper[4972]: E1121 09:42:55.759589 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:55 crc kubenswrapper[4972]: E1121 09:42:55.759916 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:55 crc kubenswrapper[4972]: E1121 09:42:55.761649 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:55 crc kubenswrapper[4972]: E1121 09:42:55.761823 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:55 crc kubenswrapper[4972]: E1121 09:42:55.956406 4972 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 21 09:42:57 crc kubenswrapper[4972]: I1121 09:42:57.463784 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bgtmb_ff4929f7-ed2f-4332-af3c-31b2333bda3d/kube-multus/1.log" Nov 21 09:42:57 crc kubenswrapper[4972]: I1121 09:42:57.464373 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bgtmb_ff4929f7-ed2f-4332-af3c-31b2333bda3d/kube-multus/0.log" Nov 21 09:42:57 crc kubenswrapper[4972]: I1121 09:42:57.464436 4972 generic.go:334] "Generic (PLEG): container finished" podID="ff4929f7-ed2f-4332-af3c-31b2333bda3d" containerID="cb23c96662a648e35c4f92c6c695ad3b57dc5fb40f72efdad7a6a2910907a9ce" exitCode=1 Nov 21 09:42:57 crc kubenswrapper[4972]: I1121 09:42:57.464473 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bgtmb" event={"ID":"ff4929f7-ed2f-4332-af3c-31b2333bda3d","Type":"ContainerDied","Data":"cb23c96662a648e35c4f92c6c695ad3b57dc5fb40f72efdad7a6a2910907a9ce"} Nov 21 09:42:57 crc kubenswrapper[4972]: I1121 09:42:57.464524 4972 scope.go:117] "RemoveContainer" containerID="a081a37b0d08094eedaaaf29ae344b7eb5b06f0e5a231ad7751c7371ff9d50cc" Nov 21 09:42:57 crc kubenswrapper[4972]: I1121 09:42:57.465008 4972 scope.go:117] "RemoveContainer" containerID="cb23c96662a648e35c4f92c6c695ad3b57dc5fb40f72efdad7a6a2910907a9ce" Nov 21 09:42:57 crc kubenswrapper[4972]: E1121 09:42:57.465189 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-bgtmb_openshift-multus(ff4929f7-ed2f-4332-af3c-31b2333bda3d)\"" pod="openshift-multus/multus-bgtmb" podUID="ff4929f7-ed2f-4332-af3c-31b2333bda3d" Nov 21 09:42:57 crc kubenswrapper[4972]: I1121 09:42:57.758532 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:57 crc kubenswrapper[4972]: I1121 09:42:57.758699 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:57 crc kubenswrapper[4972]: I1121 09:42:57.758803 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:57 crc kubenswrapper[4972]: I1121 09:42:57.758933 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:57 crc kubenswrapper[4972]: E1121 09:42:57.759079 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:57 crc kubenswrapper[4972]: E1121 09:42:57.759215 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:57 crc kubenswrapper[4972]: E1121 09:42:57.759304 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:42:57 crc kubenswrapper[4972]: E1121 09:42:57.758815 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:58 crc kubenswrapper[4972]: I1121 09:42:58.470490 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bgtmb_ff4929f7-ed2f-4332-af3c-31b2333bda3d/kube-multus/1.log" Nov 21 09:42:59 crc kubenswrapper[4972]: I1121 09:42:59.759258 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:42:59 crc kubenswrapper[4972]: I1121 09:42:59.759336 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:42:59 crc kubenswrapper[4972]: E1121 09:42:59.759496 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:42:59 crc kubenswrapper[4972]: I1121 09:42:59.759556 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:42:59 crc kubenswrapper[4972]: E1121 09:42:59.759667 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:42:59 crc kubenswrapper[4972]: E1121 09:42:59.759970 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:42:59 crc kubenswrapper[4972]: I1121 09:42:59.760586 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:42:59 crc kubenswrapper[4972]: E1121 09:42:59.760954 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:43:00 crc kubenswrapper[4972]: E1121 09:43:00.957477 4972 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 09:43:01 crc kubenswrapper[4972]: I1121 09:43:01.759130 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:43:01 crc kubenswrapper[4972]: I1121 09:43:01.759259 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:43:01 crc kubenswrapper[4972]: I1121 09:43:01.759301 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:43:01 crc kubenswrapper[4972]: E1121 09:43:01.759425 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:43:01 crc kubenswrapper[4972]: I1121 09:43:01.759468 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:43:01 crc kubenswrapper[4972]: E1121 09:43:01.759702 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:43:01 crc kubenswrapper[4972]: E1121 09:43:01.759793 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:43:01 crc kubenswrapper[4972]: E1121 09:43:01.759938 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:43:03 crc kubenswrapper[4972]: I1121 09:43:03.759242 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:43:03 crc kubenswrapper[4972]: E1121 09:43:03.759514 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:43:03 crc kubenswrapper[4972]: I1121 09:43:03.761409 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:43:03 crc kubenswrapper[4972]: E1121 09:43:03.761534 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:43:03 crc kubenswrapper[4972]: I1121 09:43:03.761809 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:43:03 crc kubenswrapper[4972]: I1121 09:43:03.761899 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:43:03 crc kubenswrapper[4972]: E1121 09:43:03.761979 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:43:03 crc kubenswrapper[4972]: E1121 09:43:03.762185 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:43:05 crc kubenswrapper[4972]: I1121 09:43:05.758787 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:43:05 crc kubenswrapper[4972]: I1121 09:43:05.758864 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:43:05 crc kubenswrapper[4972]: I1121 09:43:05.758802 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:43:05 crc kubenswrapper[4972]: I1121 09:43:05.758783 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:43:05 crc kubenswrapper[4972]: E1121 09:43:05.761315 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:43:05 crc kubenswrapper[4972]: E1121 09:43:05.761153 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:43:05 crc kubenswrapper[4972]: E1121 09:43:05.761921 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:43:05 crc kubenswrapper[4972]: E1121 09:43:05.762380 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:43:05 crc kubenswrapper[4972]: E1121 09:43:05.959022 4972 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 09:43:06 crc kubenswrapper[4972]: I1121 09:43:06.760701 4972 scope.go:117] "RemoveContainer" containerID="6e23d6219850069f682ce4b9af445532fdaaeb189b232f8e72a0d92b53c755ff" Nov 21 09:43:06 crc kubenswrapper[4972]: E1121 09:43:06.762043 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bxwhb_openshift-ovn-kubernetes(c159725e-4c82-4474-96d9-211f7d8db47f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" Nov 21 09:43:07 crc kubenswrapper[4972]: I1121 09:43:07.759395 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:43:07 crc kubenswrapper[4972]: I1121 09:43:07.759547 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:43:07 crc kubenswrapper[4972]: I1121 09:43:07.759765 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:43:07 crc kubenswrapper[4972]: E1121 09:43:07.759743 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:43:07 crc kubenswrapper[4972]: E1121 09:43:07.759913 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:43:07 crc kubenswrapper[4972]: E1121 09:43:07.759990 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:43:07 crc kubenswrapper[4972]: I1121 09:43:07.760281 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:43:07 crc kubenswrapper[4972]: E1121 09:43:07.760575 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:43:09 crc kubenswrapper[4972]: I1121 09:43:09.758552 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:43:09 crc kubenswrapper[4972]: I1121 09:43:09.758631 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:43:09 crc kubenswrapper[4972]: I1121 09:43:09.758552 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:43:09 crc kubenswrapper[4972]: E1121 09:43:09.758762 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:43:09 crc kubenswrapper[4972]: E1121 09:43:09.758956 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:43:09 crc kubenswrapper[4972]: E1121 09:43:09.759116 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:43:09 crc kubenswrapper[4972]: I1121 09:43:09.760004 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:43:09 crc kubenswrapper[4972]: E1121 09:43:09.760243 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:43:10 crc kubenswrapper[4972]: I1121 09:43:10.759952 4972 scope.go:117] "RemoveContainer" containerID="cb23c96662a648e35c4f92c6c695ad3b57dc5fb40f72efdad7a6a2910907a9ce" Nov 21 09:43:10 crc kubenswrapper[4972]: E1121 09:43:10.960864 4972 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 09:43:11 crc kubenswrapper[4972]: I1121 09:43:11.521218 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bgtmb_ff4929f7-ed2f-4332-af3c-31b2333bda3d/kube-multus/1.log" Nov 21 09:43:11 crc kubenswrapper[4972]: I1121 09:43:11.521282 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bgtmb" event={"ID":"ff4929f7-ed2f-4332-af3c-31b2333bda3d","Type":"ContainerStarted","Data":"3969f599ad79be6d19471b6566e5f9148e3b59684d5ab5f5dd36490f3ad850ce"} Nov 21 09:43:11 crc kubenswrapper[4972]: I1121 09:43:11.759522 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:43:11 crc kubenswrapper[4972]: I1121 09:43:11.759583 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:43:11 crc kubenswrapper[4972]: I1121 09:43:11.759621 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:43:11 crc kubenswrapper[4972]: E1121 09:43:11.760275 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:43:11 crc kubenswrapper[4972]: E1121 09:43:11.759682 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:43:11 crc kubenswrapper[4972]: E1121 09:43:11.759891 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:43:11 crc kubenswrapper[4972]: I1121 09:43:11.759522 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:43:11 crc kubenswrapper[4972]: E1121 09:43:11.760370 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:43:13 crc kubenswrapper[4972]: I1121 09:43:13.759295 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:43:13 crc kubenswrapper[4972]: I1121 09:43:13.759343 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:43:13 crc kubenswrapper[4972]: E1121 09:43:13.759510 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:43:13 crc kubenswrapper[4972]: I1121 09:43:13.759613 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:43:13 crc kubenswrapper[4972]: I1121 09:43:13.759696 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:43:13 crc kubenswrapper[4972]: E1121 09:43:13.759883 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:43:13 crc kubenswrapper[4972]: E1121 09:43:13.760296 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:43:13 crc kubenswrapper[4972]: E1121 09:43:13.761473 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:43:15 crc kubenswrapper[4972]: I1121 09:43:15.758515 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:43:15 crc kubenswrapper[4972]: I1121 09:43:15.758592 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:43:15 crc kubenswrapper[4972]: E1121 09:43:15.759668 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:43:15 crc kubenswrapper[4972]: I1121 09:43:15.759772 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:43:15 crc kubenswrapper[4972]: I1121 09:43:15.759820 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:43:15 crc kubenswrapper[4972]: E1121 09:43:15.760050 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:43:15 crc kubenswrapper[4972]: E1121 09:43:15.760342 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:43:15 crc kubenswrapper[4972]: E1121 09:43:15.760542 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:43:15 crc kubenswrapper[4972]: E1121 09:43:15.961415 4972 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 09:43:17 crc kubenswrapper[4972]: I1121 09:43:17.758690 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:43:17 crc kubenswrapper[4972]: I1121 09:43:17.758719 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:43:17 crc kubenswrapper[4972]: I1121 09:43:17.758804 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:43:17 crc kubenswrapper[4972]: E1121 09:43:17.758894 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:43:17 crc kubenswrapper[4972]: I1121 09:43:17.758958 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:43:17 crc kubenswrapper[4972]: E1121 09:43:17.759604 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:43:17 crc kubenswrapper[4972]: E1121 09:43:17.759740 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:43:17 crc kubenswrapper[4972]: E1121 09:43:17.759933 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:43:19 crc kubenswrapper[4972]: I1121 09:43:19.759403 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:43:19 crc kubenswrapper[4972]: I1121 09:43:19.759424 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:43:19 crc kubenswrapper[4972]: I1121 09:43:19.759498 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:43:19 crc kubenswrapper[4972]: I1121 09:43:19.759966 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:43:19 crc kubenswrapper[4972]: E1121 09:43:19.760102 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:43:19 crc kubenswrapper[4972]: E1121 09:43:19.760248 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:43:19 crc kubenswrapper[4972]: E1121 09:43:19.760407 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:43:19 crc kubenswrapper[4972]: E1121 09:43:19.760463 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:43:19 crc kubenswrapper[4972]: I1121 09:43:19.760935 4972 scope.go:117] "RemoveContainer" containerID="6e23d6219850069f682ce4b9af445532fdaaeb189b232f8e72a0d92b53c755ff" Nov 21 09:43:20 crc kubenswrapper[4972]: I1121 09:43:20.553709 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxwhb_c159725e-4c82-4474-96d9-211f7d8db47f/ovnkube-controller/3.log" Nov 21 09:43:20 crc kubenswrapper[4972]: I1121 09:43:20.558723 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerStarted","Data":"500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346"} Nov 21 09:43:20 crc kubenswrapper[4972]: I1121 09:43:20.559377 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:43:20 crc kubenswrapper[4972]: I1121 09:43:20.571402 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-k9mnh"] Nov 21 09:43:20 crc kubenswrapper[4972]: I1121 09:43:20.571512 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:43:20 crc kubenswrapper[4972]: E1121 09:43:20.571598 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:43:20 crc kubenswrapper[4972]: I1121 09:43:20.593183 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" podStartSLOduration=123.593163105 podStartE2EDuration="2m3.593163105s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:20.592591339 +0000 UTC m=+145.701733867" watchObservedRunningTime="2025-11-21 09:43:20.593163105 +0000 UTC m=+145.702305603" Nov 21 09:43:20 crc kubenswrapper[4972]: E1121 09:43:20.962749 4972 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 09:43:21 crc kubenswrapper[4972]: I1121 09:43:21.759490 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:43:21 crc kubenswrapper[4972]: I1121 09:43:21.759543 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:43:21 crc kubenswrapper[4972]: I1121 09:43:21.759659 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:43:21 crc kubenswrapper[4972]: E1121 09:43:21.760116 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:43:21 crc kubenswrapper[4972]: E1121 09:43:21.759980 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:43:21 crc kubenswrapper[4972]: E1121 09:43:21.760291 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:43:22 crc kubenswrapper[4972]: I1121 09:43:22.759227 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:43:22 crc kubenswrapper[4972]: E1121 09:43:22.759412 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:43:23 crc kubenswrapper[4972]: I1121 09:43:23.759201 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:43:23 crc kubenswrapper[4972]: I1121 09:43:23.759363 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:43:23 crc kubenswrapper[4972]: I1121 09:43:23.759236 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:43:23 crc kubenswrapper[4972]: E1121 09:43:23.759485 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:43:23 crc kubenswrapper[4972]: E1121 09:43:23.759616 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:43:23 crc kubenswrapper[4972]: E1121 09:43:23.759693 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:43:24 crc kubenswrapper[4972]: I1121 09:43:24.758733 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:43:24 crc kubenswrapper[4972]: E1121 09:43:24.758956 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9mnh" podUID="df5e96f4-727c-44c1-8e2f-e624c912430b" Nov 21 09:43:25 crc kubenswrapper[4972]: I1121 09:43:25.622086 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:25 crc kubenswrapper[4972]: E1121 09:43:25.622300 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:45:27.622284726 +0000 UTC m=+272.731427224 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:25 crc kubenswrapper[4972]: I1121 09:43:25.723181 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:43:25 crc kubenswrapper[4972]: I1121 09:43:25.723250 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:43:25 crc kubenswrapper[4972]: I1121 09:43:25.723306 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:43:25 crc kubenswrapper[4972]: I1121 09:43:25.723347 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:43:25 crc kubenswrapper[4972]: E1121 09:43:25.723438 4972 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 09:43:25 crc kubenswrapper[4972]: E1121 09:43:25.723462 4972 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 09:43:25 crc kubenswrapper[4972]: E1121 09:43:25.723535 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 09:45:27.72350953 +0000 UTC m=+272.832652068 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 09:43:25 crc kubenswrapper[4972]: E1121 09:43:25.723539 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 09:43:25 crc kubenswrapper[4972]: E1121 09:43:25.723567 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 09:45:27.723552681 +0000 UTC m=+272.832695219 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 09:43:25 crc kubenswrapper[4972]: E1121 09:43:25.723573 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 09:43:25 crc kubenswrapper[4972]: E1121 09:43:25.723597 4972 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:43:25 crc kubenswrapper[4972]: E1121 09:43:25.723637 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 09:43:25 crc kubenswrapper[4972]: E1121 09:43:25.723672 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 09:45:27.723650114 +0000 UTC m=+272.832792652 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:43:25 crc kubenswrapper[4972]: E1121 09:43:25.723674 4972 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 09:43:25 crc kubenswrapper[4972]: E1121 09:43:25.723704 4972 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:43:25 crc kubenswrapper[4972]: E1121 09:43:25.723745 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 09:45:27.723733406 +0000 UTC m=+272.832875944 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 09:43:25 crc kubenswrapper[4972]: I1121 09:43:25.759430 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:43:25 crc kubenswrapper[4972]: E1121 09:43:25.762002 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 09:43:25 crc kubenswrapper[4972]: I1121 09:43:25.762406 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:43:25 crc kubenswrapper[4972]: I1121 09:43:25.762486 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:43:25 crc kubenswrapper[4972]: E1121 09:43:25.762618 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 09:43:25 crc kubenswrapper[4972]: E1121 09:43:25.762740 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 09:43:26 crc kubenswrapper[4972]: I1121 09:43:26.178966 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 09:43:26 crc kubenswrapper[4972]: I1121 09:43:26.179069 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 09:43:26 crc kubenswrapper[4972]: I1121 09:43:26.759122 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:43:26 crc kubenswrapper[4972]: I1121 09:43:26.761957 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 21 09:43:26 crc kubenswrapper[4972]: I1121 09:43:26.766343 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 21 09:43:27 crc kubenswrapper[4972]: I1121 09:43:27.759446 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:43:27 crc kubenswrapper[4972]: I1121 09:43:27.759527 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:43:27 crc kubenswrapper[4972]: I1121 09:43:27.759618 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:43:27 crc kubenswrapper[4972]: I1121 09:43:27.762801 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 21 09:43:27 crc kubenswrapper[4972]: I1121 09:43:27.762882 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 21 09:43:27 crc kubenswrapper[4972]: I1121 09:43:27.763052 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 21 09:43:27 crc kubenswrapper[4972]: I1121 09:43:27.763760 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.395236 4972 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.444111 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jdqq6"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.445737 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xxsnz"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.445909 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jdqq6" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.446402 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.448063 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.448266 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.448481 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.450728 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.451037 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.451273 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.451435 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.451582 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.455184 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.455382 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 21 09:43:28 crc 
kubenswrapper[4972]: I1121 09:43:28.456080 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.456282 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.456413 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.456536 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.457750 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.459461 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4nd8h"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.460322 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4nd8h" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.461121 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.461987 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.462434 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.471153 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.471849 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.472080 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.472138 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.472206 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.472306 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.472393 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.472531 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.472588 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.481042 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.485559 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.489351 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-lswwj"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.489984 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswwj" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.493966 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-n6wh5"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.511400 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.513197 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.513456 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.513629 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2sdbs"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.515690 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.520209 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.521150 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.521171 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.521458 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.521468 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.521679 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.521784 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.522074 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.522229 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.522263 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.522450 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.531888 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.532242 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-w2c2r"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.532296 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 21 09:43:28 crc 
kubenswrapper[4972]: I1121 09:43:28.532492 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fd2p7"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.532865 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fd2p7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.537142 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-b5tdm"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.538029 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.539909 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.540109 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.541390 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-swwr5"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.541581 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2sdbs" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.541699 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4nd8h"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.541725 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jdqq6"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.541738 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrxfz"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.542598 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-b5tdm" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.543255 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-n8mj7"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.543629 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.544047 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-swwr5" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.544318 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrxfz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.547117 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.548883 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-j7xxl"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.549389 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.550202 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/12c25e78-a24e-4962-8976-3bc097fdaaf6-node-pullsecrets\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.550601 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f82j\" (UniqueName: \"kubernetes.io/projected/3897d9bc-e576-4575-8451-10a0e3a73517-kube-api-access-4f82j\") pod \"machine-api-operator-5694c8668f-jdqq6\" (UID: \"3897d9bc-e576-4575-8451-10a0e3a73517\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdqq6" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.550671 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/12c25e78-a24e-4962-8976-3bc097fdaaf6-audit\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.550716 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/12c25e78-a24e-4962-8976-3bc097fdaaf6-etcd-serving-ca\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.550744 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3897d9bc-e576-4575-8451-10a0e3a73517-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jdqq6\" (UID: \"3897d9bc-e576-4575-8451-10a0e3a73517\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdqq6" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.550765 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12c25e78-a24e-4962-8976-3bc097fdaaf6-serving-cert\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.550789 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42v9w\" (UniqueName: \"kubernetes.io/projected/12c25e78-a24e-4962-8976-3bc097fdaaf6-kube-api-access-42v9w\") pod \"apiserver-76f77b778f-xxsnz\" (UID: 
\"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.551251 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3897d9bc-e576-4575-8451-10a0e3a73517-config\") pod \"machine-api-operator-5694c8668f-jdqq6\" (UID: \"3897d9bc-e576-4575-8451-10a0e3a73517\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdqq6" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.551272 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c25e78-a24e-4962-8976-3bc097fdaaf6-config\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.551302 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/12c25e78-a24e-4962-8976-3bc097fdaaf6-image-import-ca\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.551321 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/12c25e78-a24e-4962-8976-3bc097fdaaf6-etcd-client\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.551340 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/12c25e78-a24e-4962-8976-3bc097fdaaf6-encryption-config\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.551364 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3897d9bc-e576-4575-8451-10a0e3a73517-images\") pod \"machine-api-operator-5694c8668f-jdqq6\" (UID: \"3897d9bc-e576-4575-8451-10a0e3a73517\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdqq6" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.551382 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/12c25e78-a24e-4962-8976-3bc097fdaaf6-audit-dir\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.551401 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12c25e78-a24e-4962-8976-3bc097fdaaf6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.551929 4972 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffrcv"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.552439 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffrcv" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.553038 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.555407 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zdj2c"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.556138 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-zdj2c" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.558553 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5s9h7"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.559177 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.559181 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.559551 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.559779 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.560971 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.563037 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-77wn4"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.563860 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-77wn4" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.564143 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgd8m"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.565822 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vfq5m"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.567690 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgd8m" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.568741 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xxsnz"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.568823 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vfq5m" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.581608 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.581893 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.582020 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.582154 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.582355 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.582483 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.582598 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.582763 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.582892 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sgptm"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.583542 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kk9j4"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.585600 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sgptm" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.588311 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.588606 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.597031 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.597312 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.598562 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.598678 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.599093 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.599678 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.599930 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.600088 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.600245 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.600348 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.600456 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.600619 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.600718 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.600396 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.601271 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gmffc"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.602863 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9t6sj"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.603449 4972 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hlzb"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.603911 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sz4l8"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.604002 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9t6sj" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.601553 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.602251 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kk9j4" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.602057 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.603346 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.603393 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.603462 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.603607 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.603749 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.603807 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.603905 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.603958 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gmffc" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.606674 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hlzb" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.617406 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.618528 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.618682 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.618951 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.619231 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.619525 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.621398 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pm48b"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.622597 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-g9znh"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.627259 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gnpln"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.628242 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.629032 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pm48b" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.629075 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz4l8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.629107 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-g9znh" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.631989 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.629280 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gnpln" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.634206 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.634983 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.649569 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7kbhj"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.650619 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.651772 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.651941 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.652156 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.652287 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.652487 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.652804 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.652934 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.654869 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7kbhj" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.655503 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.655686 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.657057 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-console-config\") pod \"console-f9d7485db-j7xxl\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.657178 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3897d9bc-e576-4575-8451-10a0e3a73517-images\") pod \"machine-api-operator-5694c8668f-jdqq6\" (UID: \"3897d9bc-e576-4575-8451-10a0e3a73517\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdqq6" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.657294 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/12c25e78-a24e-4962-8976-3bc097fdaaf6-audit-dir\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.657388 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ded0e7e-1cf4-4f3e-9907-33a003e4e5b3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2sdbs\" (UID: \"3ded0e7e-1cf4-4f3e-9907-33a003e4e5b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2sdbs" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.657488 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7bdccc3-c26f-4d11-a892-caf246a8630f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fd2p7\" (UID: \"e7bdccc3-c26f-4d11-a892-caf246a8630f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fd2p7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.657583 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f11172f5-cbc2-4f41-bfb5-7cf480d8af7f-config\") pod \"etcd-operator-b45778765-n8mj7\" (UID: \"f11172f5-cbc2-4f41-bfb5-7cf480d8af7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.657680 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12c25e78-a24e-4962-8976-3bc097fdaaf6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.657779 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b0e4d64-f901-4a4e-9644-408eb534401e-console-serving-cert\") pod \"console-f9d7485db-j7xxl\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.657904 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.657999 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.658199 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9280ad8-85ad-4faa-a025-a021e417e522-audit-dir\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.658312 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6068d0a6-b0b7-44af-8dcd-995d728bf03a-config\") pod \"controller-manager-879f6c89f-n6wh5\" (UID: \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.658406 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.658507 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/12c25e78-a24e-4962-8976-3bc097fdaaf6-node-pullsecrets\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.658599 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31dd0dd8-9279-46ab-83bf-92282256204b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrxfz\" (UID: \"31dd0dd8-9279-46ab-83bf-92282256204b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrxfz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.658696 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.658798 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkbjv\" (UniqueName: \"kubernetes.io/projected/caa7b1cd-346c-4aba-9924-25dba85fcc5f-kube-api-access-tkbjv\") pod \"authentication-operator-69f744f599-4nd8h\" (UID: \"caa7b1cd-346c-4aba-9924-25dba85fcc5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4nd8h" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.658931 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/caa7b1cd-346c-4aba-9924-25dba85fcc5f-serving-cert\") pod \"authentication-operator-69f744f599-4nd8h\" (UID: \"caa7b1cd-346c-4aba-9924-25dba85fcc5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4nd8h" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.659029 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq6db\" (UniqueName: \"kubernetes.io/projected/3ded0e7e-1cf4-4f3e-9907-33a003e4e5b3-kube-api-access-kq6db\") pod \"openshift-apiserver-operator-796bbdcf4f-2sdbs\" (UID: \"3ded0e7e-1cf4-4f3e-9907-33a003e4e5b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2sdbs" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.659132 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-etcd-client\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.659224 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caa7b1cd-346c-4aba-9924-25dba85fcc5f-service-ca-bundle\") pod \"authentication-operator-69f744f599-4nd8h\" (UID: \"caa7b1cd-346c-4aba-9924-25dba85fcc5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4nd8h" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.659321 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2vvm\" (UniqueName: \"kubernetes.io/projected/f11172f5-cbc2-4f41-bfb5-7cf480d8af7f-kube-api-access-q2vvm\") pod \"etcd-operator-b45778765-n8mj7\" (UID: \"f11172f5-cbc2-4f41-bfb5-7cf480d8af7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.659432 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwz9w\" (UniqueName: \"kubernetes.io/projected/a33da252-8a42-4fb1-8663-b4046881cae0-kube-api-access-hwz9w\") pod \"route-controller-manager-6576b87f9c-hrxkw\" (UID: \"a33da252-8a42-4fb1-8663-b4046881cae0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.659532 4972 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-audit-dir\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.659635 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a33da252-8a42-4fb1-8663-b4046881cae0-client-ca\") pod \"route-controller-manager-6576b87f9c-hrxkw\" (UID: \"a33da252-8a42-4fb1-8663-b4046881cae0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.659738 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.675080 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6068d0a6-b0b7-44af-8dcd-995d728bf03a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-n6wh5\" (UID: \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.675303 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drsg6\" (UniqueName: \"kubernetes.io/projected/6068d0a6-b0b7-44af-8dcd-995d728bf03a-kube-api-access-drsg6\") pod \"controller-manager-879f6c89f-n6wh5\" (UID: \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.675419 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5df7d34a-5265-4653-8525-68dc1e2109fd-serving-cert\") pod \"console-operator-58897d9998-b5tdm\" (UID: \"5df7d34a-5265-4653-8525-68dc1e2109fd\") " pod="openshift-console-operator/console-operator-58897d9998-b5tdm" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.675502 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d4f9796-6468-488d-ac2e-afcf480c57fc-config\") pod \"machine-approver-56656f9798-lswwj\" (UID: \"7d4f9796-6468-488d-ac2e-afcf480c57fc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswwj" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.675649 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-encryption-config\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.675736 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4f82j\" (UniqueName: \"kubernetes.io/projected/3897d9bc-e576-4575-8451-10a0e3a73517-kube-api-access-4f82j\") pod \"machine-api-operator-5694c8668f-jdqq6\" (UID: \"3897d9bc-e576-4575-8451-10a0e3a73517\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdqq6" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.675808 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvnrd\" (UniqueName: \"kubernetes.io/projected/31dd0dd8-9279-46ab-83bf-92282256204b-kube-api-access-lvnrd\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrxfz\" (UID: \"31dd0dd8-9279-46ab-83bf-92282256204b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrxfz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.675936 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-audit-policies\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.676017 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.676108 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a33da252-8a42-4fb1-8663-b4046881cae0-serving-cert\") pod \"route-controller-manager-6576b87f9c-hrxkw\" (UID: \"a33da252-8a42-4fb1-8663-b4046881cae0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.676184 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/12c25e78-a24e-4962-8976-3bc097fdaaf6-audit\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.676256 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/12c25e78-a24e-4962-8976-3bc097fdaaf6-etcd-serving-ca\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.676338 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3897d9bc-e576-4575-8451-10a0e3a73517-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jdqq6\" (UID: \"3897d9bc-e576-4575-8451-10a0e3a73517\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdqq6" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.676411 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/12c25e78-a24e-4962-8976-3bc097fdaaf6-serving-cert\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.676482 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qww95\" (UniqueName: \"kubernetes.io/projected/5df7d34a-5265-4653-8525-68dc1e2109fd-kube-api-access-qww95\") pod \"console-operator-58897d9998-b5tdm\" (UID: \"5df7d34a-5265-4653-8525-68dc1e2109fd\") " pod="openshift-console-operator/console-operator-58897d9998-b5tdm" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.676564 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-audit-policies\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.676642 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f11172f5-cbc2-4f41-bfb5-7cf480d8af7f-etcd-client\") pod \"etcd-operator-b45778765-n8mj7\" (UID: \"f11172f5-cbc2-4f41-bfb5-7cf480d8af7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.676713 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.676788 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caa7b1cd-346c-4aba-9924-25dba85fcc5f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4nd8h\" (UID: \"caa7b1cd-346c-4aba-9924-25dba85fcc5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4nd8h" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.676878 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-oauth-serving-cert\") pod \"console-f9d7485db-j7xxl\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.676954 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f11172f5-cbc2-4f41-bfb5-7cf480d8af7f-serving-cert\") pod \"etcd-operator-b45778765-n8mj7\" (UID: \"f11172f5-cbc2-4f41-bfb5-7cf480d8af7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.677031 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42v9w\" (UniqueName: \"kubernetes.io/projected/12c25e78-a24e-4962-8976-3bc097fdaaf6-kube-api-access-42v9w\") pod 
\"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.677099 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ded0e7e-1cf4-4f3e-9907-33a003e4e5b3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2sdbs\" (UID: \"3ded0e7e-1cf4-4f3e-9907-33a003e4e5b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2sdbs" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.677170 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7d4f9796-6468-488d-ac2e-afcf480c57fc-auth-proxy-config\") pod \"machine-approver-56656f9798-lswwj\" (UID: \"7d4f9796-6468-488d-ac2e-afcf480c57fc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswwj" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.677250 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.677340 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6068d0a6-b0b7-44af-8dcd-995d728bf03a-client-ca\") pod \"controller-manager-879f6c89f-n6wh5\" (UID: \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.677415 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs6p9\" (UniqueName: \"kubernetes.io/projected/e7bdccc3-c26f-4d11-a892-caf246a8630f-kube-api-access-rs6p9\") pod \"cluster-samples-operator-665b6dd947-fd2p7\" (UID: \"e7bdccc3-c26f-4d11-a892-caf246a8630f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fd2p7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.677500 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a33da252-8a42-4fb1-8663-b4046881cae0-config\") pod \"route-controller-manager-6576b87f9c-hrxkw\" (UID: \"a33da252-8a42-4fb1-8663-b4046881cae0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.677571 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phkpx\" (UniqueName: \"kubernetes.io/projected/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-kube-api-access-phkpx\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.677636 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.677713 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3897d9bc-e576-4575-8451-10a0e3a73517-config\") pod \"machine-api-operator-5694c8668f-jdqq6\" (UID: \"3897d9bc-e576-4575-8451-10a0e3a73517\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdqq6" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.677789 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5df7d34a-5265-4653-8525-68dc1e2109fd-config\") pod \"console-operator-58897d9998-b5tdm\" (UID: \"5df7d34a-5265-4653-8525-68dc1e2109fd\") " pod="openshift-console-operator/console-operator-58897d9998-b5tdm" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.677880 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f11172f5-cbc2-4f41-bfb5-7cf480d8af7f-etcd-ca\") pod \"etcd-operator-b45778765-n8mj7\" (UID: \"f11172f5-cbc2-4f41-bfb5-7cf480d8af7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.677974 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f11172f5-cbc2-4f41-bfb5-7cf480d8af7f-etcd-service-ca\") pod \"etcd-operator-b45778765-n8mj7\" (UID: \"f11172f5-cbc2-4f41-bfb5-7cf480d8af7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.678054 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c25e78-a24e-4962-8976-3bc097fdaaf6-config\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.678128 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-service-ca\") pod \"console-f9d7485db-j7xxl\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.678201 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.661638 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/12c25e78-a24e-4962-8976-3bc097fdaaf6-node-pullsecrets\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc 
kubenswrapper[4972]: I1121 09:43:28.678284 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5df7d34a-5265-4653-8525-68dc1e2109fd-trusted-ca\") pod \"console-operator-58897d9998-b5tdm\" (UID: \"5df7d34a-5265-4653-8525-68dc1e2109fd\") " pod="openshift-console-operator/console-operator-58897d9998-b5tdm" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.678369 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.678442 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/12c25e78-a24e-4962-8976-3bc097fdaaf6-image-import-ca\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.678469 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-trusted-ca-bundle\") pod \"console-f9d7485db-j7xxl\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.678494 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bph4s\" (UniqueName: \"kubernetes.io/projected/7b0e4d64-f901-4a4e-9644-408eb534401e-kube-api-access-bph4s\") pod \"console-f9d7485db-j7xxl\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.678520 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/12c25e78-a24e-4962-8976-3bc097fdaaf6-etcd-client\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.678544 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/12c25e78-a24e-4962-8976-3bc097fdaaf6-encryption-config\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.678575 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31dd0dd8-9279-46ab-83bf-92282256204b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrxfz\" (UID: \"31dd0dd8-9279-46ab-83bf-92282256204b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrxfz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.678597 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t67h8\" (UniqueName: 
\"kubernetes.io/projected/7d4f9796-6468-488d-ac2e-afcf480c57fc-kube-api-access-t67h8\") pod \"machine-approver-56656f9798-lswwj\" (UID: \"7d4f9796-6468-488d-ac2e-afcf480c57fc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswwj" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.678617 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lknnb\" (UniqueName: \"kubernetes.io/projected/67ed0332-55cb-41e1-8a15-4e497706e00d-kube-api-access-lknnb\") pod \"downloads-7954f5f757-swwr5\" (UID: \"67ed0332-55cb-41e1-8a15-4e497706e00d\") " pod="openshift-console/downloads-7954f5f757-swwr5" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.678653 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.678694 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6068d0a6-b0b7-44af-8dcd-995d728bf03a-serving-cert\") pod \"controller-manager-879f6c89f-n6wh5\" (UID: \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.678717 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b0e4d64-f901-4a4e-9644-408eb534401e-console-oauth-config\") pod \"console-f9d7485db-j7xxl\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.678742 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.678770 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caa7b1cd-346c-4aba-9924-25dba85fcc5f-config\") pod \"authentication-operator-69f744f599-4nd8h\" (UID: \"caa7b1cd-346c-4aba-9924-25dba85fcc5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4nd8h" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.678799 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7d4f9796-6468-488d-ac2e-afcf480c57fc-machine-approver-tls\") pod \"machine-approver-56656f9798-lswwj\" (UID: \"7d4f9796-6468-488d-ac2e-afcf480c57fc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswwj" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.661750 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/12c25e78-a24e-4962-8976-3bc097fdaaf6-audit-dir\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.662652 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3897d9bc-e576-4575-8451-10a0e3a73517-images\") pod \"machine-api-operator-5694c8668f-jdqq6\" (UID: \"3897d9bc-e576-4575-8451-10a0e3a73517\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdqq6" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.664050 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gb5dr"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.665688 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.681192 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-dm986"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.666002 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.681476 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/12c25e78-a24e-4962-8976-3bc097fdaaf6-image-import-ca\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.681515 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qh9tk"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.681672 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gb5dr" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.681873 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mjkx2"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.682310 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4xjjp"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.667865 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.678824 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-serving-cert\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.682626 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl4g2\" (UniqueName: \"kubernetes.io/projected/e9280ad8-85ad-4faa-a025-a021e417e522-kube-api-access-kl4g2\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.668100 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.669335 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.670236 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.674266 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.671065 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12c25e78-a24e-4962-8976-3bc097fdaaf6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.683171 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4xjjp" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.683088 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.683485 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dm986" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.674311 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.708135 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.709150 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mjkx2" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.709697 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3897d9bc-e576-4575-8451-10a0e3a73517-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jdqq6\" (UID: \"3897d9bc-e576-4575-8451-10a0e3a73517\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdqq6" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.710894 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fwcgm"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.713157 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.676521 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.712221 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3897d9bc-e576-4575-8451-10a0e3a73517-config\") pod \"machine-api-operator-5694c8668f-jdqq6\" (UID: \"3897d9bc-e576-4575-8451-10a0e3a73517\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdqq6" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.712284 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.712872 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.713731 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c25e78-a24e-4962-8976-3bc097fdaaf6-config\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.714108 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/12c25e78-a24e-4962-8976-3bc097fdaaf6-audit\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.714482 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/12c25e78-a24e-4962-8976-3bc097fdaaf6-etcd-serving-ca\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.714646 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.711696 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/12c25e78-a24e-4962-8976-3bc097fdaaf6-serving-cert\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.718166 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.718281 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.719820 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-w2c2r"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.720177 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/12c25e78-a24e-4962-8976-3bc097fdaaf6-etcd-client\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.721915 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-j7xxl"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.724751 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.727099 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sgptm"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.729063 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/12c25e78-a24e-4962-8976-3bc097fdaaf6-encryption-config\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.729077 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fd2p7"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.729406 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.734705 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mjkx2"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.735730 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-b5tdm"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.736706 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2sdbs"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.738053 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zdj2c"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.739282 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-swwr5"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.740474 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrxfz"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.741858 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sz4l8"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.744659 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gmffc"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.745561 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-77wn4"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.747700 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.748465 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vfq5m"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.749631 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgd8m"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.751161 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kk9j4"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.752559 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pm48b"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.753304 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9t6sj"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.754977 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-n8mj7"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.755773 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-n6wh5"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.756958 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-drf8x"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.757692 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-drf8x" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.758379 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hlzb"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.759355 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.761246 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.764706 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffrcv"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.765179 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7kbhj"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.766953 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5s9h7"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.768172 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gnpln"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.768379 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.769563 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4xjjp"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.770568 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-drf8x"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.772303 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gb5dr"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.773629 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qh9tk"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.775072 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-dm986"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.776374 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fwcgm"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.777532 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.779124 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-bg74p"] Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.779857 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bg74p" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.783478 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-service-ca\") pod \"console-f9d7485db-j7xxl\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.783695 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.783734 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/289b461d-7f4c-4c5d-99a6-ff44db300d7a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gnpln\" (UID: \"289b461d-7f4c-4c5d-99a6-ff44db300d7a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gnpln" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.783759 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5df7d34a-5265-4653-8525-68dc1e2109fd-trusted-ca\") pod \"console-operator-58897d9998-b5tdm\" (UID: \"5df7d34a-5265-4653-8525-68dc1e2109fd\") " pod="openshift-console-operator/console-operator-58897d9998-b5tdm" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.783783 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.783872 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-trusted-ca-bundle\") pod \"console-f9d7485db-j7xxl\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.783920 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bph4s\" (UniqueName: \"kubernetes.io/projected/7b0e4d64-f901-4a4e-9644-408eb534401e-kube-api-access-bph4s\") pod \"console-f9d7485db-j7xxl\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.783947 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.783975 4972 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31dd0dd8-9279-46ab-83bf-92282256204b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrxfz\" (UID: \"31dd0dd8-9279-46ab-83bf-92282256204b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrxfz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.783997 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t67h8\" (UniqueName: \"kubernetes.io/projected/7d4f9796-6468-488d-ac2e-afcf480c57fc-kube-api-access-t67h8\") pod \"machine-approver-56656f9798-lswwj\" (UID: \"7d4f9796-6468-488d-ac2e-afcf480c57fc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswwj" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784019 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lknnb\" (UniqueName: \"kubernetes.io/projected/67ed0332-55cb-41e1-8a15-4e497706e00d-kube-api-access-lknnb\") pod \"downloads-7954f5f757-swwr5\" (UID: \"67ed0332-55cb-41e1-8a15-4e497706e00d\") " pod="openshift-console/downloads-7954f5f757-swwr5" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784045 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvjhj\" (UniqueName: \"kubernetes.io/projected/f7d0d69c-8687-4e4f-9069-7db996719dab-kube-api-access-xvjhj\") pod \"dns-operator-744455d44c-zdj2c\" (UID: \"f7d0d69c-8687-4e4f-9069-7db996719dab\") " pod="openshift-dns-operator/dns-operator-744455d44c-zdj2c" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784070 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6068d0a6-b0b7-44af-8dcd-995d728bf03a-serving-cert\") pod \"controller-manager-879f6c89f-n6wh5\" (UID: \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784122 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b0e4d64-f901-4a4e-9644-408eb534401e-console-oauth-config\") pod \"console-f9d7485db-j7xxl\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784148 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784170 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/58f8e920-8907-4555-80dc-c00b2af7c80a-srv-cert\") pod \"olm-operator-6b444d44fb-sqxsm\" (UID: \"58f8e920-8907-4555-80dc-c00b2af7c80a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784197 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl4g2\" (UniqueName: 
\"kubernetes.io/projected/e9280ad8-85ad-4faa-a025-a021e417e522-kube-api-access-kl4g2\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784224 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caa7b1cd-346c-4aba-9924-25dba85fcc5f-config\") pod \"authentication-operator-69f744f599-4nd8h\" (UID: \"caa7b1cd-346c-4aba-9924-25dba85fcc5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4nd8h" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784250 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7d4f9796-6468-488d-ac2e-afcf480c57fc-machine-approver-tls\") pod \"machine-approver-56656f9798-lswwj\" (UID: \"7d4f9796-6468-488d-ac2e-afcf480c57fc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswwj" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784271 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-serving-cert\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784296 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4478a49-a8d5-4922-b9b6-c749c64697a6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vfq5m\" (UID: \"e4478a49-a8d5-4922-b9b6-c749c64697a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vfq5m" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784320 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-console-config\") pod \"console-f9d7485db-j7xxl\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784344 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7558514b-88da-4a2a-818b-cd0cee240faa-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qgd8m\" (UID: \"7558514b-88da-4a2a-818b-cd0cee240faa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgd8m" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784369 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ded0e7e-1cf4-4f3e-9907-33a003e4e5b3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2sdbs\" (UID: \"3ded0e7e-1cf4-4f3e-9907-33a003e4e5b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2sdbs" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784396 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7bdccc3-c26f-4d11-a892-caf246a8630f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fd2p7\" (UID: 
\"e7bdccc3-c26f-4d11-a892-caf246a8630f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fd2p7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784419 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f11172f5-cbc2-4f41-bfb5-7cf480d8af7f-config\") pod \"etcd-operator-b45778765-n8mj7\" (UID: \"f11172f5-cbc2-4f41-bfb5-7cf480d8af7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784507 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b0e4d64-f901-4a4e-9644-408eb534401e-console-serving-cert\") pod \"console-f9d7485db-j7xxl\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784553 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784577 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784601 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9280ad8-85ad-4faa-a025-a021e417e522-audit-dir\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784627 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7558514b-88da-4a2a-818b-cd0cee240faa-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qgd8m\" (UID: \"7558514b-88da-4a2a-818b-cd0cee240faa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgd8m" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784634 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-service-ca\") pod \"console-f9d7485db-j7xxl\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784658 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6068d0a6-b0b7-44af-8dcd-995d728bf03a-config\") pod \"controller-manager-879f6c89f-n6wh5\" (UID: \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784758 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785071 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-trusted-ca-bundle\") pod \"console-f9d7485db-j7xxl\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785098 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7939233b-508e-485b-91ea-8b266ba6f829-metrics-tls\") pod \"dns-default-77wn4\" (UID: \"7939233b-508e-485b-91ea-8b266ba6f829\") " pod="openshift-dns/dns-default-77wn4" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785135 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksw9w\" (UniqueName: \"kubernetes.io/projected/289b461d-7f4c-4c5d-99a6-ff44db300d7a-kube-api-access-ksw9w\") pod \"kube-storage-version-migrator-operator-b67b599dd-gnpln\" (UID: \"289b461d-7f4c-4c5d-99a6-ff44db300d7a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gnpln" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785168 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31dd0dd8-9279-46ab-83bf-92282256204b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrxfz\" (UID: \"31dd0dd8-9279-46ab-83bf-92282256204b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrxfz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785192 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785272 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkbjv\" (UniqueName: \"kubernetes.io/projected/caa7b1cd-346c-4aba-9924-25dba85fcc5f-kube-api-access-tkbjv\") pod \"authentication-operator-69f744f599-4nd8h\" (UID: \"caa7b1cd-346c-4aba-9924-25dba85fcc5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4nd8h" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785374 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/caa7b1cd-346c-4aba-9924-25dba85fcc5f-serving-cert\") pod \"authentication-operator-69f744f599-4nd8h\" (UID: \"caa7b1cd-346c-4aba-9924-25dba85fcc5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4nd8h" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785401 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq6db\" (UniqueName: 
\"kubernetes.io/projected/3ded0e7e-1cf4-4f3e-9907-33a003e4e5b3-kube-api-access-kq6db\") pod \"openshift-apiserver-operator-796bbdcf4f-2sdbs\" (UID: \"3ded0e7e-1cf4-4f3e-9907-33a003e4e5b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2sdbs" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785415 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31dd0dd8-9279-46ab-83bf-92282256204b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrxfz\" (UID: \"31dd0dd8-9279-46ab-83bf-92282256204b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrxfz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785427 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-etcd-client\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785454 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/127e1d7b-1e8e-492f-905c-3c0027bd1a45-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7kbhj\" (UID: \"127e1d7b-1e8e-492f-905c-3c0027bd1a45\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7kbhj" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785482 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtxch\" (UniqueName: \"kubernetes.io/projected/7939233b-508e-485b-91ea-8b266ba6f829-kube-api-access-gtxch\") pod \"dns-default-77wn4\" (UID: \"7939233b-508e-485b-91ea-8b266ba6f829\") " pod="openshift-dns/dns-default-77wn4" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785508 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2vvm\" (UniqueName: \"kubernetes.io/projected/f11172f5-cbc2-4f41-bfb5-7cf480d8af7f-kube-api-access-q2vvm\") pod \"etcd-operator-b45778765-n8mj7\" (UID: \"f11172f5-cbc2-4f41-bfb5-7cf480d8af7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785573 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caa7b1cd-346c-4aba-9924-25dba85fcc5f-service-ca-bundle\") pod \"authentication-operator-69f744f599-4nd8h\" (UID: \"caa7b1cd-346c-4aba-9924-25dba85fcc5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4nd8h" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785574 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5df7d34a-5265-4653-8525-68dc1e2109fd-trusted-ca\") pod \"console-operator-58897d9998-b5tdm\" (UID: \"5df7d34a-5265-4653-8525-68dc1e2109fd\") " pod="openshift-console-operator/console-operator-58897d9998-b5tdm" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785598 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7d0d69c-8687-4e4f-9069-7db996719dab-metrics-tls\") pod \"dns-operator-744455d44c-zdj2c\" 
(UID: \"f7d0d69c-8687-4e4f-9069-7db996719dab\") " pod="openshift-dns-operator/dns-operator-744455d44c-zdj2c" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785639 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqg24\" (UniqueName: \"kubernetes.io/projected/58f8e920-8907-4555-80dc-c00b2af7c80a-kube-api-access-qqg24\") pod \"olm-operator-6b444d44fb-sqxsm\" (UID: \"58f8e920-8907-4555-80dc-c00b2af7c80a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785663 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/58f8e920-8907-4555-80dc-c00b2af7c80a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sqxsm\" (UID: \"58f8e920-8907-4555-80dc-c00b2af7c80a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785705 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwz9w\" (UniqueName: \"kubernetes.io/projected/a33da252-8a42-4fb1-8663-b4046881cae0-kube-api-access-hwz9w\") pod \"route-controller-manager-6576b87f9c-hrxkw\" (UID: \"a33da252-8a42-4fb1-8663-b4046881cae0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785734 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-audit-dir\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785779 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/127e1d7b-1e8e-492f-905c-3c0027bd1a45-config\") pod \"kube-controller-manager-operator-78b949d7b-7kbhj\" (UID: \"127e1d7b-1e8e-492f-905c-3c0027bd1a45\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7kbhj" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785806 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a33da252-8a42-4fb1-8663-b4046881cae0-client-ca\") pod \"route-controller-manager-6576b87f9c-hrxkw\" (UID: \"a33da252-8a42-4fb1-8663-b4046881cae0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785849 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785874 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/127e1d7b-1e8e-492f-905c-3c0027bd1a45-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7kbhj\" (UID: \"127e1d7b-1e8e-492f-905c-3c0027bd1a45\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7kbhj" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785904 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6068d0a6-b0b7-44af-8dcd-995d728bf03a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-n6wh5\" (UID: \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785930 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drsg6\" (UniqueName: \"kubernetes.io/projected/6068d0a6-b0b7-44af-8dcd-995d728bf03a-kube-api-access-drsg6\") pod \"controller-manager-879f6c89f-n6wh5\" (UID: \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785956 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5df7d34a-5265-4653-8525-68dc1e2109fd-serving-cert\") pod \"console-operator-58897d9998-b5tdm\" (UID: \"5df7d34a-5265-4653-8525-68dc1e2109fd\") " pod="openshift-console-operator/console-operator-58897d9998-b5tdm" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.785980 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d4f9796-6468-488d-ac2e-afcf480c57fc-config\") pod \"machine-approver-56656f9798-lswwj\" (UID: \"7d4f9796-6468-488d-ac2e-afcf480c57fc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswwj" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786002 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-encryption-config\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786031 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvnrd\" (UniqueName: \"kubernetes.io/projected/31dd0dd8-9279-46ab-83bf-92282256204b-kube-api-access-lvnrd\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrxfz\" (UID: \"31dd0dd8-9279-46ab-83bf-92282256204b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrxfz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786055 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-audit-policies\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786060 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 
09:43:28.786063 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ded0e7e-1cf4-4f3e-9907-33a003e4e5b3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2sdbs\" (UID: \"3ded0e7e-1cf4-4f3e-9907-33a003e4e5b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2sdbs" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786080 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7939233b-508e-485b-91ea-8b266ba6f829-config-volume\") pod \"dns-default-77wn4\" (UID: \"7939233b-508e-485b-91ea-8b266ba6f829\") " pod="openshift-dns/dns-default-77wn4" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786108 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786115 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9280ad8-85ad-4faa-a025-a021e417e522-audit-dir\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786134 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a33da252-8a42-4fb1-8663-b4046881cae0-serving-cert\") pod \"route-controller-manager-6576b87f9c-hrxkw\" (UID: \"a33da252-8a42-4fb1-8663-b4046881cae0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786159 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4478a49-a8d5-4922-b9b6-c749c64697a6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vfq5m\" (UID: \"e4478a49-a8d5-4922-b9b6-c749c64697a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vfq5m" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786183 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4478a49-a8d5-4922-b9b6-c749c64697a6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vfq5m\" (UID: \"e4478a49-a8d5-4922-b9b6-c749c64697a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vfq5m" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786209 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sdwb\" (UniqueName: \"kubernetes.io/projected/16627f6c-2bea-4b24-9133-a8a009620d53-kube-api-access-2sdwb\") pod \"migrator-59844c95c7-gmffc\" (UID: \"16627f6c-2bea-4b24-9133-a8a009620d53\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gmffc" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786233 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7558514b-88da-4a2a-818b-cd0cee240faa-config\") pod \"kube-apiserver-operator-766d6c64bb-qgd8m\" (UID: \"7558514b-88da-4a2a-818b-cd0cee240faa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgd8m" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786260 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qww95\" (UniqueName: \"kubernetes.io/projected/5df7d34a-5265-4653-8525-68dc1e2109fd-kube-api-access-qww95\") pod \"console-operator-58897d9998-b5tdm\" (UID: \"5df7d34a-5265-4653-8525-68dc1e2109fd\") " pod="openshift-console-operator/console-operator-58897d9998-b5tdm" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786284 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-audit-policies\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786308 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f11172f5-cbc2-4f41-bfb5-7cf480d8af7f-etcd-client\") pod \"etcd-operator-b45778765-n8mj7\" (UID: \"f11172f5-cbc2-4f41-bfb5-7cf480d8af7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786599 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f11172f5-cbc2-4f41-bfb5-7cf480d8af7f-config\") pod \"etcd-operator-b45778765-n8mj7\" (UID: \"f11172f5-cbc2-4f41-bfb5-7cf480d8af7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786617 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786645 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f11172f5-cbc2-4f41-bfb5-7cf480d8af7f-serving-cert\") pod \"etcd-operator-b45778765-n8mj7\" (UID: \"f11172f5-cbc2-4f41-bfb5-7cf480d8af7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786670 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caa7b1cd-346c-4aba-9924-25dba85fcc5f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4nd8h\" (UID: \"caa7b1cd-346c-4aba-9924-25dba85fcc5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4nd8h" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786694 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-oauth-serving-cert\") pod \"console-f9d7485db-j7xxl\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " 
pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786724 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ded0e7e-1cf4-4f3e-9907-33a003e4e5b3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2sdbs\" (UID: \"3ded0e7e-1cf4-4f3e-9907-33a003e4e5b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2sdbs" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786747 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7d4f9796-6468-488d-ac2e-afcf480c57fc-auth-proxy-config\") pod \"machine-approver-56656f9798-lswwj\" (UID: \"7d4f9796-6468-488d-ac2e-afcf480c57fc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswwj" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786771 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786783 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-console-config\") pod \"console-f9d7485db-j7xxl\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786805 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6068d0a6-b0b7-44af-8dcd-995d728bf03a-client-ca\") pod \"controller-manager-879f6c89f-n6wh5\" (UID: \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786853 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs6p9\" (UniqueName: \"kubernetes.io/projected/e7bdccc3-c26f-4d11-a892-caf246a8630f-kube-api-access-rs6p9\") pod \"cluster-samples-operator-665b6dd947-fd2p7\" (UID: \"e7bdccc3-c26f-4d11-a892-caf246a8630f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fd2p7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786880 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786935 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a33da252-8a42-4fb1-8663-b4046881cae0-config\") pod \"route-controller-manager-6576b87f9c-hrxkw\" (UID: \"a33da252-8a42-4fb1-8663-b4046881cae0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786963 
4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phkpx\" (UniqueName: \"kubernetes.io/projected/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-kube-api-access-phkpx\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786986 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f11172f5-cbc2-4f41-bfb5-7cf480d8af7f-etcd-service-ca\") pod \"etcd-operator-b45778765-n8mj7\" (UID: \"f11172f5-cbc2-4f41-bfb5-7cf480d8af7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.787014 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5df7d34a-5265-4653-8525-68dc1e2109fd-config\") pod \"console-operator-58897d9998-b5tdm\" (UID: \"5df7d34a-5265-4653-8525-68dc1e2109fd\") " pod="openshift-console-operator/console-operator-58897d9998-b5tdm" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.787036 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f11172f5-cbc2-4f41-bfb5-7cf480d8af7f-etcd-ca\") pod \"etcd-operator-b45778765-n8mj7\" (UID: \"f11172f5-cbc2-4f41-bfb5-7cf480d8af7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.787061 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/289b461d-7f4c-4c5d-99a6-ff44db300d7a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gnpln\" (UID: \"289b461d-7f4c-4c5d-99a6-ff44db300d7a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gnpln" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.786062 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.787471 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6068d0a6-b0b7-44af-8dcd-995d728bf03a-config\") pod \"controller-manager-879f6c89f-n6wh5\" (UID: \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.784576 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.788342 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31dd0dd8-9279-46ab-83bf-92282256204b-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-hrxfz\" (UID: \"31dd0dd8-9279-46ab-83bf-92282256204b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrxfz" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.788778 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6068d0a6-b0b7-44af-8dcd-995d728bf03a-client-ca\") pod \"controller-manager-879f6c89f-n6wh5\" (UID: \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.789057 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caa7b1cd-346c-4aba-9924-25dba85fcc5f-config\") pod \"authentication-operator-69f744f599-4nd8h\" (UID: \"caa7b1cd-346c-4aba-9924-25dba85fcc5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4nd8h" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.789186 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-audit-dir\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.789516 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caa7b1cd-346c-4aba-9924-25dba85fcc5f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4nd8h\" (UID: \"caa7b1cd-346c-4aba-9924-25dba85fcc5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4nd8h" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.790066 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6068d0a6-b0b7-44af-8dcd-995d728bf03a-serving-cert\") pod \"controller-manager-879f6c89f-n6wh5\" (UID: \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.790205 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.790636 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a33da252-8a42-4fb1-8663-b4046881cae0-client-ca\") pod \"route-controller-manager-6576b87f9c-hrxkw\" (UID: \"a33da252-8a42-4fb1-8663-b4046881cae0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.791015 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-oauth-serving-cert\") pod \"console-f9d7485db-j7xxl\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.791028 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-etcd-client\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.791079 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6068d0a6-b0b7-44af-8dcd-995d728bf03a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-n6wh5\" (UID: \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.791371 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.791687 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7bdccc3-c26f-4d11-a892-caf246a8630f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fd2p7\" (UID: \"e7bdccc3-c26f-4d11-a892-caf246a8630f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fd2p7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.791766 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d4f9796-6468-488d-ac2e-afcf480c57fc-config\") pod \"machine-approver-56656f9798-lswwj\" (UID: \"7d4f9796-6468-488d-ac2e-afcf480c57fc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswwj" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.792281 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7d4f9796-6468-488d-ac2e-afcf480c57fc-auth-proxy-config\") pod \"machine-approver-56656f9798-lswwj\" (UID: \"7d4f9796-6468-488d-ac2e-afcf480c57fc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswwj" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.792358 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-audit-policies\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.792432 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a33da252-8a42-4fb1-8663-b4046881cae0-config\") pod \"route-controller-manager-6576b87f9c-hrxkw\" (UID: \"a33da252-8a42-4fb1-8663-b4046881cae0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.792446 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f11172f5-cbc2-4f41-bfb5-7cf480d8af7f-etcd-ca\") pod \"etcd-operator-b45778765-n8mj7\" (UID: \"f11172f5-cbc2-4f41-bfb5-7cf480d8af7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.792602 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f11172f5-cbc2-4f41-bfb5-7cf480d8af7f-serving-cert\") pod \"etcd-operator-b45778765-n8mj7\" (UID: \"f11172f5-cbc2-4f41-bfb5-7cf480d8af7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.792807 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5df7d34a-5265-4653-8525-68dc1e2109fd-config\") pod \"console-operator-58897d9998-b5tdm\" (UID: \"5df7d34a-5265-4653-8525-68dc1e2109fd\") " pod="openshift-console-operator/console-operator-58897d9998-b5tdm" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.792904 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f11172f5-cbc2-4f41-bfb5-7cf480d8af7f-etcd-service-ca\") pod \"etcd-operator-b45778765-n8mj7\" (UID: \"f11172f5-cbc2-4f41-bfb5-7cf480d8af7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.793073 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-audit-policies\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.793147 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-serving-cert\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.793524 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.793771 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.793932 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caa7b1cd-346c-4aba-9924-25dba85fcc5f-service-ca-bundle\") pod \"authentication-operator-69f744f599-4nd8h\" (UID: \"caa7b1cd-346c-4aba-9924-25dba85fcc5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4nd8h" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.794256 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/caa7b1cd-346c-4aba-9924-25dba85fcc5f-serving-cert\") pod \"authentication-operator-69f744f599-4nd8h\" (UID: \"caa7b1cd-346c-4aba-9924-25dba85fcc5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4nd8h" Nov 21 09:43:28 crc 
kubenswrapper[4972]: I1121 09:43:28.794265 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.794315 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f11172f5-cbc2-4f41-bfb5-7cf480d8af7f-etcd-client\") pod \"etcd-operator-b45778765-n8mj7\" (UID: \"f11172f5-cbc2-4f41-bfb5-7cf480d8af7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.794324 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b0e4d64-f901-4a4e-9644-408eb534401e-console-oauth-config\") pod \"console-f9d7485db-j7xxl\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.794379 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.794466 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ded0e7e-1cf4-4f3e-9907-33a003e4e5b3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2sdbs\" (UID: \"3ded0e7e-1cf4-4f3e-9907-33a003e4e5b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2sdbs" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.794479 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.794626 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7d4f9796-6468-488d-ac2e-afcf480c57fc-machine-approver-tls\") pod \"machine-approver-56656f9798-lswwj\" (UID: \"7d4f9796-6468-488d-ac2e-afcf480c57fc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswwj" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.794652 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.795141 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.795465 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.796000 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.796161 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a33da252-8a42-4fb1-8663-b4046881cae0-serving-cert\") pod \"route-controller-manager-6576b87f9c-hrxkw\" (UID: \"a33da252-8a42-4fb1-8663-b4046881cae0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.796320 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5df7d34a-5265-4653-8525-68dc1e2109fd-serving-cert\") pod \"console-operator-58897d9998-b5tdm\" (UID: \"5df7d34a-5265-4653-8525-68dc1e2109fd\") " pod="openshift-console-operator/console-operator-58897d9998-b5tdm" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.797309 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b0e4d64-f901-4a4e-9644-408eb534401e-console-serving-cert\") pod \"console-f9d7485db-j7xxl\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.797902 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-encryption-config\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.807853 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.828250 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.847811 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.867761 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.888025 4972 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.889666 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7939233b-508e-485b-91ea-8b266ba6f829-metrics-tls\") pod \"dns-default-77wn4\" (UID: \"7939233b-508e-485b-91ea-8b266ba6f829\") " pod="openshift-dns/dns-default-77wn4" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.889698 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksw9w\" (UniqueName: \"kubernetes.io/projected/289b461d-7f4c-4c5d-99a6-ff44db300d7a-kube-api-access-ksw9w\") pod \"kube-storage-version-migrator-operator-b67b599dd-gnpln\" (UID: \"289b461d-7f4c-4c5d-99a6-ff44db300d7a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gnpln" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.889724 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtxch\" (UniqueName: \"kubernetes.io/projected/7939233b-508e-485b-91ea-8b266ba6f829-kube-api-access-gtxch\") pod \"dns-default-77wn4\" (UID: \"7939233b-508e-485b-91ea-8b266ba6f829\") " pod="openshift-dns/dns-default-77wn4" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.889763 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/127e1d7b-1e8e-492f-905c-3c0027bd1a45-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7kbhj\" (UID: \"127e1d7b-1e8e-492f-905c-3c0027bd1a45\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7kbhj" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.889804 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7d0d69c-8687-4e4f-9069-7db996719dab-metrics-tls\") pod \"dns-operator-744455d44c-zdj2c\" (UID: \"f7d0d69c-8687-4e4f-9069-7db996719dab\") " pod="openshift-dns-operator/dns-operator-744455d44c-zdj2c" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.889854 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqg24\" (UniqueName: \"kubernetes.io/projected/58f8e920-8907-4555-80dc-c00b2af7c80a-kube-api-access-qqg24\") pod \"olm-operator-6b444d44fb-sqxsm\" (UID: \"58f8e920-8907-4555-80dc-c00b2af7c80a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.889876 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/127e1d7b-1e8e-492f-905c-3c0027bd1a45-config\") pod \"kube-controller-manager-operator-78b949d7b-7kbhj\" (UID: \"127e1d7b-1e8e-492f-905c-3c0027bd1a45\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7kbhj" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.889897 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/58f8e920-8907-4555-80dc-c00b2af7c80a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sqxsm\" (UID: \"58f8e920-8907-4555-80dc-c00b2af7c80a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 
09:43:28.889933 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/127e1d7b-1e8e-492f-905c-3c0027bd1a45-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7kbhj\" (UID: \"127e1d7b-1e8e-492f-905c-3c0027bd1a45\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7kbhj" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.889980 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7939233b-508e-485b-91ea-8b266ba6f829-config-volume\") pod \"dns-default-77wn4\" (UID: \"7939233b-508e-485b-91ea-8b266ba6f829\") " pod="openshift-dns/dns-default-77wn4" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.890013 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4478a49-a8d5-4922-b9b6-c749c64697a6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vfq5m\" (UID: \"e4478a49-a8d5-4922-b9b6-c749c64697a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vfq5m" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.890032 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4478a49-a8d5-4922-b9b6-c749c64697a6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vfq5m\" (UID: \"e4478a49-a8d5-4922-b9b6-c749c64697a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vfq5m" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.890053 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sdwb\" (UniqueName: \"kubernetes.io/projected/16627f6c-2bea-4b24-9133-a8a009620d53-kube-api-access-2sdwb\") pod \"migrator-59844c95c7-gmffc\" (UID: \"16627f6c-2bea-4b24-9133-a8a009620d53\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gmffc" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.890085 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7558514b-88da-4a2a-818b-cd0cee240faa-config\") pod \"kube-apiserver-operator-766d6c64bb-qgd8m\" (UID: \"7558514b-88da-4a2a-818b-cd0cee240faa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgd8m" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.890165 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/289b461d-7f4c-4c5d-99a6-ff44db300d7a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gnpln\" (UID: \"289b461d-7f4c-4c5d-99a6-ff44db300d7a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gnpln" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.890188 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/289b461d-7f4c-4c5d-99a6-ff44db300d7a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gnpln\" (UID: \"289b461d-7f4c-4c5d-99a6-ff44db300d7a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gnpln" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.890235 4972 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvjhj\" (UniqueName: \"kubernetes.io/projected/f7d0d69c-8687-4e4f-9069-7db996719dab-kube-api-access-xvjhj\") pod \"dns-operator-744455d44c-zdj2c\" (UID: \"f7d0d69c-8687-4e4f-9069-7db996719dab\") " pod="openshift-dns-operator/dns-operator-744455d44c-zdj2c" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.890260 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/58f8e920-8907-4555-80dc-c00b2af7c80a-srv-cert\") pod \"olm-operator-6b444d44fb-sqxsm\" (UID: \"58f8e920-8907-4555-80dc-c00b2af7c80a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.890290 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4478a49-a8d5-4922-b9b6-c749c64697a6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vfq5m\" (UID: \"e4478a49-a8d5-4922-b9b6-c749c64697a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vfq5m" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.890312 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7558514b-88da-4a2a-818b-cd0cee240faa-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qgd8m\" (UID: \"7558514b-88da-4a2a-818b-cd0cee240faa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgd8m" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.890542 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7558514b-88da-4a2a-818b-cd0cee240faa-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qgd8m\" (UID: \"7558514b-88da-4a2a-818b-cd0cee240faa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgd8m" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.890862 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7939233b-508e-485b-91ea-8b266ba6f829-config-volume\") pod \"dns-default-77wn4\" (UID: \"7939233b-508e-485b-91ea-8b266ba6f829\") " pod="openshift-dns/dns-default-77wn4" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.891205 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7558514b-88da-4a2a-818b-cd0cee240faa-config\") pod \"kube-apiserver-operator-766d6c64bb-qgd8m\" (UID: \"7558514b-88da-4a2a-818b-cd0cee240faa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgd8m" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.892442 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7939233b-508e-485b-91ea-8b266ba6f829-metrics-tls\") pod \"dns-default-77wn4\" (UID: \"7939233b-508e-485b-91ea-8b266ba6f829\") " pod="openshift-dns/dns-default-77wn4" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.893939 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7d0d69c-8687-4e4f-9069-7db996719dab-metrics-tls\") pod \"dns-operator-744455d44c-zdj2c\" (UID: \"f7d0d69c-8687-4e4f-9069-7db996719dab\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-zdj2c" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.908049 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.927804 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.933862 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7558514b-88da-4a2a-818b-cd0cee240faa-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qgd8m\" (UID: \"7558514b-88da-4a2a-818b-cd0cee240faa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgd8m" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.950691 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.967970 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.988349 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 21 09:43:28 crc kubenswrapper[4972]: I1121 09:43:28.995049 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4478a49-a8d5-4922-b9b6-c749c64697a6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vfq5m\" (UID: \"e4478a49-a8d5-4922-b9b6-c749c64697a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vfq5m" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.011358 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.028421 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.031668 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4478a49-a8d5-4922-b9b6-c749c64697a6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vfq5m\" (UID: \"e4478a49-a8d5-4922-b9b6-c749c64697a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vfq5m" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.048300 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.068956 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.087993 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.108685 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 21 09:43:29 
crc kubenswrapper[4972]: I1121 09:43:29.148067 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.168424 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.188375 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.208427 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.228948 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.249151 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.275530 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.287976 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.308957 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.328629 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.348886 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.369262 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.389355 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.409077 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.429190 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.435427 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/58f8e920-8907-4555-80dc-c00b2af7c80a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sqxsm\" (UID: \"58f8e920-8907-4555-80dc-c00b2af7c80a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.449701 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.469335 4972 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.488804 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.509596 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.529567 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.534985 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/58f8e920-8907-4555-80dc-c00b2af7c80a-srv-cert\") pod \"olm-operator-6b444d44fb-sqxsm\" (UID: \"58f8e920-8907-4555-80dc-c00b2af7c80a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.549487 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.569856 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.588182 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.609462 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.629290 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.646422 4972 request.go:700] Waited for 1.013866506s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-metrics-certs-default&limit=500&resourceVersion=0 Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.648439 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.668490 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.688302 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.707770 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.729046 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.748271 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 21 
09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.753902 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/289b461d-7f4c-4c5d-99a6-ff44db300d7a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gnpln\" (UID: \"289b461d-7f4c-4c5d-99a6-ff44db300d7a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gnpln" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.768566 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.772101 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/289b461d-7f4c-4c5d-99a6-ff44db300d7a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gnpln\" (UID: \"289b461d-7f4c-4c5d-99a6-ff44db300d7a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gnpln" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.788131 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.808628 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.829270 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.849235 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.868090 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.876814 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/127e1d7b-1e8e-492f-905c-3c0027bd1a45-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7kbhj\" (UID: \"127e1d7b-1e8e-492f-905c-3c0027bd1a45\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7kbhj" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.889297 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.891635 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/127e1d7b-1e8e-492f-905c-3c0027bd1a45-config\") pod \"kube-controller-manager-operator-78b949d7b-7kbhj\" (UID: \"127e1d7b-1e8e-492f-905c-3c0027bd1a45\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7kbhj" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.950122 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.955054 4972 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42v9w\" (UniqueName: \"kubernetes.io/projected/12c25e78-a24e-4962-8976-3bc097fdaaf6-kube-api-access-42v9w\") pod \"apiserver-76f77b778f-xxsnz\" (UID: \"12c25e78-a24e-4962-8976-3bc097fdaaf6\") " pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.989009 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 21 09:43:29 crc kubenswrapper[4972]: I1121 09:43:29.996114 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f82j\" (UniqueName: \"kubernetes.io/projected/3897d9bc-e576-4575-8451-10a0e3a73517-kube-api-access-4f82j\") pod \"machine-api-operator-5694c8668f-jdqq6\" (UID: \"3897d9bc-e576-4575-8451-10a0e3a73517\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdqq6" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.009123 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.028746 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.033504 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.049260 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.062174 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jdqq6" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.068517 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.090038 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.109247 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.129145 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.151498 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.169812 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.188210 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.209400 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.228652 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.250263 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.274505 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jdqq6"] Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.275670 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.277452 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xxsnz"] Nov 21 09:43:30 crc kubenswrapper[4972]: W1121 09:43:30.281591 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3897d9bc_e576_4575_8451_10a0e3a73517.slice/crio-71b881c1be070a295a6b043a8e7adf18b017ed8e6813fdefc828e02019cb9825 WatchSource:0}: Error finding container 71b881c1be070a295a6b043a8e7adf18b017ed8e6813fdefc828e02019cb9825: Status 404 returned error can't find the container with id 71b881c1be070a295a6b043a8e7adf18b017ed8e6813fdefc828e02019cb9825 Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.288884 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.308147 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.327501 4972 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.348995 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.368353 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.389147 4972 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.408730 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.428472 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.448318 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.468646 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.488099 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.510757 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.527986 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.549131 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.590651 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bph4s\" (UniqueName: \"kubernetes.io/projected/7b0e4d64-f901-4a4e-9644-408eb534401e-kube-api-access-bph4s\") pod \"console-f9d7485db-j7xxl\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.598134 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jdqq6" event={"ID":"3897d9bc-e576-4575-8451-10a0e3a73517","Type":"ContainerStarted","Data":"af464a57301b3c1bd271e564d9599c55b707be43e9a7f1e420420a9af00c8416"} Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.598232 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jdqq6" event={"ID":"3897d9bc-e576-4575-8451-10a0e3a73517","Type":"ContainerStarted","Data":"71b881c1be070a295a6b043a8e7adf18b017ed8e6813fdefc828e02019cb9825"} Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.599015 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" event={"ID":"12c25e78-a24e-4962-8976-3bc097fdaaf6","Type":"ContainerStarted","Data":"31dc0de029e5690107d7c7ff52bdc9401439b19811ec5cfa5ae7e8ab75105d26"} Nov 21 
09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.608045 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl4g2\" (UniqueName: \"kubernetes.io/projected/e9280ad8-85ad-4faa-a025-a021e417e522-kube-api-access-kl4g2\") pod \"oauth-openshift-558db77b4-w2c2r\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.610911 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.635885 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lknnb\" (UniqueName: \"kubernetes.io/projected/67ed0332-55cb-41e1-8a15-4e497706e00d-kube-api-access-lknnb\") pod \"downloads-7954f5f757-swwr5\" (UID: \"67ed0332-55cb-41e1-8a15-4e497706e00d\") " pod="openshift-console/downloads-7954f5f757-swwr5" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.645744 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t67h8\" (UniqueName: \"kubernetes.io/projected/7d4f9796-6468-488d-ac2e-afcf480c57fc-kube-api-access-t67h8\") pod \"machine-approver-56656f9798-lswwj\" (UID: \"7d4f9796-6468-488d-ac2e-afcf480c57fc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswwj" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.660589 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq6db\" (UniqueName: \"kubernetes.io/projected/3ded0e7e-1cf4-4f3e-9907-33a003e4e5b3-kube-api-access-kq6db\") pod \"openshift-apiserver-operator-796bbdcf4f-2sdbs\" (UID: \"3ded0e7e-1cf4-4f3e-9907-33a003e4e5b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2sdbs" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.666556 4972 request.go:700] Waited for 1.878528206s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.689404 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkbjv\" (UniqueName: \"kubernetes.io/projected/caa7b1cd-346c-4aba-9924-25dba85fcc5f-kube-api-access-tkbjv\") pod \"authentication-operator-69f744f599-4nd8h\" (UID: \"caa7b1cd-346c-4aba-9924-25dba85fcc5f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4nd8h" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.697591 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4nd8h" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.711044 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2vvm\" (UniqueName: \"kubernetes.io/projected/f11172f5-cbc2-4f41-bfb5-7cf480d8af7f-kube-api-access-q2vvm\") pod \"etcd-operator-b45778765-n8mj7\" (UID: \"f11172f5-cbc2-4f41-bfb5-7cf480d8af7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.723038 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwz9w\" (UniqueName: \"kubernetes.io/projected/a33da252-8a42-4fb1-8663-b4046881cae0-kube-api-access-hwz9w\") pod \"route-controller-manager-6576b87f9c-hrxkw\" (UID: \"a33da252-8a42-4fb1-8663-b4046881cae0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.751565 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs6p9\" (UniqueName: \"kubernetes.io/projected/e7bdccc3-c26f-4d11-a892-caf246a8630f-kube-api-access-rs6p9\") pod \"cluster-samples-operator-665b6dd947-fd2p7\" (UID: \"e7bdccc3-c26f-4d11-a892-caf246a8630f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fd2p7" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.764698 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvnrd\" (UniqueName: \"kubernetes.io/projected/31dd0dd8-9279-46ab-83bf-92282256204b-kube-api-access-lvnrd\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrxfz\" (UID: \"31dd0dd8-9279-46ab-83bf-92282256204b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrxfz" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.771447 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.788934 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phkpx\" (UniqueName: \"kubernetes.io/projected/0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561-kube-api-access-phkpx\") pod \"apiserver-7bbb656c7d-kj5r8\" (UID: \"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.811321 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswwj" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.811683 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-j7xxl"] Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.813455 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drsg6\" (UniqueName: \"kubernetes.io/projected/6068d0a6-b0b7-44af-8dcd-995d728bf03a-kube-api-access-drsg6\") pod \"controller-manager-879f6c89f-n6wh5\" (UID: \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.827198 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qww95\" (UniqueName: \"kubernetes.io/projected/5df7d34a-5265-4653-8525-68dc1e2109fd-kube-api-access-qww95\") pod \"console-operator-58897d9998-b5tdm\" (UID: \"5df7d34a-5265-4653-8525-68dc1e2109fd\") " pod="openshift-console-operator/console-operator-58897d9998-b5tdm" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.835803 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.845196 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fd2p7" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.852241 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2sdbs" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.856740 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksw9w\" (UniqueName: \"kubernetes.io/projected/289b461d-7f4c-4c5d-99a6-ff44db300d7a-kube-api-access-ksw9w\") pod \"kube-storage-version-migrator-operator-b67b599dd-gnpln\" (UID: \"289b461d-7f4c-4c5d-99a6-ff44db300d7a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gnpln" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.859899 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.867601 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtxch\" (UniqueName: \"kubernetes.io/projected/7939233b-508e-485b-91ea-8b266ba6f829-kube-api-access-gtxch\") pod \"dns-default-77wn4\" (UID: \"7939233b-508e-485b-91ea-8b266ba6f829\") " pod="openshift-dns/dns-default-77wn4" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.869746 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-b5tdm" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.877718 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.883565 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-swwr5" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.885073 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4nd8h"] Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.886721 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqg24\" (UniqueName: \"kubernetes.io/projected/58f8e920-8907-4555-80dc-c00b2af7c80a-kube-api-access-qqg24\") pod \"olm-operator-6b444d44fb-sqxsm\" (UID: \"58f8e920-8907-4555-80dc-c00b2af7c80a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.903770 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrxfz" Nov 21 09:43:30 crc kubenswrapper[4972]: W1121 09:43:30.910103 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcaa7b1cd_346c_4aba_9924_25dba85fcc5f.slice/crio-9321e10b3d252bc2d88efbe08f6ed56a18658dd0cdf8118a94bb309cd41b998c WatchSource:0}: Error finding container 9321e10b3d252bc2d88efbe08f6ed56a18658dd0cdf8118a94bb309cd41b998c: Status 404 returned error can't find the container with id 9321e10b3d252bc2d88efbe08f6ed56a18658dd0cdf8118a94bb309cd41b998c Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.911138 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sdwb\" (UniqueName: \"kubernetes.io/projected/16627f6c-2bea-4b24-9133-a8a009620d53-kube-api-access-2sdwb\") pod \"migrator-59844c95c7-gmffc\" (UID: \"16627f6c-2bea-4b24-9133-a8a009620d53\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gmffc" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.924744 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvjhj\" (UniqueName: \"kubernetes.io/projected/f7d0d69c-8687-4e4f-9069-7db996719dab-kube-api-access-xvjhj\") pod \"dns-operator-744455d44c-zdj2c\" (UID: \"f7d0d69c-8687-4e4f-9069-7db996719dab\") " pod="openshift-dns-operator/dns-operator-744455d44c-zdj2c" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.926316 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-zdj2c" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.940339 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-77wn4" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.950527 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4478a49-a8d5-4922-b9b6-c749c64697a6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vfq5m\" (UID: \"e4478a49-a8d5-4922-b9b6-c749c64697a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vfq5m" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.955107 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vfq5m" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.955512 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw"] Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.961886 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/127e1d7b-1e8e-492f-905c-3c0027bd1a45-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7kbhj\" (UID: \"127e1d7b-1e8e-492f-905c-3c0027bd1a45\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7kbhj" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.987105 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7558514b-88da-4a2a-818b-cd0cee240faa-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qgd8m\" (UID: \"7558514b-88da-4a2a-818b-cd0cee240faa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgd8m" Nov 21 09:43:30 crc kubenswrapper[4972]: I1121 09:43:30.991781 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gmffc" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.038801 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.039213 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gnpln" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.039959 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040000 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbbp8\" (UniqueName: \"kubernetes.io/projected/1fc3fe65-482e-43ed-9669-7849bfc0bfd2-kube-api-access-jbbp8\") pod \"control-plane-machine-set-operator-78cbb6b69f-6hlzb\" (UID: \"1fc3fe65-482e-43ed-9669-7849bfc0bfd2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hlzb" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040026 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac9e18c9-3efe-4b57-a2d8-09aba942b999-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kk9j4\" (UID: \"ac9e18c9-3efe-4b57-a2d8-09aba942b999\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kk9j4" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040053 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/76ca4784-e584-413d-b1cb-77f336e4f695-default-certificate\") pod 
\"router-default-5444994796-g9znh\" (UID: \"76ca4784-e584-413d-b1cb-77f336e4f695\") " pod="openshift-ingress/router-default-5444994796-g9znh" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040092 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6e05e924-7aac-419c-82a7-0d9b9592b39f-bound-sa-token\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040115 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ac83f439-8fd3-4813-b348-bdea75672000-apiservice-cert\") pod \"packageserver-d55dfcdfc-l4nps\" (UID: \"ac83f439-8fd3-4813-b348-bdea75672000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040152 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ac83f439-8fd3-4813-b348-bdea75672000-webhook-cert\") pod \"packageserver-d55dfcdfc-l4nps\" (UID: \"ac83f439-8fd3-4813-b348-bdea75672000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040192 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6e05e924-7aac-419c-82a7-0d9b9592b39f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040225 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzjbj\" (UniqueName: \"kubernetes.io/projected/ac83f439-8fd3-4813-b348-bdea75672000-kube-api-access-nzjbj\") pod \"packageserver-d55dfcdfc-l4nps\" (UID: \"ac83f439-8fd3-4813-b348-bdea75672000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040256 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd926b1a-e534-4f79-ab19-afca2be13183-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ffrcv\" (UID: \"cd926b1a-e534-4f79-ab19-afca2be13183\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffrcv" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040284 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac9e18c9-3efe-4b57-a2d8-09aba942b999-metrics-tls\") pod \"ingress-operator-5b745b69d9-kk9j4\" (UID: \"ac9e18c9-3efe-4b57-a2d8-09aba942b999\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kk9j4" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040309 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4bz8\" (UniqueName: \"kubernetes.io/projected/cd926b1a-e534-4f79-ab19-afca2be13183-kube-api-access-g4bz8\") pod 
\"cluster-image-registry-operator-dc59b4c8b-ffrcv\" (UID: \"cd926b1a-e534-4f79-ab19-afca2be13183\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffrcv" Nov 21 09:43:31 crc kubenswrapper[4972]: E1121 09:43:31.040332 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:31.54031314 +0000 UTC m=+156.649455688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040367 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-485b8\" (UniqueName: \"kubernetes.io/projected/6e05e924-7aac-419c-82a7-0d9b9592b39f-kube-api-access-485b8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040398 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9fa458e6-be33-42f2-94ea-16ef5b241fa8-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9t6sj\" (UID: \"9fa458e6-be33-42f2-94ea-16ef5b241fa8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9t6sj" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040508 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1fc3fe65-482e-43ed-9669-7849bfc0bfd2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6hlzb\" (UID: \"1fc3fe65-482e-43ed-9669-7849bfc0bfd2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hlzb" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040530 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llwqf\" (UniqueName: \"kubernetes.io/projected/76ca4784-e584-413d-b1cb-77f336e4f695-kube-api-access-llwqf\") pod \"router-default-5444994796-g9znh\" (UID: \"76ca4784-e584-413d-b1cb-77f336e4f695\") " pod="openshift-ingress/router-default-5444994796-g9znh" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040608 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76ca4784-e584-413d-b1cb-77f336e4f695-service-ca-bundle\") pod \"router-default-5444994796-g9znh\" (UID: \"76ca4784-e584-413d-b1cb-77f336e4f695\") " pod="openshift-ingress/router-default-5444994796-g9znh" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040631 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/cd926b1a-e534-4f79-ab19-afca2be13183-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ffrcv\" (UID: \"cd926b1a-e534-4f79-ab19-afca2be13183\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffrcv" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040654 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wh4s\" (UniqueName: \"kubernetes.io/projected/ac9e18c9-3efe-4b57-a2d8-09aba942b999-kube-api-access-9wh4s\") pod \"ingress-operator-5b745b69d9-kk9j4\" (UID: \"ac9e18c9-3efe-4b57-a2d8-09aba942b999\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kk9j4" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040687 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac9e18c9-3efe-4b57-a2d8-09aba942b999-trusted-ca\") pod \"ingress-operator-5b745b69d9-kk9j4\" (UID: \"ac9e18c9-3efe-4b57-a2d8-09aba942b999\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kk9j4" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040744 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6e05e924-7aac-419c-82a7-0d9b9592b39f-registry-tls\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040766 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6e05e924-7aac-419c-82a7-0d9b9592b39f-registry-certificates\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040800 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6e05e924-7aac-419c-82a7-0d9b9592b39f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040843 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76ca4784-e584-413d-b1cb-77f336e4f695-metrics-certs\") pod \"router-default-5444994796-g9znh\" (UID: \"76ca4784-e584-413d-b1cb-77f336e4f695\") " pod="openshift-ingress/router-default-5444994796-g9znh" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040866 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd926b1a-e534-4f79-ab19-afca2be13183-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ffrcv\" (UID: \"cd926b1a-e534-4f79-ab19-afca2be13183\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffrcv" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040888 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/76ca4784-e584-413d-b1cb-77f336e4f695-stats-auth\") pod \"router-default-5444994796-g9znh\" (UID: \"76ca4784-e584-413d-b1cb-77f336e4f695\") " pod="openshift-ingress/router-default-5444994796-g9znh" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040912 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e05e924-7aac-419c-82a7-0d9b9592b39f-trusted-ca\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040933 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hklb\" (UniqueName: \"kubernetes.io/projected/9fa458e6-be33-42f2-94ea-16ef5b241fa8-kube-api-access-4hklb\") pod \"multus-admission-controller-857f4d67dd-9t6sj\" (UID: \"9fa458e6-be33-42f2-94ea-16ef5b241fa8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9t6sj" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.040952 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ac83f439-8fd3-4813-b348-bdea75672000-tmpfs\") pod \"packageserver-d55dfcdfc-l4nps\" (UID: \"ac83f439-8fd3-4813-b348-bdea75672000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps" Nov 21 09:43:31 crc kubenswrapper[4972]: W1121 09:43:31.045789 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda33da252_8a42_4fb1_8663_b4046881cae0.slice/crio-d1f8aea372e176dd30d28ef28de414ddf1e3a26c82b72b42bcd5f2ff94a6b008 WatchSource:0}: Error finding container d1f8aea372e176dd30d28ef28de414ddf1e3a26c82b72b42bcd5f2ff94a6b008: Status 404 returned error can't find the container with id d1f8aea372e176dd30d28ef28de414ddf1e3a26c82b72b42bcd5f2ff94a6b008 Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.055240 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.055746 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7kbhj" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.142068 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.142919 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbbp8\" (UniqueName: \"kubernetes.io/projected/1fc3fe65-482e-43ed-9669-7849bfc0bfd2-kube-api-access-jbbp8\") pod \"control-plane-machine-set-operator-78cbb6b69f-6hlzb\" (UID: \"1fc3fe65-482e-43ed-9669-7849bfc0bfd2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hlzb" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.142968 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac9e18c9-3efe-4b57-a2d8-09aba942b999-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kk9j4\" (UID: \"ac9e18c9-3efe-4b57-a2d8-09aba942b999\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kk9j4" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.143011 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/76ca4784-e584-413d-b1cb-77f336e4f695-default-certificate\") pod \"router-default-5444994796-g9znh\" (UID: \"76ca4784-e584-413d-b1cb-77f336e4f695\") " pod="openshift-ingress/router-default-5444994796-g9znh" Nov 21 09:43:31 crc kubenswrapper[4972]: E1121 09:43:31.143088 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:31.643052847 +0000 UTC m=+156.752195345 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.143181 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6e05e924-7aac-419c-82a7-0d9b9592b39f-bound-sa-token\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.143211 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ac83f439-8fd3-4813-b348-bdea75672000-apiservice-cert\") pod \"packageserver-d55dfcdfc-l4nps\" (UID: \"ac83f439-8fd3-4813-b348-bdea75672000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.143249 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ac83f439-8fd3-4813-b348-bdea75672000-webhook-cert\") pod \"packageserver-d55dfcdfc-l4nps\" (UID: \"ac83f439-8fd3-4813-b348-bdea75672000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.143283 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6e05e924-7aac-419c-82a7-0d9b9592b39f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.143316 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzjbj\" (UniqueName: \"kubernetes.io/projected/ac83f439-8fd3-4813-b348-bdea75672000-kube-api-access-nzjbj\") pod \"packageserver-d55dfcdfc-l4nps\" (UID: \"ac83f439-8fd3-4813-b348-bdea75672000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.143356 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd926b1a-e534-4f79-ab19-afca2be13183-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ffrcv\" (UID: \"cd926b1a-e534-4f79-ab19-afca2be13183\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffrcv" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.143389 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac9e18c9-3efe-4b57-a2d8-09aba942b999-metrics-tls\") pod \"ingress-operator-5b745b69d9-kk9j4\" (UID: \"ac9e18c9-3efe-4b57-a2d8-09aba942b999\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kk9j4" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.143418 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ngplb\" (UniqueName: \"kubernetes.io/projected/d0fc9922-46d2-4700-88d7-4322397193c2-kube-api-access-ngplb\") pod \"catalog-operator-68c6474976-pm48b\" (UID: \"d0fc9922-46d2-4700-88d7-4322397193c2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pm48b" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.143450 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ae42bdbe-152c-4f34-8bcf-f2284b1a09c6-images\") pod \"machine-config-operator-74547568cd-sz4l8\" (UID: \"ae42bdbe-152c-4f34-8bcf-f2284b1a09c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz4l8" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.143480 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-485b8\" (UniqueName: \"kubernetes.io/projected/6e05e924-7aac-419c-82a7-0d9b9592b39f-kube-api-access-485b8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.143503 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9fa458e6-be33-42f2-94ea-16ef5b241fa8-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9t6sj\" (UID: \"9fa458e6-be33-42f2-94ea-16ef5b241fa8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9t6sj" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.143523 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4bz8\" (UniqueName: \"kubernetes.io/projected/cd926b1a-e534-4f79-ab19-afca2be13183-kube-api-access-g4bz8\") pod \"cluster-image-registry-operator-dc59b4c8b-ffrcv\" (UID: \"cd926b1a-e534-4f79-ab19-afca2be13183\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffrcv" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.143547 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8238ee9f-1408-49ba-8a8f-961adc1488b8-serving-cert\") pod \"openshift-config-operator-7777fb866f-sgptm\" (UID: \"8238ee9f-1408-49ba-8a8f-961adc1488b8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sgptm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.143568 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ae42bdbe-152c-4f34-8bcf-f2284b1a09c6-proxy-tls\") pod \"machine-config-operator-74547568cd-sz4l8\" (UID: \"ae42bdbe-152c-4f34-8bcf-f2284b1a09c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz4l8" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.143593 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chm8h\" (UniqueName: \"kubernetes.io/projected/8238ee9f-1408-49ba-8a8f-961adc1488b8-kube-api-access-chm8h\") pod \"openshift-config-operator-7777fb866f-sgptm\" (UID: \"8238ee9f-1408-49ba-8a8f-961adc1488b8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sgptm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.143620 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1fc3fe65-482e-43ed-9669-7849bfc0bfd2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6hlzb\" (UID: \"1fc3fe65-482e-43ed-9669-7849bfc0bfd2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hlzb" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.143645 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llwqf\" (UniqueName: \"kubernetes.io/projected/76ca4784-e584-413d-b1cb-77f336e4f695-kube-api-access-llwqf\") pod \"router-default-5444994796-g9znh\" (UID: \"76ca4784-e584-413d-b1cb-77f336e4f695\") " pod="openshift-ingress/router-default-5444994796-g9znh" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.143669 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d0fc9922-46d2-4700-88d7-4322397193c2-profile-collector-cert\") pod \"catalog-operator-68c6474976-pm48b\" (UID: \"d0fc9922-46d2-4700-88d7-4322397193c2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pm48b" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.143697 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8238ee9f-1408-49ba-8a8f-961adc1488b8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sgptm\" (UID: \"8238ee9f-1408-49ba-8a8f-961adc1488b8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sgptm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.143796 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ae42bdbe-152c-4f34-8bcf-f2284b1a09c6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sz4l8\" (UID: \"ae42bdbe-152c-4f34-8bcf-f2284b1a09c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz4l8" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.144127 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76ca4784-e584-413d-b1cb-77f336e4f695-service-ca-bundle\") pod \"router-default-5444994796-g9znh\" (UID: \"76ca4784-e584-413d-b1cb-77f336e4f695\") " pod="openshift-ingress/router-default-5444994796-g9znh" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.144180 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd926b1a-e534-4f79-ab19-afca2be13183-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ffrcv\" (UID: \"cd926b1a-e534-4f79-ab19-afca2be13183\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffrcv" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.144211 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wh4s\" (UniqueName: \"kubernetes.io/projected/ac9e18c9-3efe-4b57-a2d8-09aba942b999-kube-api-access-9wh4s\") pod \"ingress-operator-5b745b69d9-kk9j4\" (UID: \"ac9e18c9-3efe-4b57-a2d8-09aba942b999\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kk9j4" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.144244 4972 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msdkz\" (UniqueName: \"kubernetes.io/projected/ae42bdbe-152c-4f34-8bcf-f2284b1a09c6-kube-api-access-msdkz\") pod \"machine-config-operator-74547568cd-sz4l8\" (UID: \"ae42bdbe-152c-4f34-8bcf-f2284b1a09c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz4l8" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.144322 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac9e18c9-3efe-4b57-a2d8-09aba942b999-trusted-ca\") pod \"ingress-operator-5b745b69d9-kk9j4\" (UID: \"ac9e18c9-3efe-4b57-a2d8-09aba942b999\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kk9j4" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.144369 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6e05e924-7aac-419c-82a7-0d9b9592b39f-registry-tls\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.144395 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6e05e924-7aac-419c-82a7-0d9b9592b39f-registry-certificates\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.144423 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6e05e924-7aac-419c-82a7-0d9b9592b39f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.144447 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76ca4784-e584-413d-b1cb-77f336e4f695-metrics-certs\") pod \"router-default-5444994796-g9znh\" (UID: \"76ca4784-e584-413d-b1cb-77f336e4f695\") " pod="openshift-ingress/router-default-5444994796-g9znh" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.144502 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd926b1a-e534-4f79-ab19-afca2be13183-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ffrcv\" (UID: \"cd926b1a-e534-4f79-ab19-afca2be13183\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffrcv" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.144531 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/76ca4784-e584-413d-b1cb-77f336e4f695-stats-auth\") pod \"router-default-5444994796-g9znh\" (UID: \"76ca4784-e584-413d-b1cb-77f336e4f695\") " pod="openshift-ingress/router-default-5444994796-g9znh" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.144554 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e05e924-7aac-419c-82a7-0d9b9592b39f-trusted-ca\") pod \"image-registry-697d97f7c8-5s9h7\" 
(UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.144577 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hklb\" (UniqueName: \"kubernetes.io/projected/9fa458e6-be33-42f2-94ea-16ef5b241fa8-kube-api-access-4hklb\") pod \"multus-admission-controller-857f4d67dd-9t6sj\" (UID: \"9fa458e6-be33-42f2-94ea-16ef5b241fa8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9t6sj" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.144602 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d0fc9922-46d2-4700-88d7-4322397193c2-srv-cert\") pod \"catalog-operator-68c6474976-pm48b\" (UID: \"d0fc9922-46d2-4700-88d7-4322397193c2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pm48b" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.144628 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ac83f439-8fd3-4813-b348-bdea75672000-tmpfs\") pod \"packageserver-d55dfcdfc-l4nps\" (UID: \"ac83f439-8fd3-4813-b348-bdea75672000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.144664 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: E1121 09:43:31.145058 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:31.645045614 +0000 UTC m=+156.754188112 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.148641 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76ca4784-e584-413d-b1cb-77f336e4f695-service-ca-bundle\") pod \"router-default-5444994796-g9znh\" (UID: \"76ca4784-e584-413d-b1cb-77f336e4f695\") " pod="openshift-ingress/router-default-5444994796-g9znh" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.152448 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e05e924-7aac-419c-82a7-0d9b9592b39f-trusted-ca\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.154309 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6e05e924-7aac-419c-82a7-0d9b9592b39f-registry-certificates\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.156628 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac9e18c9-3efe-4b57-a2d8-09aba942b999-trusted-ca\") pod \"ingress-operator-5b745b69d9-kk9j4\" (UID: \"ac9e18c9-3efe-4b57-a2d8-09aba942b999\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kk9j4" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.160114 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/76ca4784-e584-413d-b1cb-77f336e4f695-default-certificate\") pod \"router-default-5444994796-g9znh\" (UID: \"76ca4784-e584-413d-b1cb-77f336e4f695\") " pod="openshift-ingress/router-default-5444994796-g9znh" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.160230 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6e05e924-7aac-419c-82a7-0d9b9592b39f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.161110 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd926b1a-e534-4f79-ab19-afca2be13183-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ffrcv\" (UID: \"cd926b1a-e534-4f79-ab19-afca2be13183\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffrcv" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.163497 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ac83f439-8fd3-4813-b348-bdea75672000-tmpfs\") pod 
\"packageserver-d55dfcdfc-l4nps\" (UID: \"ac83f439-8fd3-4813-b348-bdea75672000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.163725 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76ca4784-e584-413d-b1cb-77f336e4f695-metrics-certs\") pod \"router-default-5444994796-g9znh\" (UID: \"76ca4784-e584-413d-b1cb-77f336e4f695\") " pod="openshift-ingress/router-default-5444994796-g9znh" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.164935 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac9e18c9-3efe-4b57-a2d8-09aba942b999-metrics-tls\") pod \"ingress-operator-5b745b69d9-kk9j4\" (UID: \"ac9e18c9-3efe-4b57-a2d8-09aba942b999\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kk9j4" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.166141 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/76ca4784-e584-413d-b1cb-77f336e4f695-stats-auth\") pod \"router-default-5444994796-g9znh\" (UID: \"76ca4784-e584-413d-b1cb-77f336e4f695\") " pod="openshift-ingress/router-default-5444994796-g9znh" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.166510 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ac83f439-8fd3-4813-b348-bdea75672000-webhook-cert\") pod \"packageserver-d55dfcdfc-l4nps\" (UID: \"ac83f439-8fd3-4813-b348-bdea75672000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.166601 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ac83f439-8fd3-4813-b348-bdea75672000-apiservice-cert\") pod \"packageserver-d55dfcdfc-l4nps\" (UID: \"ac83f439-8fd3-4813-b348-bdea75672000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.167175 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1fc3fe65-482e-43ed-9669-7849bfc0bfd2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6hlzb\" (UID: \"1fc3fe65-482e-43ed-9669-7849bfc0bfd2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hlzb" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.167450 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9fa458e6-be33-42f2-94ea-16ef5b241fa8-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9t6sj\" (UID: \"9fa458e6-be33-42f2-94ea-16ef5b241fa8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9t6sj" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.175727 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd926b1a-e534-4f79-ab19-afca2be13183-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ffrcv\" (UID: \"cd926b1a-e534-4f79-ab19-afca2be13183\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffrcv" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 
09:43:31.182982 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6e05e924-7aac-419c-82a7-0d9b9592b39f-registry-tls\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.183394 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-n6wh5"] Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.186695 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6e05e924-7aac-419c-82a7-0d9b9592b39f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.197376 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac9e18c9-3efe-4b57-a2d8-09aba942b999-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kk9j4\" (UID: \"ac9e18c9-3efe-4b57-a2d8-09aba942b999\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kk9j4" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.217131 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbbp8\" (UniqueName: \"kubernetes.io/projected/1fc3fe65-482e-43ed-9669-7849bfc0bfd2-kube-api-access-jbbp8\") pod \"control-plane-machine-set-operator-78cbb6b69f-6hlzb\" (UID: \"1fc3fe65-482e-43ed-9669-7849bfc0bfd2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hlzb" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.235896 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hklb\" (UniqueName: \"kubernetes.io/projected/9fa458e6-be33-42f2-94ea-16ef5b241fa8-kube-api-access-4hklb\") pod \"multus-admission-controller-857f4d67dd-9t6sj\" (UID: \"9fa458e6-be33-42f2-94ea-16ef5b241fa8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9t6sj" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.245367 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.245584 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a6597c90-c4ee-4856-b03f-f0fa1d3062f5-csi-data-dir\") pod \"csi-hostpathplugin-fwcgm\" (UID: \"a6597c90-c4ee-4856-b03f-f0fa1d3062f5\") " pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.245656 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngplb\" (UniqueName: \"kubernetes.io/projected/d0fc9922-46d2-4700-88d7-4322397193c2-kube-api-access-ngplb\") pod \"catalog-operator-68c6474976-pm48b\" (UID: \"d0fc9922-46d2-4700-88d7-4322397193c2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pm48b" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 
09:43:31.245678 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b5c77c-f0dd-4457-9333-2ed1ae64baa1-config\") pod \"service-ca-operator-777779d784-dm986\" (UID: \"84b5c77c-f0dd-4457-9333-2ed1ae64baa1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dm986" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.245694 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qnqm\" (UniqueName: \"kubernetes.io/projected/a6597c90-c4ee-4856-b03f-f0fa1d3062f5-kube-api-access-7qnqm\") pod \"csi-hostpathplugin-fwcgm\" (UID: \"a6597c90-c4ee-4856-b03f-f0fa1d3062f5\") " pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.245721 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e2c40c04-af4a-4f75-ad23-7366287447bf-node-bootstrap-token\") pod \"machine-config-server-bg74p\" (UID: \"e2c40c04-af4a-4f75-ad23-7366287447bf\") " pod="openshift-machine-config-operator/machine-config-server-bg74p" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.245735 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scgkm\" (UniqueName: \"kubernetes.io/projected/84b5c77c-f0dd-4457-9333-2ed1ae64baa1-kube-api-access-scgkm\") pod \"service-ca-operator-777779d784-dm986\" (UID: \"84b5c77c-f0dd-4457-9333-2ed1ae64baa1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dm986" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.245767 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ae42bdbe-152c-4f34-8bcf-f2284b1a09c6-images\") pod \"machine-config-operator-74547568cd-sz4l8\" (UID: \"ae42bdbe-152c-4f34-8bcf-f2284b1a09c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz4l8" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.245789 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6ea5bb5c-34b8-497e-9193-c3406d2f9756-signing-key\") pod \"service-ca-9c57cc56f-4xjjp\" (UID: \"6ea5bb5c-34b8-497e-9193-c3406d2f9756\") " pod="openshift-service-ca/service-ca-9c57cc56f-4xjjp" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.245875 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a6597c90-c4ee-4856-b03f-f0fa1d3062f5-plugins-dir\") pod \"csi-hostpathplugin-fwcgm\" (UID: \"a6597c90-c4ee-4856-b03f-f0fa1d3062f5\") " pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" Nov 21 09:43:31 crc kubenswrapper[4972]: E1121 09:43:31.246148 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:31.746126204 +0000 UTC m=+156.855268712 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.247884 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgd8m" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.248080 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ae42bdbe-152c-4f34-8bcf-f2284b1a09c6-images\") pod \"machine-config-operator-74547568cd-sz4l8\" (UID: \"ae42bdbe-152c-4f34-8bcf-f2284b1a09c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz4l8" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.250103 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8238ee9f-1408-49ba-8a8f-961adc1488b8-serving-cert\") pod \"openshift-config-operator-7777fb866f-sgptm\" (UID: \"8238ee9f-1408-49ba-8a8f-961adc1488b8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sgptm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.250173 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ae42bdbe-152c-4f34-8bcf-f2284b1a09c6-proxy-tls\") pod \"machine-config-operator-74547568cd-sz4l8\" (UID: \"ae42bdbe-152c-4f34-8bcf-f2284b1a09c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz4l8" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.250233 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1380a7fd-719d-420e-8a63-bd959e4e18ab-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mjkx2\" (UID: \"1380a7fd-719d-420e-8a63-bd959e4e18ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mjkx2" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.250258 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w28fs\" (UniqueName: \"kubernetes.io/projected/5fc91391-3c93-4fe0-9c24-f8aad9c21fd2-kube-api-access-w28fs\") pod \"collect-profiles-29395290-bswxc\" (UID: \"5fc91391-3c93-4fe0-9c24-f8aad9c21fd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.250286 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chm8h\" (UniqueName: \"kubernetes.io/projected/8238ee9f-1408-49ba-8a8f-961adc1488b8-kube-api-access-chm8h\") pod \"openshift-config-operator-7777fb866f-sgptm\" (UID: \"8238ee9f-1408-49ba-8a8f-961adc1488b8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sgptm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.250324 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/d0fc9922-46d2-4700-88d7-4322397193c2-profile-collector-cert\") pod \"catalog-operator-68c6474976-pm48b\" (UID: \"d0fc9922-46d2-4700-88d7-4322397193c2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pm48b" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.250348 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4vbn\" (UniqueName: \"kubernetes.io/projected/e4f03066-ed74-40ad-ac94-c9c2d83f648e-kube-api-access-r4vbn\") pod \"marketplace-operator-79b997595-qh9tk\" (UID: \"e4f03066-ed74-40ad-ac94-c9c2d83f648e\") " pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.250410 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8238ee9f-1408-49ba-8a8f-961adc1488b8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sgptm\" (UID: \"8238ee9f-1408-49ba-8a8f-961adc1488b8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sgptm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.250516 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4f03066-ed74-40ad-ac94-c9c2d83f648e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qh9tk\" (UID: \"e4f03066-ed74-40ad-ac94-c9c2d83f648e\") " pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.250561 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6ea5bb5c-34b8-497e-9193-c3406d2f9756-signing-cabundle\") pod \"service-ca-9c57cc56f-4xjjp\" (UID: \"6ea5bb5c-34b8-497e-9193-c3406d2f9756\") " pod="openshift-service-ca/service-ca-9c57cc56f-4xjjp" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.250609 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a6597c90-c4ee-4856-b03f-f0fa1d3062f5-registration-dir\") pod \"csi-hostpathplugin-fwcgm\" (UID: \"a6597c90-c4ee-4856-b03f-f0fa1d3062f5\") " pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.250647 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ae42bdbe-152c-4f34-8bcf-f2284b1a09c6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sz4l8\" (UID: \"ae42bdbe-152c-4f34-8bcf-f2284b1a09c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz4l8" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.250700 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fc91391-3c93-4fe0-9c24-f8aad9c21fd2-config-volume\") pod \"collect-profiles-29395290-bswxc\" (UID: \"5fc91391-3c93-4fe0-9c24-f8aad9c21fd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.250754 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msdkz\" (UniqueName: 
\"kubernetes.io/projected/ae42bdbe-152c-4f34-8bcf-f2284b1a09c6-kube-api-access-msdkz\") pod \"machine-config-operator-74547568cd-sz4l8\" (UID: \"ae42bdbe-152c-4f34-8bcf-f2284b1a09c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz4l8" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.250781 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5fc91391-3c93-4fe0-9c24-f8aad9c21fd2-secret-volume\") pod \"collect-profiles-29395290-bswxc\" (UID: \"5fc91391-3c93-4fe0-9c24-f8aad9c21fd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.250906 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jjgp\" (UniqueName: \"kubernetes.io/projected/9741a397-9e67-459c-9dcd-9163fb05c6e4-kube-api-access-8jjgp\") pod \"package-server-manager-789f6589d5-gb5dr\" (UID: \"9741a397-9e67-459c-9dcd-9163fb05c6e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gb5dr" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.250934 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zclzm\" (UniqueName: \"kubernetes.io/projected/e2c40c04-af4a-4f75-ad23-7366287447bf-kube-api-access-zclzm\") pod \"machine-config-server-bg74p\" (UID: \"e2c40c04-af4a-4f75-ad23-7366287447bf\") " pod="openshift-machine-config-operator/machine-config-server-bg74p" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.250969 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e2c40c04-af4a-4f75-ad23-7366287447bf-certs\") pod \"machine-config-server-bg74p\" (UID: \"e2c40c04-af4a-4f75-ad23-7366287447bf\") " pod="openshift-machine-config-operator/machine-config-server-bg74p" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.251075 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d0fc9922-46d2-4700-88d7-4322397193c2-srv-cert\") pod \"catalog-operator-68c6474976-pm48b\" (UID: \"d0fc9922-46d2-4700-88d7-4322397193c2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pm48b" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.251142 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.251165 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b5c77c-f0dd-4457-9333-2ed1ae64baa1-serving-cert\") pod \"service-ca-operator-777779d784-dm986\" (UID: \"84b5c77c-f0dd-4457-9333-2ed1ae64baa1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dm986" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.251217 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjgnk\" (UniqueName: 
\"kubernetes.io/projected/3b724f1a-56bb-4151-8740-29fdd824a900-kube-api-access-mjgnk\") pod \"ingress-canary-drf8x\" (UID: \"3b724f1a-56bb-4151-8740-29fdd824a900\") " pod="openshift-ingress-canary/ingress-canary-drf8x" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.251256 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b724f1a-56bb-4151-8740-29fdd824a900-cert\") pod \"ingress-canary-drf8x\" (UID: \"3b724f1a-56bb-4151-8740-29fdd824a900\") " pod="openshift-ingress-canary/ingress-canary-drf8x" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.251294 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8cdd\" (UniqueName: \"kubernetes.io/projected/6ea5bb5c-34b8-497e-9193-c3406d2f9756-kube-api-access-d8cdd\") pod \"service-ca-9c57cc56f-4xjjp\" (UID: \"6ea5bb5c-34b8-497e-9193-c3406d2f9756\") " pod="openshift-service-ca/service-ca-9c57cc56f-4xjjp" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.252487 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1380a7fd-719d-420e-8a63-bd959e4e18ab-proxy-tls\") pod \"machine-config-controller-84d6567774-mjkx2\" (UID: \"1380a7fd-719d-420e-8a63-bd959e4e18ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mjkx2" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.252722 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27qj8\" (UniqueName: \"kubernetes.io/projected/1380a7fd-719d-420e-8a63-bd959e4e18ab-kube-api-access-27qj8\") pod \"machine-config-controller-84d6567774-mjkx2\" (UID: \"1380a7fd-719d-420e-8a63-bd959e4e18ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mjkx2" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.252776 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e4f03066-ed74-40ad-ac94-c9c2d83f648e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qh9tk\" (UID: \"e4f03066-ed74-40ad-ac94-c9c2d83f648e\") " pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.253004 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8238ee9f-1408-49ba-8a8f-961adc1488b8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sgptm\" (UID: \"8238ee9f-1408-49ba-8a8f-961adc1488b8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sgptm" Nov 21 09:43:31 crc kubenswrapper[4972]: E1121 09:43:31.253140 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:31.753127802 +0000 UTC m=+156.862270300 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.253194 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a6597c90-c4ee-4856-b03f-f0fa1d3062f5-socket-dir\") pod \"csi-hostpathplugin-fwcgm\" (UID: \"a6597c90-c4ee-4856-b03f-f0fa1d3062f5\") " pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.253472 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a6597c90-c4ee-4856-b03f-f0fa1d3062f5-mountpoint-dir\") pod \"csi-hostpathplugin-fwcgm\" (UID: \"a6597c90-c4ee-4856-b03f-f0fa1d3062f5\") " pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.253724 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9741a397-9e67-459c-9dcd-9163fb05c6e4-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gb5dr\" (UID: \"9741a397-9e67-459c-9dcd-9163fb05c6e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gb5dr" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.254334 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ae42bdbe-152c-4f34-8bcf-f2284b1a09c6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sz4l8\" (UID: \"ae42bdbe-152c-4f34-8bcf-f2284b1a09c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz4l8" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.258620 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ae42bdbe-152c-4f34-8bcf-f2284b1a09c6-proxy-tls\") pod \"machine-config-operator-74547568cd-sz4l8\" (UID: \"ae42bdbe-152c-4f34-8bcf-f2284b1a09c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz4l8" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.264745 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-485b8\" (UniqueName: \"kubernetes.io/projected/6e05e924-7aac-419c-82a7-0d9b9592b39f-kube-api-access-485b8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.266273 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6e05e924-7aac-419c-82a7-0d9b9592b39f-bound-sa-token\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.266476 4972 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8238ee9f-1408-49ba-8a8f-961adc1488b8-serving-cert\") pod \"openshift-config-operator-7777fb866f-sgptm\" (UID: \"8238ee9f-1408-49ba-8a8f-961adc1488b8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sgptm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.266666 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d0fc9922-46d2-4700-88d7-4322397193c2-srv-cert\") pod \"catalog-operator-68c6474976-pm48b\" (UID: \"d0fc9922-46d2-4700-88d7-4322397193c2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pm48b" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.268156 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d0fc9922-46d2-4700-88d7-4322397193c2-profile-collector-cert\") pod \"catalog-operator-68c6474976-pm48b\" (UID: \"d0fc9922-46d2-4700-88d7-4322397193c2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pm48b" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.270562 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9t6sj" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.285280 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hlzb" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.288252 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd926b1a-e534-4f79-ab19-afca2be13183-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ffrcv\" (UID: \"cd926b1a-e534-4f79-ab19-afca2be13183\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffrcv" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.311891 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzjbj\" (UniqueName: \"kubernetes.io/projected/ac83f439-8fd3-4813-b348-bdea75672000-kube-api-access-nzjbj\") pod \"packageserver-d55dfcdfc-l4nps\" (UID: \"ac83f439-8fd3-4813-b348-bdea75672000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.337585 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355095 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355297 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9741a397-9e67-459c-9dcd-9163fb05c6e4-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gb5dr\" (UID: \"9741a397-9e67-459c-9dcd-9163fb05c6e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gb5dr" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355336 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a6597c90-c4ee-4856-b03f-f0fa1d3062f5-csi-data-dir\") pod \"csi-hostpathplugin-fwcgm\" (UID: \"a6597c90-c4ee-4856-b03f-f0fa1d3062f5\") " pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355360 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b5c77c-f0dd-4457-9333-2ed1ae64baa1-config\") pod \"service-ca-operator-777779d784-dm986\" (UID: \"84b5c77c-f0dd-4457-9333-2ed1ae64baa1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dm986" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355373 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qnqm\" (UniqueName: \"kubernetes.io/projected/a6597c90-c4ee-4856-b03f-f0fa1d3062f5-kube-api-access-7qnqm\") pod \"csi-hostpathplugin-fwcgm\" (UID: \"a6597c90-c4ee-4856-b03f-f0fa1d3062f5\") " pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355389 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e2c40c04-af4a-4f75-ad23-7366287447bf-node-bootstrap-token\") pod \"machine-config-server-bg74p\" (UID: \"e2c40c04-af4a-4f75-ad23-7366287447bf\") " pod="openshift-machine-config-operator/machine-config-server-bg74p" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355404 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scgkm\" (UniqueName: \"kubernetes.io/projected/84b5c77c-f0dd-4457-9333-2ed1ae64baa1-kube-api-access-scgkm\") pod \"service-ca-operator-777779d784-dm986\" (UID: \"84b5c77c-f0dd-4457-9333-2ed1ae64baa1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dm986" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355425 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6ea5bb5c-34b8-497e-9193-c3406d2f9756-signing-key\") pod \"service-ca-9c57cc56f-4xjjp\" (UID: \"6ea5bb5c-34b8-497e-9193-c3406d2f9756\") " pod="openshift-service-ca/service-ca-9c57cc56f-4xjjp" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355443 4972 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a6597c90-c4ee-4856-b03f-f0fa1d3062f5-plugins-dir\") pod \"csi-hostpathplugin-fwcgm\" (UID: \"a6597c90-c4ee-4856-b03f-f0fa1d3062f5\") " pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355460 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w28fs\" (UniqueName: \"kubernetes.io/projected/5fc91391-3c93-4fe0-9c24-f8aad9c21fd2-kube-api-access-w28fs\") pod \"collect-profiles-29395290-bswxc\" (UID: \"5fc91391-3c93-4fe0-9c24-f8aad9c21fd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355478 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1380a7fd-719d-420e-8a63-bd959e4e18ab-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mjkx2\" (UID: \"1380a7fd-719d-420e-8a63-bd959e4e18ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mjkx2" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355508 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4vbn\" (UniqueName: \"kubernetes.io/projected/e4f03066-ed74-40ad-ac94-c9c2d83f648e-kube-api-access-r4vbn\") pod \"marketplace-operator-79b997595-qh9tk\" (UID: \"e4f03066-ed74-40ad-ac94-c9c2d83f648e\") " pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355537 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4f03066-ed74-40ad-ac94-c9c2d83f648e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qh9tk\" (UID: \"e4f03066-ed74-40ad-ac94-c9c2d83f648e\") " pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355552 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6ea5bb5c-34b8-497e-9193-c3406d2f9756-signing-cabundle\") pod \"service-ca-9c57cc56f-4xjjp\" (UID: \"6ea5bb5c-34b8-497e-9193-c3406d2f9756\") " pod="openshift-service-ca/service-ca-9c57cc56f-4xjjp" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355567 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a6597c90-c4ee-4856-b03f-f0fa1d3062f5-registration-dir\") pod \"csi-hostpathplugin-fwcgm\" (UID: \"a6597c90-c4ee-4856-b03f-f0fa1d3062f5\") " pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355595 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fc91391-3c93-4fe0-9c24-f8aad9c21fd2-config-volume\") pod \"collect-profiles-29395290-bswxc\" (UID: \"5fc91391-3c93-4fe0-9c24-f8aad9c21fd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355615 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5fc91391-3c93-4fe0-9c24-f8aad9c21fd2-secret-volume\") pod \"collect-profiles-29395290-bswxc\" (UID: 
\"5fc91391-3c93-4fe0-9c24-f8aad9c21fd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355640 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jjgp\" (UniqueName: \"kubernetes.io/projected/9741a397-9e67-459c-9dcd-9163fb05c6e4-kube-api-access-8jjgp\") pod \"package-server-manager-789f6589d5-gb5dr\" (UID: \"9741a397-9e67-459c-9dcd-9163fb05c6e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gb5dr" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355658 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zclzm\" (UniqueName: \"kubernetes.io/projected/e2c40c04-af4a-4f75-ad23-7366287447bf-kube-api-access-zclzm\") pod \"machine-config-server-bg74p\" (UID: \"e2c40c04-af4a-4f75-ad23-7366287447bf\") " pod="openshift-machine-config-operator/machine-config-server-bg74p" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355673 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e2c40c04-af4a-4f75-ad23-7366287447bf-certs\") pod \"machine-config-server-bg74p\" (UID: \"e2c40c04-af4a-4f75-ad23-7366287447bf\") " pod="openshift-machine-config-operator/machine-config-server-bg74p" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355701 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b5c77c-f0dd-4457-9333-2ed1ae64baa1-serving-cert\") pod \"service-ca-operator-777779d784-dm986\" (UID: \"84b5c77c-f0dd-4457-9333-2ed1ae64baa1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dm986" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355715 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjgnk\" (UniqueName: \"kubernetes.io/projected/3b724f1a-56bb-4151-8740-29fdd824a900-kube-api-access-mjgnk\") pod \"ingress-canary-drf8x\" (UID: \"3b724f1a-56bb-4151-8740-29fdd824a900\") " pod="openshift-ingress-canary/ingress-canary-drf8x" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355734 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b724f1a-56bb-4151-8740-29fdd824a900-cert\") pod \"ingress-canary-drf8x\" (UID: \"3b724f1a-56bb-4151-8740-29fdd824a900\") " pod="openshift-ingress-canary/ingress-canary-drf8x" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355749 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8cdd\" (UniqueName: \"kubernetes.io/projected/6ea5bb5c-34b8-497e-9193-c3406d2f9756-kube-api-access-d8cdd\") pod \"service-ca-9c57cc56f-4xjjp\" (UID: \"6ea5bb5c-34b8-497e-9193-c3406d2f9756\") " pod="openshift-service-ca/service-ca-9c57cc56f-4xjjp" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355771 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1380a7fd-719d-420e-8a63-bd959e4e18ab-proxy-tls\") pod \"machine-config-controller-84d6567774-mjkx2\" (UID: \"1380a7fd-719d-420e-8a63-bd959e4e18ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mjkx2" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355785 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-27qj8\" (UniqueName: \"kubernetes.io/projected/1380a7fd-719d-420e-8a63-bd959e4e18ab-kube-api-access-27qj8\") pod \"machine-config-controller-84d6567774-mjkx2\" (UID: \"1380a7fd-719d-420e-8a63-bd959e4e18ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mjkx2" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355801 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e4f03066-ed74-40ad-ac94-c9c2d83f648e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qh9tk\" (UID: \"e4f03066-ed74-40ad-ac94-c9c2d83f648e\") " pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355818 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a6597c90-c4ee-4856-b03f-f0fa1d3062f5-socket-dir\") pod \"csi-hostpathplugin-fwcgm\" (UID: \"a6597c90-c4ee-4856-b03f-f0fa1d3062f5\") " pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355847 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a6597c90-c4ee-4856-b03f-f0fa1d3062f5-mountpoint-dir\") pod \"csi-hostpathplugin-fwcgm\" (UID: \"a6597c90-c4ee-4856-b03f-f0fa1d3062f5\") " pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.355926 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a6597c90-c4ee-4856-b03f-f0fa1d3062f5-mountpoint-dir\") pod \"csi-hostpathplugin-fwcgm\" (UID: \"a6597c90-c4ee-4856-b03f-f0fa1d3062f5\") " pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.356114 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a6597c90-c4ee-4856-b03f-f0fa1d3062f5-registration-dir\") pod \"csi-hostpathplugin-fwcgm\" (UID: \"a6597c90-c4ee-4856-b03f-f0fa1d3062f5\") " pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.356777 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fc91391-3c93-4fe0-9c24-f8aad9c21fd2-config-volume\") pod \"collect-profiles-29395290-bswxc\" (UID: \"5fc91391-3c93-4fe0-9c24-f8aad9c21fd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc" Nov 21 09:43:31 crc kubenswrapper[4972]: E1121 09:43:31.357902 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:31.857888847 +0000 UTC m=+156.967031335 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.359208 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6ea5bb5c-34b8-497e-9193-c3406d2f9756-signing-cabundle\") pod \"service-ca-9c57cc56f-4xjjp\" (UID: \"6ea5bb5c-34b8-497e-9193-c3406d2f9756\") " pod="openshift-service-ca/service-ca-9c57cc56f-4xjjp" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.360599 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a6597c90-c4ee-4856-b03f-f0fa1d3062f5-csi-data-dir\") pod \"csi-hostpathplugin-fwcgm\" (UID: \"a6597c90-c4ee-4856-b03f-f0fa1d3062f5\") " pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.360676 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e2c40c04-af4a-4f75-ad23-7366287447bf-certs\") pod \"machine-config-server-bg74p\" (UID: \"e2c40c04-af4a-4f75-ad23-7366287447bf\") " pod="openshift-machine-config-operator/machine-config-server-bg74p" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.360753 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a6597c90-c4ee-4856-b03f-f0fa1d3062f5-socket-dir\") pod \"csi-hostpathplugin-fwcgm\" (UID: \"a6597c90-c4ee-4856-b03f-f0fa1d3062f5\") " pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.361425 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1380a7fd-719d-420e-8a63-bd959e4e18ab-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mjkx2\" (UID: \"1380a7fd-719d-420e-8a63-bd959e4e18ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mjkx2" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.361748 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a6597c90-c4ee-4856-b03f-f0fa1d3062f5-plugins-dir\") pod \"csi-hostpathplugin-fwcgm\" (UID: \"a6597c90-c4ee-4856-b03f-f0fa1d3062f5\") " pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.362241 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b5c77c-f0dd-4457-9333-2ed1ae64baa1-config\") pod \"service-ca-operator-777779d784-dm986\" (UID: \"84b5c77c-f0dd-4457-9333-2ed1ae64baa1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dm986" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.363297 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4f03066-ed74-40ad-ac94-c9c2d83f648e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qh9tk\" (UID: 
\"e4f03066-ed74-40ad-ac94-c9c2d83f648e\") " pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.363849 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4bz8\" (UniqueName: \"kubernetes.io/projected/cd926b1a-e534-4f79-ab19-afca2be13183-kube-api-access-g4bz8\") pod \"cluster-image-registry-operator-dc59b4c8b-ffrcv\" (UID: \"cd926b1a-e534-4f79-ab19-afca2be13183\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffrcv" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.370937 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1380a7fd-719d-420e-8a63-bd959e4e18ab-proxy-tls\") pod \"machine-config-controller-84d6567774-mjkx2\" (UID: \"1380a7fd-719d-420e-8a63-bd959e4e18ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mjkx2" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.371691 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9741a397-9e67-459c-9dcd-9163fb05c6e4-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gb5dr\" (UID: \"9741a397-9e67-459c-9dcd-9163fb05c6e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gb5dr" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.374033 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b724f1a-56bb-4151-8740-29fdd824a900-cert\") pod \"ingress-canary-drf8x\" (UID: \"3b724f1a-56bb-4151-8740-29fdd824a900\") " pod="openshift-ingress-canary/ingress-canary-drf8x" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.374179 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llwqf\" (UniqueName: \"kubernetes.io/projected/76ca4784-e584-413d-b1cb-77f336e4f695-kube-api-access-llwqf\") pod \"router-default-5444994796-g9znh\" (UID: \"76ca4784-e584-413d-b1cb-77f336e4f695\") " pod="openshift-ingress/router-default-5444994796-g9znh" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.374545 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5fc91391-3c93-4fe0-9c24-f8aad9c21fd2-secret-volume\") pod \"collect-profiles-29395290-bswxc\" (UID: \"5fc91391-3c93-4fe0-9c24-f8aad9c21fd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.375217 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wh4s\" (UniqueName: \"kubernetes.io/projected/ac9e18c9-3efe-4b57-a2d8-09aba942b999-kube-api-access-9wh4s\") pod \"ingress-operator-5b745b69d9-kk9j4\" (UID: \"ac9e18c9-3efe-4b57-a2d8-09aba942b999\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kk9j4" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.378825 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e4f03066-ed74-40ad-ac94-c9c2d83f648e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qh9tk\" (UID: \"e4f03066-ed74-40ad-ac94-c9c2d83f648e\") " pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.379031 4972 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e2c40c04-af4a-4f75-ad23-7366287447bf-node-bootstrap-token\") pod \"machine-config-server-bg74p\" (UID: \"e2c40c04-af4a-4f75-ad23-7366287447bf\") " pod="openshift-machine-config-operator/machine-config-server-bg74p" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.379248 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b5c77c-f0dd-4457-9333-2ed1ae64baa1-serving-cert\") pod \"service-ca-operator-777779d784-dm986\" (UID: \"84b5c77c-f0dd-4457-9333-2ed1ae64baa1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dm986" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.384909 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6ea5bb5c-34b8-497e-9193-c3406d2f9756-signing-key\") pod \"service-ca-9c57cc56f-4xjjp\" (UID: \"6ea5bb5c-34b8-497e-9193-c3406d2f9756\") " pod="openshift-service-ca/service-ca-9c57cc56f-4xjjp" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.406102 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngplb\" (UniqueName: \"kubernetes.io/projected/d0fc9922-46d2-4700-88d7-4322397193c2-kube-api-access-ngplb\") pod \"catalog-operator-68c6474976-pm48b\" (UID: \"d0fc9922-46d2-4700-88d7-4322397193c2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pm48b" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.429963 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msdkz\" (UniqueName: \"kubernetes.io/projected/ae42bdbe-152c-4f34-8bcf-f2284b1a09c6-kube-api-access-msdkz\") pod \"machine-config-operator-74547568cd-sz4l8\" (UID: \"ae42bdbe-152c-4f34-8bcf-f2284b1a09c6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz4l8" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.448596 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chm8h\" (UniqueName: \"kubernetes.io/projected/8238ee9f-1408-49ba-8a8f-961adc1488b8-kube-api-access-chm8h\") pod \"openshift-config-operator-7777fb866f-sgptm\" (UID: \"8238ee9f-1408-49ba-8a8f-961adc1488b8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sgptm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.459723 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: E1121 09:43:31.460232 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:31.960217532 +0000 UTC m=+157.069360030 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.470941 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2sdbs"] Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.474117 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjgnk\" (UniqueName: \"kubernetes.io/projected/3b724f1a-56bb-4151-8740-29fdd824a900-kube-api-access-mjgnk\") pod \"ingress-canary-drf8x\" (UID: \"3b724f1a-56bb-4151-8740-29fdd824a900\") " pod="openshift-ingress-canary/ingress-canary-drf8x" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.477688 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zdj2c"] Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.494313 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fd2p7"] Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.503217 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scgkm\" (UniqueName: \"kubernetes.io/projected/84b5c77c-f0dd-4457-9333-2ed1ae64baa1-kube-api-access-scgkm\") pod \"service-ca-operator-777779d784-dm986\" (UID: \"84b5c77c-f0dd-4457-9333-2ed1ae64baa1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dm986" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.509210 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8cdd\" (UniqueName: \"kubernetes.io/projected/6ea5bb5c-34b8-497e-9193-c3406d2f9756-kube-api-access-d8cdd\") pod \"service-ca-9c57cc56f-4xjjp\" (UID: \"6ea5bb5c-34b8-497e-9193-c3406d2f9756\") " pod="openshift-service-ca/service-ca-9c57cc56f-4xjjp" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.517222 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffrcv" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.521749 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jjgp\" (UniqueName: \"kubernetes.io/projected/9741a397-9e67-459c-9dcd-9163fb05c6e4-kube-api-access-8jjgp\") pod \"package-server-manager-789f6589d5-gb5dr\" (UID: \"9741a397-9e67-459c-9dcd-9163fb05c6e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gb5dr" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.543394 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zclzm\" (UniqueName: \"kubernetes.io/projected/e2c40c04-af4a-4f75-ad23-7366287447bf-kube-api-access-zclzm\") pod \"machine-config-server-bg74p\" (UID: \"e2c40c04-af4a-4f75-ad23-7366287447bf\") " pod="openshift-machine-config-operator/machine-config-server-bg74p" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.560976 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:31 crc kubenswrapper[4972]: E1121 09:43:31.561476 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:32.061457737 +0000 UTC m=+157.170600235 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.562767 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sgptm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.567018 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27qj8\" (UniqueName: \"kubernetes.io/projected/1380a7fd-719d-420e-8a63-bd959e4e18ab-kube-api-access-27qj8\") pod \"machine-config-controller-84d6567774-mjkx2\" (UID: \"1380a7fd-719d-420e-8a63-bd959e4e18ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mjkx2" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.578291 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kk9j4" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.599915 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w28fs\" (UniqueName: \"kubernetes.io/projected/5fc91391-3c93-4fe0-9c24-f8aad9c21fd2-kube-api-access-w28fs\") pod \"collect-profiles-29395290-bswxc\" (UID: \"5fc91391-3c93-4fe0-9c24-f8aad9c21fd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.602130 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pm48b" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.608743 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz4l8" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.616512 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qnqm\" (UniqueName: \"kubernetes.io/projected/a6597c90-c4ee-4856-b03f-f0fa1d3062f5-kube-api-access-7qnqm\") pod \"csi-hostpathplugin-fwcgm\" (UID: \"a6597c90-c4ee-4856-b03f-f0fa1d3062f5\") " pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.626151 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-g9znh" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.631479 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4vbn\" (UniqueName: \"kubernetes.io/projected/e4f03066-ed74-40ad-ac94-c9c2d83f648e-kube-api-access-r4vbn\") pod \"marketplace-operator-79b997595-qh9tk\" (UID: \"e4f03066-ed74-40ad-ac94-c9c2d83f648e\") " pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.632447 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4nd8h" event={"ID":"caa7b1cd-346c-4aba-9924-25dba85fcc5f","Type":"ContainerStarted","Data":"f1bea5b035cec529cfa69e33d5fbac4f52cad947b3a8e2325a67e0a92cbd3bf2"} Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.632493 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4nd8h" event={"ID":"caa7b1cd-346c-4aba-9924-25dba85fcc5f","Type":"ContainerStarted","Data":"9321e10b3d252bc2d88efbe08f6ed56a18658dd0cdf8118a94bb309cd41b998c"} Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.636296 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-w2c2r"] Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.652333 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrxfz"] Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.653373 4972 generic.go:334] "Generic (PLEG): container finished" podID="12c25e78-a24e-4962-8976-3bc097fdaaf6" containerID="4284ea7b623c60a2ddfbebe587c7500162f60f711afd2802752a69770a454c59" exitCode=0 Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.653435 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" 
event={"ID":"12c25e78-a24e-4962-8976-3bc097fdaaf6","Type":"ContainerDied","Data":"4284ea7b623c60a2ddfbebe587c7500162f60f711afd2802752a69770a454c59"} Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.655014 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gb5dr" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.656697 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-b5tdm"] Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.657718 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswwj" event={"ID":"7d4f9796-6468-488d-ac2e-afcf480c57fc","Type":"ContainerStarted","Data":"6f3a859f943678b9c412432d41cb0e782382787df8512387b6041cb8f5d5952e"} Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.657758 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswwj" event={"ID":"7d4f9796-6468-488d-ac2e-afcf480c57fc","Type":"ContainerStarted","Data":"b7811cbbf7016af9606e41937fb66b1b7bad72fecca7fdad8eb79629e9e4f2ca"} Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.658666 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zdj2c" event={"ID":"f7d0d69c-8687-4e4f-9069-7db996719dab","Type":"ContainerStarted","Data":"0eaf3b6c6d1cdae9f64f14b9bf5e663f8f209f6cafbf2e5e1ac42ee8756a3607"} Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.661049 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" event={"ID":"6068d0a6-b0b7-44af-8dcd-995d728bf03a","Type":"ContainerStarted","Data":"6dac65444381615580d7c5c2a13c1b28392da59ffef9205b56f9dbd137b7e792"} Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.662192 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4xjjp" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.663521 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.663698 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" event={"ID":"a33da252-8a42-4fb1-8663-b4046881cae0","Type":"ContainerStarted","Data":"796a430639b333770dd8d8076dfec81c7d24d42f73a29c8c890d938c7596f4de"} Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.663754 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" event={"ID":"a33da252-8a42-4fb1-8663-b4046881cae0","Type":"ContainerStarted","Data":"d1f8aea372e176dd30d28ef28de414ddf1e3a26c82b72b42bcd5f2ff94a6b008"} Nov 21 09:43:31 crc kubenswrapper[4972]: E1121 09:43:31.664036 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-21 09:43:32.164014029 +0000 UTC m=+157.273156527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.664575 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.665638 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jdqq6" event={"ID":"3897d9bc-e576-4575-8451-10a0e3a73517","Type":"ContainerStarted","Data":"38723d5ffa8c57ee689294f2755a8a1e60cf7811c59187edc297f7f42463bc3d"} Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.666806 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2sdbs" event={"ID":"3ded0e7e-1cf4-4f3e-9907-33a003e4e5b3","Type":"ContainerStarted","Data":"7d6bf288e43a7bba07e567fadebc06618a0bd17bf01f4281dc7d5e1fc03c00b0"} Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.667974 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-j7xxl" event={"ID":"7b0e4d64-f901-4a4e-9644-408eb534401e","Type":"ContainerStarted","Data":"98c7e87ada638d1f8994428ed2de5ff17f70f867ea3a5f22448fbeb0a8f69a38"} Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.668008 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-j7xxl" event={"ID":"7b0e4d64-f901-4a4e-9644-408eb534401e","Type":"ContainerStarted","Data":"124f3d9bdcc65ef31e252a5f8e57248caf6e499772ea0946beb4d7fd4ffa3c64"} Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.670654 4972 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-hrxkw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.670710 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" podUID="a33da252-8a42-4fb1-8663-b4046881cae0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.670815 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.680083 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dm986" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.686127 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mjkx2" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.699145 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.730971 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.737559 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-drf8x" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.754058 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bg74p" Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.765508 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:31 crc kubenswrapper[4972]: E1121 09:43:31.767029 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:32.267009633 +0000 UTC m=+157.376152131 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.869925 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: E1121 09:43:31.870420 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:32.370399389 +0000 UTC m=+157.479541887 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.907643 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-77wn4"] Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.922955 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-swwr5"] Nov 21 09:43:31 crc kubenswrapper[4972]: W1121 09:43:31.930592 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31dd0dd8_9279_46ab_83bf_92282256204b.slice/crio-6cae68fa90badf5058f38ae662756d3bed0fb16cb0a438ee9bf26ef74346dcaf WatchSource:0}: Error finding container 6cae68fa90badf5058f38ae662756d3bed0fb16cb0a438ee9bf26ef74346dcaf: Status 404 returned error can't find the container with id 6cae68fa90badf5058f38ae662756d3bed0fb16cb0a438ee9bf26ef74346dcaf Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.934305 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm"] Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.937941 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-n8mj7"] Nov 21 09:43:31 crc kubenswrapper[4972]: W1121 09:43:31.953102 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5df7d34a_5265_4653_8525_68dc1e2109fd.slice/crio-fee4ff3a870373407a7626097a00eef04c6ce50aa7c06c33beb25d0ea8bff5f3 WatchSource:0}: Error finding container fee4ff3a870373407a7626097a00eef04c6ce50aa7c06c33beb25d0ea8bff5f3: Status 404 returned error can't find the container with id fee4ff3a870373407a7626097a00eef04c6ce50aa7c06c33beb25d0ea8bff5f3 Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.970785 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:31 crc kubenswrapper[4972]: E1121 09:43:31.971134 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:32.471072767 +0000 UTC m=+157.580215265 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.972050 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:31 crc kubenswrapper[4972]: E1121 09:43:31.972458 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:32.472439096 +0000 UTC m=+157.581581594 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.993025 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sgptm"] Nov 21 09:43:31 crc kubenswrapper[4972]: I1121 09:43:31.994421 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgd8m"] Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.000559 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8"] Nov 21 09:43:32 crc kubenswrapper[4972]: W1121 09:43:32.016967 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58f8e920_8907_4555_80dc_c00b2af7c80a.slice/crio-c45e8dc5cef0c78ac8a16d15f3eeed57dc82b541f9ec23f194e8fe78661c936f WatchSource:0}: Error finding container c45e8dc5cef0c78ac8a16d15f3eeed57dc82b541f9ec23f194e8fe78661c936f: Status 404 returned error can't find the container with id c45e8dc5cef0c78ac8a16d15f3eeed57dc82b541f9ec23f194e8fe78661c936f Nov 21 09:43:32 crc kubenswrapper[4972]: W1121 09:43:32.017807 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67ed0332_55cb_41e1_8a15_4e497706e00d.slice/crio-3b2b8813cf78e3b6fab1582c8076d11e7bb6d916381556b65fca5849e38f9378 WatchSource:0}: Error finding container 3b2b8813cf78e3b6fab1582c8076d11e7bb6d916381556b65fca5849e38f9378: Status 404 returned error can't find the container with id 3b2b8813cf78e3b6fab1582c8076d11e7bb6d916381556b65fca5849e38f9378 Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.017992 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7kbhj"] Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.033550 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gmffc"] Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.033601 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gnpln"] Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.044574 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vfq5m"] Nov 21 09:43:32 crc kubenswrapper[4972]: W1121 09:43:32.047557 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7558514b_88da_4a2a_818b_cd0cee240faa.slice/crio-83137dbc78911c2dba7c356fc8846f3eccc3fa8ba4e85f2505baff3bc2fa74f5 WatchSource:0}: Error finding container 83137dbc78911c2dba7c356fc8846f3eccc3fa8ba4e85f2505baff3bc2fa74f5: Status 404 returned error can't find the container with id 83137dbc78911c2dba7c356fc8846f3eccc3fa8ba4e85f2505baff3bc2fa74f5 Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.072771 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:32 crc kubenswrapper[4972]: E1121 09:43:32.073229 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:32.573209177 +0000 UTC m=+157.682351675 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.077434 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sz4l8"] Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.090887 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hlzb"] Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.094011 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps"] Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.130647 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9t6sj"] Nov 21 09:43:32 crc kubenswrapper[4972]: W1121 09:43:32.144079 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16627f6c_2bea_4b24_9133_a8a009620d53.slice/crio-7a6fec7826b8cf92d02ce16d304da5541e7c984d8e9b3422c21a1b8a3e1a9118 WatchSource:0}: Error finding container 7a6fec7826b8cf92d02ce16d304da5541e7c984d8e9b3422c21a1b8a3e1a9118: Status 404 returned error can't find the container with id 7a6fec7826b8cf92d02ce16d304da5541e7c984d8e9b3422c21a1b8a3e1a9118 Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.148553 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffrcv"] Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.174172 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.174228 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pm48b"] Nov 21 09:43:32 crc kubenswrapper[4972]: E1121 09:43:32.174607 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:32.674587825 +0000 UTC m=+157.783730323 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.275990 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:32 crc kubenswrapper[4972]: E1121 09:43:32.276372 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:32.776355045 +0000 UTC m=+157.885497543 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.281315 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4xjjp"] Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.295877 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kk9j4"] Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.340018 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qh9tk"] Nov 21 09:43:32 crc kubenswrapper[4972]: W1121 09:43:32.366489 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ea5bb5c_34b8_497e_9193_c3406d2f9756.slice/crio-eaad7697e50206d2087380702f19c5d58e5e4f2774f537568e9469ceee43c441 WatchSource:0}: Error finding container eaad7697e50206d2087380702f19c5d58e5e4f2774f537568e9469ceee43c441: Status 404 returned error can't find the container with id eaad7697e50206d2087380702f19c5d58e5e4f2774f537568e9469ceee43c441 Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.377945 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:32 crc kubenswrapper[4972]: E1121 09:43:32.378306 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-21 09:43:32.878291819 +0000 UTC m=+157.987434317 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.410460 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mjkx2"] Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.437924 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gb5dr"] Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.478787 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:32 crc kubenswrapper[4972]: E1121 09:43:32.479979 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:32.979949326 +0000 UTC m=+158.089091824 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.480639 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:32 crc kubenswrapper[4972]: E1121 09:43:32.480948 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:32.980941224 +0000 UTC m=+158.090083722 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.540937 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-dm986"] Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.572986 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-drf8x"] Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.582384 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:32 crc kubenswrapper[4972]: E1121 09:43:32.582529 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:33.082508698 +0000 UTC m=+158.191651196 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.582645 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:32 crc kubenswrapper[4972]: E1121 09:43:32.583010 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:33.083002492 +0000 UTC m=+158.192144990 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.642221 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc"] Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.644879 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-4nd8h" podStartSLOduration=135.644861588 podStartE2EDuration="2m15.644861588s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:32.643138339 +0000 UTC m=+157.752280867" watchObservedRunningTime="2025-11-21 09:43:32.644861588 +0000 UTC m=+157.754004086" Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.674587 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9t6sj" event={"ID":"9fa458e6-be33-42f2-94ea-16ef5b241fa8","Type":"ContainerStarted","Data":"28a6213ff7d0c8865ca0205bf38a697177e2dfd5553584de1a5e28579c54f9b0"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.684039 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" event={"ID":"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561","Type":"ContainerStarted","Data":"9fe64a5903d1624ad843f4b3996857897282cd30d14091a73e82bdebd5c6921d"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.684231 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.686423 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz4l8" event={"ID":"ae42bdbe-152c-4f34-8bcf-f2284b1a09c6","Type":"ContainerStarted","Data":"61d39a00d73b00efb4ab6a96e6d0a2c33049bf060e391c2d972e8b7323364e6d"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.687204 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gb5dr" event={"ID":"9741a397-9e67-459c-9dcd-9163fb05c6e4","Type":"ContainerStarted","Data":"8157cf411132d47718338dc33ab7458d16b7df3bd24ec90ee917cfce15d14050"} Nov 21 09:43:32 crc kubenswrapper[4972]: E1121 09:43:32.690856 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:33.190801712 +0000 UTC m=+158.299944220 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.694487 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mjkx2" event={"ID":"1380a7fd-719d-420e-8a63-bd959e4e18ab","Type":"ContainerStarted","Data":"a257f899e6df57ef8571fff49cf4afc9c707a2e579429a14968be899342c15c8"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.696205 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7kbhj" event={"ID":"127e1d7b-1e8e-492f-905c-3c0027bd1a45","Type":"ContainerStarted","Data":"f29f2c218b3ed5df5f12d1221d46517bd3b74e4bbdc1ac6cce3c622b1d724483"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.697464 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" event={"ID":"e4f03066-ed74-40ad-ac94-c9c2d83f648e","Type":"ContainerStarted","Data":"a489f56d51a3ed7682db89860a8a2eee2fdcbb5a9d93050e230d4cc355dcdf06"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.701404 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gnpln" event={"ID":"289b461d-7f4c-4c5d-99a6-ff44db300d7a","Type":"ContainerStarted","Data":"c246fa25075fe684e0ac9ec89952a4271b93668f362a7af7e2ba0ecb9e0a6bb6"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.703693 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hlzb" event={"ID":"1fc3fe65-482e-43ed-9669-7849bfc0bfd2","Type":"ContainerStarted","Data":"712c46c7d5fab2a41e280fe9030c342506e51fc640a742beec8371d4151ff5e9"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.703810 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fwcgm"] Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.705359 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswwj" event={"ID":"7d4f9796-6468-488d-ac2e-afcf480c57fc","Type":"ContainerStarted","Data":"3103cde4f8d45ef856ed5e02860bfe508f58b7e38de9ef1808ca73b10c2bbc7b"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.707138 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" event={"ID":"f11172f5-cbc2-4f41-bfb5-7cf480d8af7f","Type":"ContainerStarted","Data":"2cd56ee75f7f01baba9e359bc1f48a3f53719ee64930616d66c3601ed893647c"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.708367 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-g9znh" event={"ID":"76ca4784-e584-413d-b1cb-77f336e4f695","Type":"ContainerStarted","Data":"d7ddb4f090279be487d56724a382271980cb0be23557f67ebdecdc80f33239c7"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.709570 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2sdbs" event={"ID":"3ded0e7e-1cf4-4f3e-9907-33a003e4e5b3","Type":"ContainerStarted","Data":"ae327226601e846fd953fe5cd064e17e33dbe15e586d4baa4fd05587e220d2e9"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.726419 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pm48b" event={"ID":"d0fc9922-46d2-4700-88d7-4322397193c2","Type":"ContainerStarted","Data":"709cace29bb08d8a1b7aea2fbe75147d8a405347b1a3a90f68c6fc8a8b51ff2a"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.728445 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-77wn4" event={"ID":"7939233b-508e-485b-91ea-8b266ba6f829","Type":"ContainerStarted","Data":"e9c756fadcf7fdb65c2f72252556dc9e50b61009de9a3a78208ddfa51d9afdc0"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.729796 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-swwr5" event={"ID":"67ed0332-55cb-41e1-8a15-4e497706e00d","Type":"ContainerStarted","Data":"3b2b8813cf78e3b6fab1582c8076d11e7bb6d916381556b65fca5849e38f9378"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.731005 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4xjjp" event={"ID":"6ea5bb5c-34b8-497e-9193-c3406d2f9756","Type":"ContainerStarted","Data":"eaad7697e50206d2087380702f19c5d58e5e4f2774f537568e9469ceee43c441"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.731851 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kk9j4" event={"ID":"ac9e18c9-3efe-4b57-a2d8-09aba942b999","Type":"ContainerStarted","Data":"202d022d06a946dc3c1e15fa83f0ffdc78767d638e58b53b9a559297fe919cdd"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.732784 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gmffc" event={"ID":"16627f6c-2bea-4b24-9133-a8a009620d53","Type":"ContainerStarted","Data":"7a6fec7826b8cf92d02ce16d304da5541e7c984d8e9b3422c21a1b8a3e1a9118"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.733731 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vfq5m" event={"ID":"e4478a49-a8d5-4922-b9b6-c749c64697a6","Type":"ContainerStarted","Data":"6290d8c52146e29e36f7aab8cbf1cab706c74be07a1a1f073994b75858531b54"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.734990 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm" event={"ID":"58f8e920-8907-4555-80dc-c00b2af7c80a","Type":"ContainerStarted","Data":"c45e8dc5cef0c78ac8a16d15f3eeed57dc82b541f9ec23f194e8fe78661c936f"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.736332 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" event={"ID":"6068d0a6-b0b7-44af-8dcd-995d728bf03a","Type":"ContainerStarted","Data":"d8439ab6c9eadcb01eea65f5c542deb3144152721c1f235d81724e1315c87d87"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.736620 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.737194 4972 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-b5tdm" event={"ID":"5df7d34a-5265-4653-8525-68dc1e2109fd","Type":"ContainerStarted","Data":"fee4ff3a870373407a7626097a00eef04c6ce50aa7c06c33beb25d0ea8bff5f3"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.738255 4972 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-n6wh5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.738289 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" podUID="6068d0a6-b0b7-44af-8dcd-995d728bf03a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.738737 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bg74p" event={"ID":"e2c40c04-af4a-4f75-ad23-7366287447bf","Type":"ContainerStarted","Data":"915f4cb323562cce55f327747c4e2d955c411a0f93b8150484a27378d31fbf10"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.739536 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sgptm" event={"ID":"8238ee9f-1408-49ba-8a8f-961adc1488b8","Type":"ContainerStarted","Data":"01467d86cbb05c6ed7c4fc426ff5ea58c30ff245679602fb370e721b9313a562"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.740614 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps" event={"ID":"ac83f439-8fd3-4813-b348-bdea75672000","Type":"ContainerStarted","Data":"0c9948d6eca6d78e1cbd52accca5a84d76fa73e985a945d992482d40b59abd87"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.741954 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgd8m" event={"ID":"7558514b-88da-4a2a-818b-cd0cee240faa","Type":"ContainerStarted","Data":"83137dbc78911c2dba7c356fc8846f3eccc3fa8ba4e85f2505baff3bc2fa74f5"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.742944 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fd2p7" event={"ID":"e7bdccc3-c26f-4d11-a892-caf246a8630f","Type":"ContainerStarted","Data":"e3a742875116ca34a0dfe54a289783ab36041ecae3ceb72a8d4edaab56baeab4"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.745157 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zdj2c" event={"ID":"f7d0d69c-8687-4e4f-9069-7db996719dab","Type":"ContainerStarted","Data":"9aa0b30b4b126e250faab67359d877c2b14b3438a55b9054daaaacca75162793"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.745955 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrxfz" event={"ID":"31dd0dd8-9279-46ab-83bf-92282256204b","Type":"ContainerStarted","Data":"6cae68fa90badf5058f38ae662756d3bed0fb16cb0a438ee9bf26ef74346dcaf"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.747392 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffrcv" event={"ID":"cd926b1a-e534-4f79-ab19-afca2be13183","Type":"ContainerStarted","Data":"ed061a60d898bfd7eec5a0c3076c5b96d25592d573add011c7a62aa841c24ea8"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.748871 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" event={"ID":"e9280ad8-85ad-4faa-a025-a021e417e522","Type":"ContainerStarted","Data":"c96e3aaacc68e893520835b64cd129d2f8e0f79567aad007cc9e931313968197"} Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.749492 4972 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-hrxkw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.749555 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" podUID="a33da252-8a42-4fb1-8663-b4046881cae0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.791553 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:32 crc kubenswrapper[4972]: E1121 09:43:32.792071 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:33.292054376 +0000 UTC m=+158.401196874 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:32 crc kubenswrapper[4972]: W1121 09:43:32.840789 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fc91391_3c93_4fe0_9c24_f8aad9c21fd2.slice/crio-aca76d3f479448d1c69c197124947fcb916ce62a15d42ee55b3290ecd732ea52 WatchSource:0}: Error finding container aca76d3f479448d1c69c197124947fcb916ce62a15d42ee55b3290ecd732ea52: Status 404 returned error can't find the container with id aca76d3f479448d1c69c197124947fcb916ce62a15d42ee55b3290ecd732ea52 Nov 21 09:43:32 crc kubenswrapper[4972]: W1121 09:43:32.850090 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6597c90_c4ee_4856_b03f_f0fa1d3062f5.slice/crio-afe52bf97a3f33c45093066167ddc780bcfe248f927d9191835cdf3b1ccbed40 WatchSource:0}: Error finding container afe52bf97a3f33c45093066167ddc780bcfe248f927d9191835cdf3b1ccbed40: Status 404 returned error can't find the container with id afe52bf97a3f33c45093066167ddc780bcfe248f927d9191835cdf3b1ccbed40 Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.893219 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:32 crc kubenswrapper[4972]: E1121 09:43:32.893624 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:33.393598639 +0000 UTC m=+158.502741147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.894296 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:32 crc kubenswrapper[4972]: E1121 09:43:32.894649 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-21 09:43:33.394634649 +0000 UTC m=+158.503777147 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:32 crc kubenswrapper[4972]: I1121 09:43:32.995803 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:32 crc kubenswrapper[4972]: E1121 09:43:32.996153 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:33.496133731 +0000 UTC m=+158.605276239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.097683 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:33 crc kubenswrapper[4972]: E1121 09:43:33.098174 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:33.598136837 +0000 UTC m=+158.707279345 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.121366 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" podStartSLOduration=136.121342856 podStartE2EDuration="2m16.121342856s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:33.120811301 +0000 UTC m=+158.229953819" watchObservedRunningTime="2025-11-21 09:43:33.121342856 +0000 UTC m=+158.230485354" Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.166181 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" podStartSLOduration=135.166163878 podStartE2EDuration="2m15.166163878s" podCreationTimestamp="2025-11-21 09:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:33.164596054 +0000 UTC m=+158.273738572" watchObservedRunningTime="2025-11-21 09:43:33.166163878 +0000 UTC m=+158.275306376" Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.198243 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:33 crc kubenswrapper[4972]: E1121 09:43:33.198784 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:33.698762424 +0000 UTC m=+158.807904922 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.200793 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-jdqq6" podStartSLOduration=136.20074735 podStartE2EDuration="2m16.20074735s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:33.198124606 +0000 UTC m=+158.307267114" watchObservedRunningTime="2025-11-21 09:43:33.20074735 +0000 UTC m=+158.309889848" Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.239428 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lswwj" podStartSLOduration=136.239408038 podStartE2EDuration="2m16.239408038s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:33.239173421 +0000 UTC m=+158.348315929" watchObservedRunningTime="2025-11-21 09:43:33.239408038 +0000 UTC m=+158.348550536" Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.286079 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2sdbs" podStartSLOduration=136.286061082 podStartE2EDuration="2m16.286061082s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:33.284361414 +0000 UTC m=+158.393503922" watchObservedRunningTime="2025-11-21 09:43:33.286061082 +0000 UTC m=+158.395203580" Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.300020 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:33 crc kubenswrapper[4972]: E1121 09:43:33.300296 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:33.800285426 +0000 UTC m=+158.909427924 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.320174 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-j7xxl" podStartSLOduration=136.32015205 podStartE2EDuration="2m16.32015205s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:33.31802408 +0000 UTC m=+158.427166608" watchObservedRunningTime="2025-11-21 09:43:33.32015205 +0000 UTC m=+158.429294548" Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.401560 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:33 crc kubenswrapper[4972]: E1121 09:43:33.401860 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:33.9018451 +0000 UTC m=+159.010987598 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.508125 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:33 crc kubenswrapper[4972]: E1121 09:43:33.508676 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:34.008662993 +0000 UTC m=+159.117805491 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.609504 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:33 crc kubenswrapper[4972]: E1121 09:43:33.609667 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:34.10964413 +0000 UTC m=+159.218786628 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.609980 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:33 crc kubenswrapper[4972]: E1121 09:43:33.610383 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:34.110333439 +0000 UTC m=+159.219475937 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.710793 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:33 crc kubenswrapper[4972]: E1121 09:43:33.711461 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:34.21143608 +0000 UTC m=+159.320578568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.711591 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:33 crc kubenswrapper[4972]: E1121 09:43:33.711954 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:34.211947605 +0000 UTC m=+159.321090103 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.766226 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-swwr5" Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.766281 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz4l8" event={"ID":"ae42bdbe-152c-4f34-8bcf-f2284b1a09c6","Type":"ContainerStarted","Data":"b2a3dde4dc3f0fb293197a0364585ff62af8e719d0bf07d618c07942b1300725"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.766313 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc" event={"ID":"5fc91391-3c93-4fe0-9c24-f8aad9c21fd2","Type":"ContainerStarted","Data":"aca76d3f479448d1c69c197124947fcb916ce62a15d42ee55b3290ecd732ea52"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.766324 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps" event={"ID":"ac83f439-8fd3-4813-b348-bdea75672000","Type":"ContainerStarted","Data":"7102b33b0bbb23b5ed667ffb86fa1d49904239fb0ea0faddf9cbdce4ac4968ee"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.766358 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgd8m" event={"ID":"7558514b-88da-4a2a-818b-cd0cee240faa","Type":"ContainerStarted","Data":"7ee05aac8310701f3148dd42df84f4c897c1cce57e0bfe6bf2f677a12d712c68"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.766372 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gmffc" event={"ID":"16627f6c-2bea-4b24-9133-a8a009620d53","Type":"ContainerStarted","Data":"da762a5f17b21e8a5ae7b79d2e8ed159bab53c73ad530f72ecb8fe46b0abd4af"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.766382 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-swwr5" event={"ID":"67ed0332-55cb-41e1-8a15-4e497706e00d","Type":"ContainerStarted","Data":"ffbcce0376856f19fa35c76542b1a93d34469bb8f644efcbb58b87ccbf0af1cf"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.770247 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.770286 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.774178 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pm48b" event={"ID":"d0fc9922-46d2-4700-88d7-4322397193c2","Type":"ContainerStarted","Data":"4f1811b554be3212345342ff096eefc637d4a507f018fe4c3c279592a1ff883c"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.775274 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fd2p7" event={"ID":"e7bdccc3-c26f-4d11-a892-caf246a8630f","Type":"ContainerStarted","Data":"7722ddec030455717a209d165ec80b77431e1a46073e76ce80339fd6ecefdae5"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.776821 4972 generic.go:334] "Generic (PLEG): container finished" podID="0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561" containerID="dfe82abe3bb9415840f001a11eedd27b36ba24546be844271e993c1c273e7cac" exitCode=0 Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.776903 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" event={"ID":"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561","Type":"ContainerDied","Data":"dfe82abe3bb9415840f001a11eedd27b36ba24546be844271e993c1c273e7cac"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.779669 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-77wn4" event={"ID":"7939233b-508e-485b-91ea-8b266ba6f829","Type":"ContainerStarted","Data":"05b514425f08805295f7c131c10c67da05700bac3d7b9847a04e15dc498d9ca2"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.781041 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgd8m" podStartSLOduration=136.781021246 podStartE2EDuration="2m16.781021246s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:33.780987765 +0000 UTC m=+158.890130263" watchObservedRunningTime="2025-11-21 09:43:33.781021246 +0000 UTC m=+158.890163744" Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.781122 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-b5tdm" event={"ID":"5df7d34a-5265-4653-8525-68dc1e2109fd","Type":"ContainerStarted","Data":"fe3a18da8cdd9567d3673017c5d7db42bd8c0ff16551400d13d0074c5916219d"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.782854 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sgptm" event={"ID":"8238ee9f-1408-49ba-8a8f-961adc1488b8","Type":"ContainerStarted","Data":"31f8b170606e6a17a926b0725adaa792e5fa5123efcc978fe5bd5bf229045860"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.789747 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm" event={"ID":"58f8e920-8907-4555-80dc-c00b2af7c80a","Type":"ContainerStarted","Data":"487b752e57a49feacd3d28c686021af99e6257a7a8727d0bb5b96bdbff314b16"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.790581 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm" Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.792798 4972 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-sqxsm container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure 
output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.792883 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm" podUID="58f8e920-8907-4555-80dc-c00b2af7c80a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.798506 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-swwr5" podStartSLOduration=136.798482622 podStartE2EDuration="2m16.798482622s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:33.79703014 +0000 UTC m=+158.906172658" watchObservedRunningTime="2025-11-21 09:43:33.798482622 +0000 UTC m=+158.907625120" Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.805634 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dm986" event={"ID":"84b5c77c-f0dd-4457-9333-2ed1ae64baa1","Type":"ContainerStarted","Data":"071ae1b181463bcc9cdeb8a8b89b349734223e3919e450bf56c7562ac37295a5"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.808807 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" event={"ID":"e9280ad8-85ad-4faa-a025-a021e417e522","Type":"ContainerStarted","Data":"86e755517447eba118194bf20ff2979ffb82a3d78a41b59b05698c90448ab299"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.808852 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.812844 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:33 crc kubenswrapper[4972]: E1121 09:43:33.813246 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:34.31322555 +0000 UTC m=+159.422368048 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.814537 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm" podStartSLOduration=135.814526797 podStartE2EDuration="2m15.814526797s" podCreationTimestamp="2025-11-21 09:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:33.813605311 +0000 UTC m=+158.922747809" watchObservedRunningTime="2025-11-21 09:43:33.814526797 +0000 UTC m=+158.923669295" Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.814907 4972 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-w2c2r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.814972 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" podUID="e9280ad8-85ad-4faa-a025-a021e417e522" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.820921 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" event={"ID":"12c25e78-a24e-4962-8976-3bc097fdaaf6","Type":"ContainerStarted","Data":"9d01636f881474da9e541d1fce8881349ba404e9db35a5bcc1edc26ae9a67828"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.829518 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrxfz" event={"ID":"31dd0dd8-9279-46ab-83bf-92282256204b","Type":"ContainerStarted","Data":"4bad0ec333e64c171bd062d3369160b754e31613b943ad8cf386c7f261d3a733"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.833540 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gnpln" event={"ID":"289b461d-7f4c-4c5d-99a6-ff44db300d7a","Type":"ContainerStarted","Data":"ee9e20a074e2ae14a72b12da20c1999b8f566e5dd515452b07541436871d7e4c"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.836474 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" event={"ID":"f11172f5-cbc2-4f41-bfb5-7cf480d8af7f","Type":"ContainerStarted","Data":"b63286d97d73dae78ca0804d4ab484a93d8dd12bb144415a0706f6d062054c6d"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.844595 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hlzb" event={"ID":"1fc3fe65-482e-43ed-9669-7849bfc0bfd2","Type":"ContainerStarted","Data":"2776accfaa34cfa805bc5b20870e41066682ca582abfc6f58360ab58ade61ecd"} Nov 21 
09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.848971 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-g9znh" event={"ID":"76ca4784-e584-413d-b1cb-77f336e4f695","Type":"ContainerStarted","Data":"c54b53711a913ff1f9a757ba182a447e4421b9da6264ff1640f1b012a7b54ebb"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.853091 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-drf8x" event={"ID":"3b724f1a-56bb-4151-8740-29fdd824a900","Type":"ContainerStarted","Data":"45d3ed133af5ca34f31a7d15ade6e01593dd2373654e3bd9867b37328159a818"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.855137 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" event={"ID":"a6597c90-c4ee-4856-b03f-f0fa1d3062f5","Type":"ContainerStarted","Data":"afe52bf97a3f33c45093066167ddc780bcfe248f927d9191835cdf3b1ccbed40"} Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.856301 4972 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-n6wh5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.856341 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" podUID="6068d0a6-b0b7-44af-8dcd-995d728bf03a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.861856 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" podStartSLOduration=136.861789179 podStartE2EDuration="2m16.861789179s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:33.861403648 +0000 UTC m=+158.970546156" watchObservedRunningTime="2025-11-21 09:43:33.861789179 +0000 UTC m=+158.970931677" Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.882570 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrxfz" podStartSLOduration=136.882554049 podStartE2EDuration="2m16.882554049s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:33.880347626 +0000 UTC m=+158.989490124" watchObservedRunningTime="2025-11-21 09:43:33.882554049 +0000 UTC m=+158.991696547" Nov 21 09:43:33 crc kubenswrapper[4972]: I1121 09:43:33.903058 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-g9znh" podStartSLOduration=136.903036 podStartE2EDuration="2m16.903036s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:33.901370533 +0000 UTC m=+159.010513051" watchObservedRunningTime="2025-11-21 09:43:33.903036 +0000 UTC m=+159.012178498" Nov 21 09:43:33 crc 
kubenswrapper[4972]: I1121 09:43:33.914430 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:33 crc kubenswrapper[4972]: E1121 09:43:33.916100 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:34.416088861 +0000 UTC m=+159.525231359 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.015469 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:34 crc kubenswrapper[4972]: E1121 09:43:34.015701 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:34.515664918 +0000 UTC m=+159.624807426 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.016029 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:34 crc kubenswrapper[4972]: E1121 09:43:34.017805 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:34.517789068 +0000 UTC m=+159.626931766 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.117306 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:34 crc kubenswrapper[4972]: E1121 09:43:34.117758 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:34.617738116 +0000 UTC m=+159.726880614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.219023 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:34 crc kubenswrapper[4972]: E1121 09:43:34.219348 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:34.71931701 +0000 UTC m=+159.828459508 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.320488 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:34 crc kubenswrapper[4972]: E1121 09:43:34.320951 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:34.820930075 +0000 UTC m=+159.930072573 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.422412 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:34 crc kubenswrapper[4972]: E1121 09:43:34.423006 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:34.922987733 +0000 UTC m=+160.032130231 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.523040 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:34 crc kubenswrapper[4972]: E1121 09:43:34.523536 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:35.023519948 +0000 UTC m=+160.132662446 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.624245 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:34 crc kubenswrapper[4972]: E1121 09:43:34.624532 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:35.124521015 +0000 UTC m=+160.233663513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.626525 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-g9znh" Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.628887 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.628934 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.725981 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:34 crc kubenswrapper[4972]: E1121 09:43:34.726720 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:35.226701826 +0000 UTC m=+160.335844324 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.828025 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:34 crc kubenswrapper[4972]: E1121 09:43:34.828634 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:35.32862001 +0000 UTC m=+160.437762508 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.862381 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-drf8x" event={"ID":"3b724f1a-56bb-4151-8740-29fdd824a900","Type":"ContainerStarted","Data":"efbae1db9a73bfba1c65361d74e4fe8a885b19046bc471f84dcaa669e4d0cc3f"} Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.865784 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc" event={"ID":"5fc91391-3c93-4fe0-9c24-f8aad9c21fd2","Type":"ContainerStarted","Data":"c770f2b39614924c55c37a5e6f1314439f648f8b6a36680aa10924ad5d983fba"} Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.868137 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9t6sj" event={"ID":"9fa458e6-be33-42f2-94ea-16ef5b241fa8","Type":"ContainerStarted","Data":"1e569cde61737700231ad39cc34f13e36573b84cb458541244d0406adfd1fd7c"} Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.870089 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dm986" event={"ID":"84b5c77c-f0dd-4457-9333-2ed1ae64baa1","Type":"ContainerStarted","Data":"e93f09d940c402850ea27c71978daa6552e70f9d43f499a268b2dc99f52022e3"} Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.872033 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fd2p7" event={"ID":"e7bdccc3-c26f-4d11-a892-caf246a8630f","Type":"ContainerStarted","Data":"7fa809c75e72f7c1451bbd4e7ca17d28ef392b9f986df32a8d068c8fe4eae727"} Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.873915 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vfq5m" event={"ID":"e4478a49-a8d5-4922-b9b6-c749c64697a6","Type":"ContainerStarted","Data":"0f92cedd86b47504735e3a0ba8452062382ca8bc7c25931ef179125f379561ae"} Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.876283 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bg74p" event={"ID":"e2c40c04-af4a-4f75-ad23-7366287447bf","Type":"ContainerStarted","Data":"6ee3f99e638fcdeb49f57beebdc0cb22a80a86dd1228bd4c18cbd61e0f3635cc"} Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.877791 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4xjjp" event={"ID":"6ea5bb5c-34b8-497e-9193-c3406d2f9756","Type":"ContainerStarted","Data":"884889ae1563af65570881e385edaf79cfbde94ce99a7c6e1edeeaa40c316269"} Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.891704 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kk9j4" event={"ID":"ac9e18c9-3efe-4b57-a2d8-09aba942b999","Type":"ContainerStarted","Data":"f0dfaf73ffecfb5417c773692aa4c91d86f00f935cf156a426625cb962f33d51"} Nov 21 09:43:34 crc 
kubenswrapper[4972]: I1121 09:43:34.894653 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7kbhj" event={"ID":"127e1d7b-1e8e-492f-905c-3c0027bd1a45","Type":"ContainerStarted","Data":"6cb390a2a30c4cb04e9625e2a0915b596c91c4d337b547765a8d3d4f0a252304"} Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.897223 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" event={"ID":"e4f03066-ed74-40ad-ac94-c9c2d83f648e","Type":"ContainerStarted","Data":"6a9eec55fc19202c14c0da5b0c79518d177695b5299d951d0f4fa80a7fe830c6"} Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.899139 4972 generic.go:334] "Generic (PLEG): container finished" podID="8238ee9f-1408-49ba-8a8f-961adc1488b8" containerID="31f8b170606e6a17a926b0725adaa792e5fa5123efcc978fe5bd5bf229045860" exitCode=0 Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.899195 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sgptm" event={"ID":"8238ee9f-1408-49ba-8a8f-961adc1488b8","Type":"ContainerDied","Data":"31f8b170606e6a17a926b0725adaa792e5fa5123efcc978fe5bd5bf229045860"} Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.900615 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mjkx2" event={"ID":"1380a7fd-719d-420e-8a63-bd959e4e18ab","Type":"ContainerStarted","Data":"897053e792e044887b9842cb26351c6bb3e99c8edb2cad31b4a5507aad9a349d"} Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.905219 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffrcv" event={"ID":"cd926b1a-e534-4f79-ab19-afca2be13183","Type":"ContainerStarted","Data":"5b9d64fd960eda155670f4b8515ba976f00a2b82845748cb92d026e81081771a"} Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.905274 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-b5tdm" Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.905398 4972 patch_prober.go:28] interesting pod/console-operator-58897d9998-b5tdm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.905436 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-b5tdm" podUID="5df7d34a-5265-4653-8525-68dc1e2109fd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.905455 4972 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-w2c2r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.905502 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" podUID="e9280ad8-85ad-4faa-a025-a021e417e522" containerName="oauth-openshift" 
probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.905956 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pm48b" Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.906288 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.906322 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.906587 4972 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-sqxsm container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.906641 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm" podUID="58f8e920-8907-4555-80dc-c00b2af7c80a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.907183 4972 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-pm48b container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.907225 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pm48b" podUID="d0fc9922-46d2-4700-88d7-4322397193c2" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.929124 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:34 crc kubenswrapper[4972]: E1121 09:43:34.929263 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:35.429235237 +0000 UTC m=+160.538377745 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.929646 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:34 crc kubenswrapper[4972]: E1121 09:43:34.930300 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:35.430280067 +0000 UTC m=+160.539422565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.935067 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-b5tdm" podStartSLOduration=137.935050452 podStartE2EDuration="2m17.935050452s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:34.935040422 +0000 UTC m=+160.044182940" watchObservedRunningTime="2025-11-21 09:43:34.935050452 +0000 UTC m=+160.044192960" Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.995748 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hlzb" podStartSLOduration=137.995724395 podStartE2EDuration="2m17.995724395s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:34.953090204 +0000 UTC m=+160.062232722" watchObservedRunningTime="2025-11-21 09:43:34.995724395 +0000 UTC m=+160.104866893" Nov 21 09:43:34 crc kubenswrapper[4972]: I1121 09:43:34.997383 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pm48b" podStartSLOduration=136.997373952 podStartE2EDuration="2m16.997373952s" podCreationTimestamp="2025-11-21 09:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:34.99626059 +0000 UTC m=+160.105403108" watchObservedRunningTime="2025-11-21 09:43:34.997373952 +0000 UTC m=+160.106516450" Nov 21 09:43:35 crc 
kubenswrapper[4972]: I1121 09:43:35.030685 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.031015 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-n8mj7" podStartSLOduration=138.030996886 podStartE2EDuration="2m18.030996886s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:35.029187865 +0000 UTC m=+160.138330363" watchObservedRunningTime="2025-11-21 09:43:35.030996886 +0000 UTC m=+160.140139384" Nov 21 09:43:35 crc kubenswrapper[4972]: E1121 09:43:35.032877 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:35.532850469 +0000 UTC m=+160.641993017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.066933 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ffrcv" podStartSLOduration=138.066915496 podStartE2EDuration="2m18.066915496s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:35.061359298 +0000 UTC m=+160.170501806" watchObservedRunningTime="2025-11-21 09:43:35.066915496 +0000 UTC m=+160.176057994" Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.111090 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps" podStartSLOduration=137.11106999 podStartE2EDuration="2m17.11106999s" podCreationTimestamp="2025-11-21 09:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:35.11003171 +0000 UTC m=+160.219174228" watchObservedRunningTime="2025-11-21 09:43:35.11106999 +0000 UTC m=+160.220212498" Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.128771 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gnpln" podStartSLOduration=138.128753402 podStartE2EDuration="2m18.128753402s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:35.127153517 +0000 UTC 
m=+160.236296015" watchObservedRunningTime="2025-11-21 09:43:35.128753402 +0000 UTC m=+160.237895900" Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.135687 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:35 crc kubenswrapper[4972]: E1121 09:43:35.136064 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:35.636052449 +0000 UTC m=+160.745194947 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.237462 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:35 crc kubenswrapper[4972]: E1121 09:43:35.237586 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:35.737557901 +0000 UTC m=+160.846700409 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.238097 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:35 crc kubenswrapper[4972]: E1121 09:43:35.238400 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:35.738385685 +0000 UTC m=+160.847528183 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.339270 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:35 crc kubenswrapper[4972]: E1121 09:43:35.339419 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:35.839395013 +0000 UTC m=+160.948537521 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.339657 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:35 crc kubenswrapper[4972]: E1121 09:43:35.340035 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:35.840025371 +0000 UTC m=+160.949167869 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.440570 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:35 crc kubenswrapper[4972]: E1121 09:43:35.440741 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:35.940709739 +0000 UTC m=+161.049852237 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.440969 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:35 crc kubenswrapper[4972]: E1121 09:43:35.441290 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:35.941277145 +0000 UTC m=+161.050419643 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.542660 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:35 crc kubenswrapper[4972]: E1121 09:43:35.542853 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:36.042809778 +0000 UTC m=+161.151952276 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.542959 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:35 crc kubenswrapper[4972]: E1121 09:43:35.543306 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:36.043294192 +0000 UTC m=+161.152436760 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.627919 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.627972 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.643779 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:35 crc kubenswrapper[4972]: E1121 09:43:35.643942 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:36.143913889 +0000 UTC m=+161.253056377 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.644091 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:35 crc kubenswrapper[4972]: E1121 09:43:35.644395 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:36.144387622 +0000 UTC m=+161.253530120 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.744734 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:35 crc kubenswrapper[4972]: E1121 09:43:35.744925 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:36.244899456 +0000 UTC m=+161.354041954 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.745421 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:35 crc kubenswrapper[4972]: E1121 09:43:35.745843 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:36.245809112 +0000 UTC m=+161.354951650 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.846995 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:35 crc kubenswrapper[4972]: E1121 09:43:35.847356 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:36.347343105 +0000 UTC m=+161.456485603 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.911507 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kk9j4" event={"ID":"ac9e18c9-3efe-4b57-a2d8-09aba942b999","Type":"ContainerStarted","Data":"67487678a6c9e368479427e2c97cb19a571712bc92467582d3236abdcac38679"} Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.915502 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" event={"ID":"12c25e78-a24e-4962-8976-3bc097fdaaf6","Type":"ContainerStarted","Data":"dc0bddab4688a01230186b1bca64475a3c514a2f08f9928066ce34878f9daf14"} Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.917450 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zdj2c" event={"ID":"f7d0d69c-8687-4e4f-9069-7db996719dab","Type":"ContainerStarted","Data":"ba4d87d6784062438cb583b478f46a44bac24b267092e1332998ce724188171e"} Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.919673 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gmffc" event={"ID":"16627f6c-2bea-4b24-9133-a8a009620d53","Type":"ContainerStarted","Data":"553fe1319aec1dd738cbd4dd9347a97ad57ae999a1ce76ade6e4e758f2970a79"} Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.921575 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz4l8" event={"ID":"ae42bdbe-152c-4f34-8bcf-f2284b1a09c6","Type":"ContainerStarted","Data":"4b398d97b8a512d0b69b488bfef3450ad5082eaf67a7ba2344ee5986c1485e46"} Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.924371 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-9t6sj" event={"ID":"9fa458e6-be33-42f2-94ea-16ef5b241fa8","Type":"ContainerStarted","Data":"9eb7ccc8527c9936aeeae450b33a9f0d5e5899b61e8398f05d6491802d34a243"} Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.926281 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gb5dr" event={"ID":"9741a397-9e67-459c-9dcd-9163fb05c6e4","Type":"ContainerStarted","Data":"f65bf8efee0748cc3173625bd7eeb514f26c056076d0138b30f300bfe11793fb"} Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.926330 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gb5dr" event={"ID":"9741a397-9e67-459c-9dcd-9163fb05c6e4","Type":"ContainerStarted","Data":"07db780f7256da22e354bc2bc936336ae2a5f015ca06578be70c0520dab0022d"} Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.926352 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gb5dr" Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.927919 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mjkx2" event={"ID":"1380a7fd-719d-420e-8a63-bd959e4e18ab","Type":"ContainerStarted","Data":"55028cd650be71f248d7dde1e2585ab458cef83646ec05f1e2a4b49bfa1d92be"} Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.929631 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" event={"ID":"0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561","Type":"ContainerStarted","Data":"ffe68db40b61abdc4767e4758d9a6d78a7edae43805ea73f1736e1fa8bc03bf7"} Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.931299 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-77wn4" event={"ID":"7939233b-508e-485b-91ea-8b266ba6f829","Type":"ContainerStarted","Data":"43816bbee78dfe79e0991d8ebf43d972e2bab63c42739fb18652f246643dd87f"} Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.931435 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-77wn4" Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.933135 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sgptm" event={"ID":"8238ee9f-1408-49ba-8a8f-961adc1488b8","Type":"ContainerStarted","Data":"abbd4a20d8ca7f7b8c487156f4a14037a506b25ec29ca3de7d8ac858bd3f09ea"} Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.933811 4972 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-sqxsm container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.933871 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.933916 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.933943 4972 patch_prober.go:28] interesting pod/console-operator-58897d9998-b5tdm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.933872 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm" podUID="58f8e920-8907-4555-80dc-c00b2af7c80a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.933987 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-b5tdm" podUID="5df7d34a-5265-4653-8525-68dc1e2109fd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.934168 4972 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-pm48b container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.934196 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pm48b" podUID="d0fc9922-46d2-4700-88d7-4322397193c2" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.934853 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sgptm" Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.948328 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:35 crc kubenswrapper[4972]: E1121 09:43:35.948694 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:36.448679682 +0000 UTC m=+161.557822180 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.965857 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kk9j4" podStartSLOduration=138.965818739 podStartE2EDuration="2m18.965818739s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:35.940097799 +0000 UTC m=+161.049240307" watchObservedRunningTime="2025-11-21 09:43:35.965818739 +0000 UTC m=+161.074961237" Nov 21 09:43:35 crc kubenswrapper[4972]: I1121 09:43:35.999093 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-4xjjp" podStartSLOduration=137.999075713 podStartE2EDuration="2m17.999075713s" podCreationTimestamp="2025-11-21 09:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:35.969488993 +0000 UTC m=+161.078631511" watchObservedRunningTime="2025-11-21 09:43:35.999075713 +0000 UTC m=+161.108218211" Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.002348 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vfq5m" podStartSLOduration=139.002334616 podStartE2EDuration="2m19.002334616s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:35.998199798 +0000 UTC m=+161.107342306" watchObservedRunningTime="2025-11-21 09:43:36.002334616 +0000 UTC m=+161.111477114" Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.047809 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" podStartSLOduration=138.047787796 podStartE2EDuration="2m18.047787796s" podCreationTimestamp="2025-11-21 09:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:36.04511681 +0000 UTC m=+161.154259318" watchObservedRunningTime="2025-11-21 09:43:36.047787796 +0000 UTC m=+161.156930294" Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.050530 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:36 crc kubenswrapper[4972]: E1121 09:43:36.051960 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-21 09:43:36.551935054 +0000 UTC m=+161.661077552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.055300 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:36 crc kubenswrapper[4972]: E1121 09:43:36.055665 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:36.555648969 +0000 UTC m=+161.664791547 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.055913 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.056043 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.063981 4972 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-kj5r8 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.064050 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" podUID="0e05f644-87b8-4b3b-b5c4-6bc8fe8f6561" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.100540 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-zdj2c" podStartSLOduration=139.100520553 podStartE2EDuration="2m19.100520553s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:36.069478822 +0000 UTC m=+161.178621340" watchObservedRunningTime="2025-11-21 09:43:36.100520553 +0000 UTC m=+161.209663051" Nov 21 09:43:36 crc 
kubenswrapper[4972]: I1121 09:43:36.101383 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sgptm" podStartSLOduration=139.101374598 podStartE2EDuration="2m19.101374598s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:36.099240757 +0000 UTC m=+161.208383265" watchObservedRunningTime="2025-11-21 09:43:36.101374598 +0000 UTC m=+161.210517096" Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.119903 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gmffc" podStartSLOduration=139.119882883 podStartE2EDuration="2m19.119882883s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:36.116550198 +0000 UTC m=+161.225692716" watchObservedRunningTime="2025-11-21 09:43:36.119882883 +0000 UTC m=+161.229025391" Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.143460 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gb5dr" podStartSLOduration=138.143438852 podStartE2EDuration="2m18.143438852s" podCreationTimestamp="2025-11-21 09:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:36.138497352 +0000 UTC m=+161.247639860" watchObservedRunningTime="2025-11-21 09:43:36.143438852 +0000 UTC m=+161.252581360" Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.157303 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:36 crc kubenswrapper[4972]: E1121 09:43:36.157437 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:36.657418439 +0000 UTC m=+161.766560937 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.157727 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:36 crc kubenswrapper[4972]: E1121 09:43:36.158035 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:36.658026986 +0000 UTC m=+161.767169484 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.163014 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-9t6sj" podStartSLOduration=138.162999977 podStartE2EDuration="2m18.162999977s" podCreationTimestamp="2025-11-21 09:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:36.162286187 +0000 UTC m=+161.271428695" watchObservedRunningTime="2025-11-21 09:43:36.162999977 +0000 UTC m=+161.272142475" Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.192520 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7kbhj" podStartSLOduration=139.192503785 podStartE2EDuration="2m19.192503785s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:36.191112816 +0000 UTC m=+161.300255314" watchObservedRunningTime="2025-11-21 09:43:36.192503785 +0000 UTC m=+161.301646283" Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.215581 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dm986" podStartSLOduration=138.21556252 podStartE2EDuration="2m18.21556252s" podCreationTimestamp="2025-11-21 09:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:36.209880368 +0000 UTC m=+161.319022876" watchObservedRunningTime="2025-11-21 09:43:36.21556252 +0000 UTC m=+161.324705028" Nov 21 09:43:36 
crc kubenswrapper[4972]: I1121 09:43:36.250759 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" podStartSLOduration=139.250734738 podStartE2EDuration="2m19.250734738s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:36.247684602 +0000 UTC m=+161.356827120" watchObservedRunningTime="2025-11-21 09:43:36.250734738 +0000 UTC m=+161.359877236" Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.260441 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:36 crc kubenswrapper[4972]: E1121 09:43:36.260677 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:36.76065835 +0000 UTC m=+161.869800848 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.260847 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:36 crc kubenswrapper[4972]: E1121 09:43:36.261135 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:36.761128313 +0000 UTC m=+161.870270801 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.278922 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz4l8" podStartSLOduration=139.278895717 podStartE2EDuration="2m19.278895717s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:36.273996139 +0000 UTC m=+161.383138657" watchObservedRunningTime="2025-11-21 09:43:36.278895717 +0000 UTC m=+161.388038215" Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.293242 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-77wn4" podStartSLOduration=8.293220744 podStartE2EDuration="8.293220744s" podCreationTimestamp="2025-11-21 09:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:36.292427471 +0000 UTC m=+161.401569979" watchObservedRunningTime="2025-11-21 09:43:36.293220744 +0000 UTC m=+161.402363242" Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.311039 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-bg74p" podStartSLOduration=8.311017909 podStartE2EDuration="8.311017909s" podCreationTimestamp="2025-11-21 09:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:36.30822717 +0000 UTC m=+161.417369678" watchObservedRunningTime="2025-11-21 09:43:36.311017909 +0000 UTC m=+161.420160417" Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.332738 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc" podStartSLOduration=138.332714345 podStartE2EDuration="2m18.332714345s" podCreationTimestamp="2025-11-21 09:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:36.3307615 +0000 UTC m=+161.439904008" watchObservedRunningTime="2025-11-21 09:43:36.332714345 +0000 UTC m=+161.441856843" Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.348068 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-drf8x" podStartSLOduration=8.34805011 podStartE2EDuration="8.34805011s" podCreationTimestamp="2025-11-21 09:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:36.345053995 +0000 UTC m=+161.454196513" watchObservedRunningTime="2025-11-21 09:43:36.34805011 +0000 UTC m=+161.457192608" Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.362203 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:36 crc kubenswrapper[4972]: E1121 09:43:36.362649 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:36.862632695 +0000 UTC m=+161.971775193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.410819 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" podStartSLOduration=138.410799942 podStartE2EDuration="2m18.410799942s" podCreationTimestamp="2025-11-21 09:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:36.405880722 +0000 UTC m=+161.515023230" watchObservedRunningTime="2025-11-21 09:43:36.410799942 +0000 UTC m=+161.519942440" Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.455482 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mjkx2" podStartSLOduration=139.45546042 podStartE2EDuration="2m19.45546042s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:36.452783394 +0000 UTC m=+161.561925892" watchObservedRunningTime="2025-11-21 09:43:36.45546042 +0000 UTC m=+161.564602928" Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.463984 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:36 crc kubenswrapper[4972]: E1121 09:43:36.464468 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:36.964454805 +0000 UTC m=+162.073597303 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.565316 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:36 crc kubenswrapper[4972]: E1121 09:43:36.565482 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:37.065457413 +0000 UTC m=+162.174599911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.565592 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:36 crc kubenswrapper[4972]: E1121 09:43:36.565953 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:37.065942157 +0000 UTC m=+162.175084745 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.644588 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:36 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:36 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:36 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.644675 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.666847 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:36 crc kubenswrapper[4972]: E1121 09:43:36.667247 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:37.167229053 +0000 UTC m=+162.276371551 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.768678 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:36 crc kubenswrapper[4972]: E1121 09:43:36.769094 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:37.269075265 +0000 UTC m=+162.378217833 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.870490 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:36 crc kubenswrapper[4972]: E1121 09:43:36.870712 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:37.37068335 +0000 UTC m=+162.479825848 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.870802 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:36 crc kubenswrapper[4972]: E1121 09:43:36.871145 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:37.371136122 +0000 UTC m=+162.480278620 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.972228 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:36 crc kubenswrapper[4972]: E1121 09:43:36.972439 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:37.472408908 +0000 UTC m=+162.581551406 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:36 crc kubenswrapper[4972]: I1121 09:43:36.972651 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:36 crc kubenswrapper[4972]: E1121 09:43:36.972981 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:37.472967884 +0000 UTC m=+162.582110382 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:37 crc kubenswrapper[4972]: I1121 09:43:37.073592 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:37 crc kubenswrapper[4972]: E1121 09:43:37.073820 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:37.573794607 +0000 UTC m=+162.682937095 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:37 crc kubenswrapper[4972]: I1121 09:43:37.074356 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:37 crc kubenswrapper[4972]: E1121 09:43:37.075622 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:37.575607048 +0000 UTC m=+162.684749606 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:37 crc kubenswrapper[4972]: I1121 09:43:37.177340 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:37 crc kubenswrapper[4972]: E1121 09:43:37.177497 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:37.67746505 +0000 UTC m=+162.786607548 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:37 crc kubenswrapper[4972]: I1121 09:43:37.177681 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:37 crc kubenswrapper[4972]: E1121 09:43:37.178080 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:37.678069887 +0000 UTC m=+162.787212475 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:37 crc kubenswrapper[4972]: I1121 09:43:37.279143 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:37 crc kubenswrapper[4972]: E1121 09:43:37.279515 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:37.779499677 +0000 UTC m=+162.888642175 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:37 crc kubenswrapper[4972]: I1121 09:43:37.381318 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:37 crc kubenswrapper[4972]: E1121 09:43:37.381712 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:37.881697699 +0000 UTC m=+162.990840197 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:37 crc kubenswrapper[4972]: I1121 09:43:37.482201 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:37 crc kubenswrapper[4972]: E1121 09:43:37.482550 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:37.982526362 +0000 UTC m=+163.091668860 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:37 crc kubenswrapper[4972]: I1121 09:43:37.482639 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:37 crc kubenswrapper[4972]: E1121 09:43:37.482999 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:37.982986795 +0000 UTC m=+163.092129293 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:37 crc kubenswrapper[4972]: I1121 09:43:37.584292 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:37 crc kubenswrapper[4972]: E1121 09:43:37.584629 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:38.08461413 +0000 UTC m=+163.193756628 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:37 crc kubenswrapper[4972]: I1121 09:43:37.630914 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:37 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:37 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:37 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:37 crc kubenswrapper[4972]: I1121 09:43:37.631123 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:37 crc kubenswrapper[4972]: I1121 09:43:37.686344 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:37 crc kubenswrapper[4972]: E1121 09:43:37.686711 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:38.186698709 +0000 UTC m=+163.295841207 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:37 crc kubenswrapper[4972]: I1121 09:43:37.787366 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:37 crc kubenswrapper[4972]: E1121 09:43:37.787745 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:38.287727387 +0000 UTC m=+163.396869895 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:37 crc kubenswrapper[4972]: I1121 09:43:37.889248 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:37 crc kubenswrapper[4972]: E1121 09:43:37.889893 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:38.389877698 +0000 UTC m=+163.499020196 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:37 crc kubenswrapper[4972]: I1121 09:43:37.943196 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" event={"ID":"a6597c90-c4ee-4856-b03f-f0fa1d3062f5","Type":"ContainerStarted","Data":"4ad3c2ca8a5207867bbcb1f951b750c6bcfd13fd61a134917233be8cebfbe381"} Nov 21 09:43:37 crc kubenswrapper[4972]: I1121 09:43:37.990891 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:37 crc kubenswrapper[4972]: E1121 09:43:37.991549 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:38.491534064 +0000 UTC m=+163.600676552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:38 crc kubenswrapper[4972]: I1121 09:43:38.092580 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:38 crc kubenswrapper[4972]: E1121 09:43:38.092990 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:38.592976794 +0000 UTC m=+163.702119282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:38 crc kubenswrapper[4972]: I1121 09:43:38.194279 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:38 crc kubenswrapper[4972]: E1121 09:43:38.194485 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:38.694454095 +0000 UTC m=+163.803596593 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:38 crc kubenswrapper[4972]: I1121 09:43:38.194821 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:38 crc kubenswrapper[4972]: E1121 09:43:38.195257 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:38.695248498 +0000 UTC m=+163.804390996 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:38 crc kubenswrapper[4972]: I1121 09:43:38.295558 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:38 crc kubenswrapper[4972]: E1121 09:43:38.295678 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:38.795658799 +0000 UTC m=+163.904801297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:38 crc kubenswrapper[4972]: I1121 09:43:38.295980 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:38 crc kubenswrapper[4972]: E1121 09:43:38.296374 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:38.796357529 +0000 UTC m=+163.905500027 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:38 crc kubenswrapper[4972]: I1121 09:43:38.397634 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:38 crc kubenswrapper[4972]: E1121 09:43:38.397813 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:38.897785569 +0000 UTC m=+164.006928067 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:38 crc kubenswrapper[4972]: I1121 09:43:38.398096 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:38 crc kubenswrapper[4972]: E1121 09:43:38.398398 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:38.898390186 +0000 UTC m=+164.007532684 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:38 crc kubenswrapper[4972]: I1121 09:43:38.499069 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:38 crc kubenswrapper[4972]: E1121 09:43:38.499280 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:38.999248619 +0000 UTC m=+164.108391137 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:38 crc kubenswrapper[4972]: I1121 09:43:38.499576 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:38 crc kubenswrapper[4972]: E1121 09:43:38.499876 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:38.999850876 +0000 UTC m=+164.108993374 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:38 crc kubenswrapper[4972]: I1121 09:43:38.600676 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:38 crc kubenswrapper[4972]: E1121 09:43:38.600921 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:39.100891395 +0000 UTC m=+164.210033893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:38 crc kubenswrapper[4972]: I1121 09:43:38.601009 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:38 crc kubenswrapper[4972]: E1121 09:43:38.601396 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:39.101384589 +0000 UTC m=+164.210527157 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:38 crc kubenswrapper[4972]: I1121 09:43:38.630130 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:38 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:38 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:38 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:38 crc kubenswrapper[4972]: I1121 09:43:38.630190 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:38 crc kubenswrapper[4972]: I1121 09:43:38.702351 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:38 crc kubenswrapper[4972]: E1121 09:43:38.702535 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:39.202510921 +0000 UTC m=+164.311653419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:38 crc kubenswrapper[4972]: I1121 09:43:38.702781 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:38 crc kubenswrapper[4972]: E1121 09:43:38.703106 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:39.203098777 +0000 UTC m=+164.312241275 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:38 crc kubenswrapper[4972]: I1121 09:43:38.803882 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:38 crc kubenswrapper[4972]: E1121 09:43:38.804089 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:39.304054404 +0000 UTC m=+164.413196912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:38 crc kubenswrapper[4972]: I1121 09:43:38.804186 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:38 crc kubenswrapper[4972]: E1121 09:43:38.804503 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:39.304493856 +0000 UTC m=+164.413636384 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:38 crc kubenswrapper[4972]: I1121 09:43:38.905010 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:38 crc kubenswrapper[4972]: E1121 09:43:38.905214 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:39.405186255 +0000 UTC m=+164.514328753 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:38 crc kubenswrapper[4972]: I1121 09:43:38.905405 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:38 crc kubenswrapper[4972]: E1121 09:43:38.905692 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:39.405680709 +0000 UTC m=+164.514823207 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.006170 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:39 crc kubenswrapper[4972]: E1121 09:43:39.006330 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:39.506308326 +0000 UTC m=+164.615450824 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.006558 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:39 crc kubenswrapper[4972]: E1121 09:43:39.006858 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:39.506847762 +0000 UTC m=+164.615990260 (durationBeforeRetry 500ms). 
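The block above repeats one failure: the kubelet cannot find kubevirt.io.hostpath-provisioner among its registered CSI drivers, so every MountDevice attempt for the image-registry PVC and every TearDown attempt for the old pod are rejected and requeued. A minimal client-go sketch such as the one below can confirm whether the driver is registered cluster-wide (CSIDriver objects) and on the node itself (the CSINode entry the kubelet maintains); the node name "crc" and the kubeconfig path are assumptions for illustration, not values taken from this log.

```go
// Illustrative sketch (not from the log): check CSI driver registration with client-go.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed kubeconfig path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Cluster-scoped CSIDriver objects: the driver's API-level registration.
	drivers, err := cs.StorageV1().CSIDrivers().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range drivers.Items {
		fmt.Println("CSIDriver:", d.Name)
	}

	// CSINode for "crc": the drivers the kubelet on that node has registered.
	// An empty spec.drivers list matches the "not found in the list of
	// registered CSI drivers" errors repeated in the log.
	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered on node:", d.Name)
	}
}
```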
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.107975 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:39 crc kubenswrapper[4972]: E1121 09:43:39.108453 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:39.608429076 +0000 UTC m=+164.717571584 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.175238 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fd2p7" podStartSLOduration=142.175223192 podStartE2EDuration="2m22.175223192s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:36.497667408 +0000 UTC m=+161.606809916" watchObservedRunningTime="2025-11-21 09:43:39.175223192 +0000 UTC m=+164.284365690" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.176699 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.177360 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.179717 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.179926 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.204107 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.209812 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:39 crc kubenswrapper[4972]: E1121 09:43:39.210148 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:39.710128043 +0000 UTC m=+164.819270621 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.308908 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lqggx"] Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.309949 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lqggx" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.310717 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:39 crc kubenswrapper[4972]: E1121 09:43:39.310894 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:39.810804702 +0000 UTC m=+164.919947200 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.310988 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c76dda7-f44d-4fa6-9471-841d962d757c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8c76dda7-f44d-4fa6-9471-841d962d757c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.311103 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c76dda7-f44d-4fa6-9471-841d962d757c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8c76dda7-f44d-4fa6-9471-841d962d757c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.311129 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:39 crc kubenswrapper[4972]: E1121 09:43:39.311355 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:39.811344467 +0000 UTC m=+164.920486965 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.311680 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.336859 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lqggx"] Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.412028 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:39 crc kubenswrapper[4972]: E1121 09:43:39.412255 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:39.912222161 +0000 UTC m=+165.021364659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.412324 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.412378 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c76dda7-f44d-4fa6-9471-841d962d757c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8c76dda7-f44d-4fa6-9471-841d962d757c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.412475 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e0ba187-0ec6-40e7-bd83-771510a29a5b-utilities\") pod \"certified-operators-lqggx\" (UID: \"6e0ba187-0ec6-40e7-bd83-771510a29a5b\") " pod="openshift-marketplace/certified-operators-lqggx" Nov 21 09:43:39 crc kubenswrapper[4972]: E1121 09:43:39.412672 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-11-21 09:43:39.912663374 +0000 UTC m=+165.021805872 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.412705 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e0ba187-0ec6-40e7-bd83-771510a29a5b-catalog-content\") pod \"certified-operators-lqggx\" (UID: \"6e0ba187-0ec6-40e7-bd83-771510a29a5b\") " pod="openshift-marketplace/certified-operators-lqggx" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.412780 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9s4p\" (UniqueName: \"kubernetes.io/projected/6e0ba187-0ec6-40e7-bd83-771510a29a5b-kube-api-access-z9s4p\") pod \"certified-operators-lqggx\" (UID: \"6e0ba187-0ec6-40e7-bd83-771510a29a5b\") " pod="openshift-marketplace/certified-operators-lqggx" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.412815 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c76dda7-f44d-4fa6-9471-841d962d757c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8c76dda7-f44d-4fa6-9471-841d962d757c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.412898 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c76dda7-f44d-4fa6-9471-841d962d757c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8c76dda7-f44d-4fa6-9471-841d962d757c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.460101 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c76dda7-f44d-4fa6-9471-841d962d757c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8c76dda7-f44d-4fa6-9471-841d962d757c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.492059 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.513913 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:39 crc kubenswrapper[4972]: E1121 09:43:39.514102 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-21 09:43:40.014076853 +0000 UTC m=+165.123219351 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.514163 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e0ba187-0ec6-40e7-bd83-771510a29a5b-utilities\") pod \"certified-operators-lqggx\" (UID: \"6e0ba187-0ec6-40e7-bd83-771510a29a5b\") " pod="openshift-marketplace/certified-operators-lqggx" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.514251 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e0ba187-0ec6-40e7-bd83-771510a29a5b-catalog-content\") pod \"certified-operators-lqggx\" (UID: \"6e0ba187-0ec6-40e7-bd83-771510a29a5b\") " pod="openshift-marketplace/certified-operators-lqggx" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.514302 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9s4p\" (UniqueName: \"kubernetes.io/projected/6e0ba187-0ec6-40e7-bd83-771510a29a5b-kube-api-access-z9s4p\") pod \"certified-operators-lqggx\" (UID: \"6e0ba187-0ec6-40e7-bd83-771510a29a5b\") " pod="openshift-marketplace/certified-operators-lqggx" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.514334 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:39 crc kubenswrapper[4972]: E1121 09:43:39.514655 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:40.014640949 +0000 UTC m=+165.123783447 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.514738 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e0ba187-0ec6-40e7-bd83-771510a29a5b-utilities\") pod \"certified-operators-lqggx\" (UID: \"6e0ba187-0ec6-40e7-bd83-771510a29a5b\") " pod="openshift-marketplace/certified-operators-lqggx" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.514765 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e0ba187-0ec6-40e7-bd83-771510a29a5b-catalog-content\") pod \"certified-operators-lqggx\" (UID: \"6e0ba187-0ec6-40e7-bd83-771510a29a5b\") " pod="openshift-marketplace/certified-operators-lqggx" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.517673 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b4t9l"] Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.518613 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b4t9l" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.528006 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.546351 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b4t9l"] Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.576507 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9s4p\" (UniqueName: \"kubernetes.io/projected/6e0ba187-0ec6-40e7-bd83-771510a29a5b-kube-api-access-z9s4p\") pod \"certified-operators-lqggx\" (UID: \"6e0ba187-0ec6-40e7-bd83-771510a29a5b\") " pod="openshift-marketplace/certified-operators-lqggx" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.615343 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:39 crc kubenswrapper[4972]: E1121 09:43:39.615492 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:40.115475432 +0000 UTC m=+165.224617930 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.615525 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13ef553c-f6bd-4af2-9c0e-643cd14f9290-utilities\") pod \"community-operators-b4t9l\" (UID: \"13ef553c-f6bd-4af2-9c0e-643cd14f9290\") " pod="openshift-marketplace/community-operators-b4t9l" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.615587 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.615610 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13ef553c-f6bd-4af2-9c0e-643cd14f9290-catalog-content\") pod \"community-operators-b4t9l\" (UID: \"13ef553c-f6bd-4af2-9c0e-643cd14f9290\") " pod="openshift-marketplace/community-operators-b4t9l" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.615643 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvfxg\" (UniqueName: \"kubernetes.io/projected/13ef553c-f6bd-4af2-9c0e-643cd14f9290-kube-api-access-gvfxg\") pod \"community-operators-b4t9l\" (UID: \"13ef553c-f6bd-4af2-9c0e-643cd14f9290\") " pod="openshift-marketplace/community-operators-b4t9l" Nov 21 09:43:39 crc kubenswrapper[4972]: E1121 09:43:39.615854 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:40.115841313 +0000 UTC m=+165.224983811 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.625914 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lqggx" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.635425 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:39 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:39 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:39 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.635491 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.705172 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pnsjx"] Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.706167 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pnsjx" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.716241 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.716452 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13ef553c-f6bd-4af2-9c0e-643cd14f9290-utilities\") pod \"community-operators-b4t9l\" (UID: \"13ef553c-f6bd-4af2-9c0e-643cd14f9290\") " pod="openshift-marketplace/community-operators-b4t9l" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.716546 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13ef553c-f6bd-4af2-9c0e-643cd14f9290-catalog-content\") pod \"community-operators-b4t9l\" (UID: \"13ef553c-f6bd-4af2-9c0e-643cd14f9290\") " pod="openshift-marketplace/community-operators-b4t9l" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.716593 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvfxg\" (UniqueName: \"kubernetes.io/projected/13ef553c-f6bd-4af2-9c0e-643cd14f9290-kube-api-access-gvfxg\") pod \"community-operators-b4t9l\" (UID: \"13ef553c-f6bd-4af2-9c0e-643cd14f9290\") " pod="openshift-marketplace/community-operators-b4t9l" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.717181 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13ef553c-f6bd-4af2-9c0e-643cd14f9290-utilities\") pod \"community-operators-b4t9l\" (UID: \"13ef553c-f6bd-4af2-9c0e-643cd14f9290\") " pod="openshift-marketplace/community-operators-b4t9l" Nov 21 09:43:39 crc kubenswrapper[4972]: E1121 09:43:39.717250 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-21 09:43:40.217237872 +0000 UTC m=+165.326380370 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.718273 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13ef553c-f6bd-4af2-9c0e-643cd14f9290-catalog-content\") pod \"community-operators-b4t9l\" (UID: \"13ef553c-f6bd-4af2-9c0e-643cd14f9290\") " pod="openshift-marketplace/community-operators-b4t9l" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.728147 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pnsjx"] Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.797880 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvfxg\" (UniqueName: \"kubernetes.io/projected/13ef553c-f6bd-4af2-9c0e-643cd14f9290-kube-api-access-gvfxg\") pod \"community-operators-b4t9l\" (UID: \"13ef553c-f6bd-4af2-9c0e-643cd14f9290\") " pod="openshift-marketplace/community-operators-b4t9l" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.820575 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6136a605-ff46-4462-808b-cc8d2c28faea-utilities\") pod \"certified-operators-pnsjx\" (UID: \"6136a605-ff46-4462-808b-cc8d2c28faea\") " pod="openshift-marketplace/certified-operators-pnsjx" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.820930 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr6sk\" (UniqueName: \"kubernetes.io/projected/6136a605-ff46-4462-808b-cc8d2c28faea-kube-api-access-kr6sk\") pod \"certified-operators-pnsjx\" (UID: \"6136a605-ff46-4462-808b-cc8d2c28faea\") " pod="openshift-marketplace/certified-operators-pnsjx" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.821010 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6136a605-ff46-4462-808b-cc8d2c28faea-catalog-content\") pod \"certified-operators-pnsjx\" (UID: \"6136a605-ff46-4462-808b-cc8d2c28faea\") " pod="openshift-marketplace/certified-operators-pnsjx" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.821101 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:39 crc kubenswrapper[4972]: E1121 09:43:39.821435 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-21 09:43:40.32142089 +0000 UTC m=+165.430563388 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.850183 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b4t9l" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.910925 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rc758"] Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.915040 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rc758" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.922360 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.923200 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6136a605-ff46-4462-808b-cc8d2c28faea-utilities\") pod \"certified-operators-pnsjx\" (UID: \"6136a605-ff46-4462-808b-cc8d2c28faea\") " pod="openshift-marketplace/certified-operators-pnsjx" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.923259 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr6sk\" (UniqueName: \"kubernetes.io/projected/6136a605-ff46-4462-808b-cc8d2c28faea-kube-api-access-kr6sk\") pod \"certified-operators-pnsjx\" (UID: \"6136a605-ff46-4462-808b-cc8d2c28faea\") " pod="openshift-marketplace/certified-operators-pnsjx" Nov 21 09:43:39 crc kubenswrapper[4972]: E1121 09:43:39.923302 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:40.423280491 +0000 UTC m=+165.532422989 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.923350 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs\") pod \"network-metrics-daemon-k9mnh\" (UID: \"df5e96f4-727c-44c1-8e2f-e624c912430b\") " pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.923446 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6136a605-ff46-4462-808b-cc8d2c28faea-catalog-content\") pod \"certified-operators-pnsjx\" (UID: \"6136a605-ff46-4462-808b-cc8d2c28faea\") " pod="openshift-marketplace/certified-operators-pnsjx" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.923642 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.923676 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6136a605-ff46-4462-808b-cc8d2c28faea-utilities\") pod \"certified-operators-pnsjx\" (UID: \"6136a605-ff46-4462-808b-cc8d2c28faea\") " pod="openshift-marketplace/certified-operators-pnsjx" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.924217 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6136a605-ff46-4462-808b-cc8d2c28faea-catalog-content\") pod \"certified-operators-pnsjx\" (UID: \"6136a605-ff46-4462-808b-cc8d2c28faea\") " pod="openshift-marketplace/certified-operators-pnsjx" Nov 21 09:43:39 crc kubenswrapper[4972]: E1121 09:43:39.924222 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:40.424197247 +0000 UTC m=+165.533339745 (durationBeforeRetry 500ms). 
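The SyncLoop ADD and SyncLoop UPDATE entries interleaved here show the kubelet picking up new openshift-marketplace catalog pods from the API server before mounting their utilities and catalog-content volumes. The sketch below is not the kubelet's sync loop; it is a small client-go informer that surfaces the same ADD/UPDATE stream for that namespace, with the kubeconfig path assumed.

```go
// Illustrative sketch (not the kubelet's sync loop): watch pod ADD/UPDATE events
// in openshift-marketplace, the API stream behind the "SyncLoop ADD"/"SyncLoop
// UPDATE" entries above.
package main

import (
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed kubeconfig path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	factory := informers.NewSharedInformerFactoryWithOptions(
		cs, 30*time.Second, informers.WithNamespace("openshift-marketplace"))
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("ADD", obj.(*corev1.Pod).Name)
		},
		UpdateFunc: func(_, newObj interface{}) {
			fmt.Println("UPDATE", newObj.(*corev1.Pod).Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	cache.WaitForCacheSync(stop, podInformer.HasSynced)
	select {} // keep watching until interrupted
}
```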
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.936281 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rc758"] Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.939749 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df5e96f4-727c-44c1-8e2f-e624c912430b-metrics-certs\") pod \"network-metrics-daemon-k9mnh\" (UID: \"df5e96f4-727c-44c1-8e2f-e624c912430b\") " pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.982971 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr6sk\" (UniqueName: \"kubernetes.io/projected/6136a605-ff46-4462-808b-cc8d2c28faea-kube-api-access-kr6sk\") pod \"certified-operators-pnsjx\" (UID: \"6136a605-ff46-4462-808b-cc8d2c28faea\") " pod="openshift-marketplace/certified-operators-pnsjx" Nov 21 09:43:39 crc kubenswrapper[4972]: I1121 09:43:39.983429 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9mnh" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.030312 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pnsjx" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.030552 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.030714 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b43815a-969e-432e-ac57-843bee51860c-utilities\") pod \"community-operators-rc758\" (UID: \"1b43815a-969e-432e-ac57-843bee51860c\") " pod="openshift-marketplace/community-operators-rc758" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.030780 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psklv\" (UniqueName: \"kubernetes.io/projected/1b43815a-969e-432e-ac57-843bee51860c-kube-api-access-psklv\") pod \"community-operators-rc758\" (UID: \"1b43815a-969e-432e-ac57-843bee51860c\") " pod="openshift-marketplace/community-operators-rc758" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.030865 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b43815a-969e-432e-ac57-843bee51860c-catalog-content\") pod \"community-operators-rc758\" (UID: \"1b43815a-969e-432e-ac57-843bee51860c\") " pod="openshift-marketplace/community-operators-rc758" Nov 21 09:43:40 crc kubenswrapper[4972]: E1121 09:43:40.030897 4972 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:40.530871826 +0000 UTC m=+165.640014324 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.035067 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.035229 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.094140 4972 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xxsnz container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 21 09:43:40 crc kubenswrapper[4972]: [+]log ok Nov 21 09:43:40 crc kubenswrapper[4972]: [+]etcd ok Nov 21 09:43:40 crc kubenswrapper[4972]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 21 09:43:40 crc kubenswrapper[4972]: [+]poststarthook/generic-apiserver-start-informers ok Nov 21 09:43:40 crc kubenswrapper[4972]: [+]poststarthook/max-in-flight-filter ok Nov 21 09:43:40 crc kubenswrapper[4972]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 21 09:43:40 crc kubenswrapper[4972]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 21 09:43:40 crc kubenswrapper[4972]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 21 09:43:40 crc kubenswrapper[4972]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Nov 21 09:43:40 crc kubenswrapper[4972]: [+]poststarthook/project.openshift.io-projectcache ok Nov 21 09:43:40 crc kubenswrapper[4972]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 21 09:43:40 crc kubenswrapper[4972]: [+]poststarthook/openshift.io-startinformers ok Nov 21 09:43:40 crc kubenswrapper[4972]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 21 09:43:40 crc kubenswrapper[4972]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 21 09:43:40 crc kubenswrapper[4972]: livez check failed Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.094199 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" podUID="12c25e78-a24e-4962-8976-3bc097fdaaf6" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.132244 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b43815a-969e-432e-ac57-843bee51860c-catalog-content\") pod \"community-operators-rc758\" (UID: \"1b43815a-969e-432e-ac57-843bee51860c\") " pod="openshift-marketplace/community-operators-rc758" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 
09:43:40.132333 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b43815a-969e-432e-ac57-843bee51860c-utilities\") pod \"community-operators-rc758\" (UID: \"1b43815a-969e-432e-ac57-843bee51860c\") " pod="openshift-marketplace/community-operators-rc758" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.132377 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.132450 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psklv\" (UniqueName: \"kubernetes.io/projected/1b43815a-969e-432e-ac57-843bee51860c-kube-api-access-psklv\") pod \"community-operators-rc758\" (UID: \"1b43815a-969e-432e-ac57-843bee51860c\") " pod="openshift-marketplace/community-operators-rc758" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.133517 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b43815a-969e-432e-ac57-843bee51860c-catalog-content\") pod \"community-operators-rc758\" (UID: \"1b43815a-969e-432e-ac57-843bee51860c\") " pod="openshift-marketplace/community-operators-rc758" Nov 21 09:43:40 crc kubenswrapper[4972]: E1121 09:43:40.134468 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:40.634433666 +0000 UTC m=+165.743576324 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.134544 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b43815a-969e-432e-ac57-843bee51860c-utilities\") pod \"community-operators-rc758\" (UID: \"1b43815a-969e-432e-ac57-843bee51860c\") " pod="openshift-marketplace/community-operators-rc758" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.190524 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psklv\" (UniqueName: \"kubernetes.io/projected/1b43815a-969e-432e-ac57-843bee51860c-kube-api-access-psklv\") pod \"community-operators-rc758\" (UID: \"1b43815a-969e-432e-ac57-843bee51860c\") " pod="openshift-marketplace/community-operators-rc758" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.233233 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:40 crc kubenswrapper[4972]: E1121 09:43:40.233585 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:40.733571011 +0000 UTC m=+165.842713509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.277395 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rc758" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.301845 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-n6wh5"] Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.302346 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" podUID="6068d0a6-b0b7-44af-8dcd-995d728bf03a" containerName="controller-manager" containerID="cri-o://d8439ab6c9eadcb01eea65f5c542deb3144152721c1f235d81724e1315c87d87" gracePeriod=30 Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.338258 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.338831 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:40 crc kubenswrapper[4972]: E1121 09:43:40.339214 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:40.83919584 +0000 UTC m=+165.948338338 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.375003 4972 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-n6wh5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": read tcp 10.217.0.2:39150->10.217.0.30:8443: read: connection reset by peer" start-of-body= Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.375076 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" podUID="6068d0a6-b0b7-44af-8dcd-995d728bf03a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": read tcp 10.217.0.2:39150->10.217.0.30:8443: read: connection reset by peer" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.440462 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:40 crc kubenswrapper[4972]: E1121 09:43:40.440957 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:40.940936209 +0000 UTC m=+166.050078707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.474235 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lqggx"] Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.490243 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b4t9l"] Nov 21 09:43:40 crc kubenswrapper[4972]: W1121 09:43:40.531092 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e0ba187_0ec6_40e7_bd83_771510a29a5b.slice/crio-33d88298d53f389d72ba4a0713680da8157d5565c8e9e3452cca956f3d4fbf3b WatchSource:0}: Error finding container 33d88298d53f389d72ba4a0713680da8157d5565c8e9e3452cca956f3d4fbf3b: Status 404 returned error can't find the container with id 33d88298d53f389d72ba4a0713680da8157d5565c8e9e3452cca956f3d4fbf3b Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.542018 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.542026 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-k9mnh"] Nov 21 09:43:40 crc kubenswrapper[4972]: E1121 09:43:40.542326 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:41.042312337 +0000 UTC m=+166.151454835 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.560446 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.561142 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.568669 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.568934 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.591058 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.614590 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.614642 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sgptm" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.620016 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.624990 4972 patch_prober.go:28] interesting pod/console-f9d7485db-j7xxl container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.625901 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-j7xxl" podUID="7b0e4d64-f901-4a4e-9644-408eb534401e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.630146 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:40 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:40 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:40 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.630418 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.646251 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:40 crc kubenswrapper[4972]: E1121 09:43:40.646385 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:41.146350121 +0000 UTC m=+166.255492619 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.646671 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b88d1d32-e641-4b28-beb1-a3103fbf22d8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b88d1d32-e641-4b28-beb1-a3103fbf22d8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.646769 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.646895 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b88d1d32-e641-4b28-beb1-a3103fbf22d8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b88d1d32-e641-4b28-beb1-a3103fbf22d8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 09:43:40 crc kubenswrapper[4972]: E1121 09:43:40.647175 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:41.147128563 +0000 UTC m=+166.256271131 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.748719 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:40 crc kubenswrapper[4972]: E1121 09:43:40.748748 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:41.248725578 +0000 UTC m=+166.357868076 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.748945 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.749005 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b88d1d32-e641-4b28-beb1-a3103fbf22d8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b88d1d32-e641-4b28-beb1-a3103fbf22d8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.749116 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b88d1d32-e641-4b28-beb1-a3103fbf22d8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b88d1d32-e641-4b28-beb1-a3103fbf22d8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.749174 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b88d1d32-e641-4b28-beb1-a3103fbf22d8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b88d1d32-e641-4b28-beb1-a3103fbf22d8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 09:43:40 crc kubenswrapper[4972]: E1121 09:43:40.749539 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:41.24951799 +0000 UTC m=+166.358660558 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.772245 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b88d1d32-e641-4b28-beb1-a3103fbf22d8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b88d1d32-e641-4b28-beb1-a3103fbf22d8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.777871 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.836386 4972 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-n6wh5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.836437 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" podUID="6068d0a6-b0b7-44af-8dcd-995d728bf03a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.850760 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:40 crc kubenswrapper[4972]: E1121 09:43:40.850931 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:41.350908769 +0000 UTC m=+166.460051257 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.851167 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:40 crc kubenswrapper[4972]: E1121 09:43:40.851487 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:41.351478205 +0000 UTC m=+166.460620703 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.866425 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.884527 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.884585 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.886565 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.886619 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.891219 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-b5tdm" Nov 21 09:43:40 crc 
kubenswrapper[4972]: I1121 09:43:40.934838 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rc758"] Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.941985 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.957065 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:40 crc kubenswrapper[4972]: E1121 09:43:40.962958 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:41.462922909 +0000 UTC m=+166.572065407 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:40 crc kubenswrapper[4972]: I1121 09:43:40.964262 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pnsjx"] Nov 21 09:43:40 crc kubenswrapper[4972]: W1121 09:43:40.985538 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6136a605_ff46_4462_808b_cc8d2c28faea.slice/crio-bfe80c59b5e6d7be3d314228eee18e046c1dae87b08de0707b6fd74753b2dbdc WatchSource:0}: Error finding container bfe80c59b5e6d7be3d314228eee18e046c1dae87b08de0707b6fd74753b2dbdc: Status 404 returned error can't find the container with id bfe80c59b5e6d7be3d314228eee18e046c1dae87b08de0707b6fd74753b2dbdc Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.005767 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4t9l" event={"ID":"13ef553c-f6bd-4af2-9c0e-643cd14f9290","Type":"ContainerStarted","Data":"312e2f5f576f6348b8d342fbd9cd617a12125dd520678f5db0be33b2c5bf8bb4"} Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.007202 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8c76dda7-f44d-4fa6-9471-841d962d757c","Type":"ContainerStarted","Data":"6a89f7992c1bc8798563a4b83a06989777ce46a6f2930da1e6a24d58ff0ac16a"} Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.010817 4972 generic.go:334] "Generic (PLEG): container finished" podID="6068d0a6-b0b7-44af-8dcd-995d728bf03a" containerID="d8439ab6c9eadcb01eea65f5c542deb3144152721c1f235d81724e1315c87d87" exitCode=0 Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.010866 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" event={"ID":"6068d0a6-b0b7-44af-8dcd-995d728bf03a","Type":"ContainerDied","Data":"d8439ab6c9eadcb01eea65f5c542deb3144152721c1f235d81724e1315c87d87"} Nov 21 
09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.012251 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqggx" event={"ID":"6e0ba187-0ec6-40e7-bd83-771510a29a5b","Type":"ContainerStarted","Data":"33d88298d53f389d72ba4a0713680da8157d5565c8e9e3452cca956f3d4fbf3b"} Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.013228 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rc758" event={"ID":"1b43815a-969e-432e-ac57-843bee51860c","Type":"ContainerStarted","Data":"4f1df5923109ad16885a7783181fa83da6c4805b05ffd21b2e5c639d3e85d98e"} Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.014365 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pnsjx" event={"ID":"6136a605-ff46-4462-808b-cc8d2c28faea","Type":"ContainerStarted","Data":"bfe80c59b5e6d7be3d314228eee18e046c1dae87b08de0707b6fd74753b2dbdc"} Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.044426 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" event={"ID":"df5e96f4-727c-44c1-8e2f-e624c912430b","Type":"ContainerStarted","Data":"520a95806a277d347a51f9af2d89c83a3322dec74e152ea2b01bfb0414791e90"} Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.046763 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqxsm" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.059722 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:41 crc kubenswrapper[4972]: E1121 09:43:41.071295 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:41.571277736 +0000 UTC m=+166.680420234 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.092112 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.113745 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kj5r8" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.160665 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:41 crc kubenswrapper[4972]: E1121 09:43:41.161433 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:41.661419825 +0000 UTC m=+166.770562323 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.262109 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:41 crc kubenswrapper[4972]: E1121 09:43:41.262524 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:41.762507906 +0000 UTC m=+166.871650404 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.291541 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.338400 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.343267 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l4nps" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.363282 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:41 crc kubenswrapper[4972]: E1121 09:43:41.363419 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:41.86340142 +0000 UTC m=+166.972543918 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.363592 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:41 crc kubenswrapper[4972]: E1121 09:43:41.363880 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:41.863872294 +0000 UTC m=+166.973014792 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.464544 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:41 crc kubenswrapper[4972]: E1121 09:43:41.464669 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:41.964648795 +0000 UTC m=+167.073791293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.465027 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:41 crc kubenswrapper[4972]: E1121 09:43:41.465296 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:41.965289133 +0000 UTC m=+167.074431631 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.504902 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hh4hc"] Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.506150 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hh4hc" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.510931 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.520914 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hh4hc"] Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.566816 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:41 crc kubenswrapper[4972]: E1121 09:43:41.567009 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:42.066982951 +0000 UTC m=+167.176125439 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.567115 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:41 crc kubenswrapper[4972]: E1121 09:43:41.567457 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:42.067448324 +0000 UTC m=+167.176590822 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.607110 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pm48b" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.627503 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-g9znh" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.630115 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:41 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:41 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:41 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.630165 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.667732 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:41 crc kubenswrapper[4972]: E1121 09:43:41.668074 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:42.16805287 +0000 UTC m=+167.277195368 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.668135 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.668171 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827-catalog-content\") pod \"redhat-marketplace-hh4hc\" (UID: \"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827\") " pod="openshift-marketplace/redhat-marketplace-hh4hc" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.668223 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827-utilities\") pod \"redhat-marketplace-hh4hc\" (UID: \"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827\") " pod="openshift-marketplace/redhat-marketplace-hh4hc" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.668249 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvczx\" (UniqueName: \"kubernetes.io/projected/ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827-kube-api-access-zvczx\") pod \"redhat-marketplace-hh4hc\" (UID: \"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827\") " pod="openshift-marketplace/redhat-marketplace-hh4hc" Nov 21 09:43:41 crc kubenswrapper[4972]: E1121 09:43:41.668487 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:42.168475862 +0000 UTC m=+167.277618350 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.672216 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.684793 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.769341 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:41 crc kubenswrapper[4972]: E1121 09:43:41.769492 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:42.26947165 +0000 UTC m=+167.378614148 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.769749 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.770304 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827-catalog-content\") pod \"redhat-marketplace-hh4hc\" (UID: \"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827\") " pod="openshift-marketplace/redhat-marketplace-hh4hc" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.770389 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827-utilities\") pod \"redhat-marketplace-hh4hc\" (UID: \"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827\") " pod="openshift-marketplace/redhat-marketplace-hh4hc" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.770414 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvczx\" (UniqueName: 
\"kubernetes.io/projected/ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827-kube-api-access-zvczx\") pod \"redhat-marketplace-hh4hc\" (UID: \"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827\") " pod="openshift-marketplace/redhat-marketplace-hh4hc" Nov 21 09:43:41 crc kubenswrapper[4972]: E1121 09:43:41.771230 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:42.271207209 +0000 UTC m=+167.380349917 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.782487 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827-catalog-content\") pod \"redhat-marketplace-hh4hc\" (UID: \"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827\") " pod="openshift-marketplace/redhat-marketplace-hh4hc" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.785992 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827-utilities\") pod \"redhat-marketplace-hh4hc\" (UID: \"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827\") " pod="openshift-marketplace/redhat-marketplace-hh4hc" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.799289 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvczx\" (UniqueName: \"kubernetes.io/projected/ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827-kube-api-access-zvczx\") pod \"redhat-marketplace-hh4hc\" (UID: \"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827\") " pod="openshift-marketplace/redhat-marketplace-hh4hc" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.819559 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hh4hc" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.871623 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:41 crc kubenswrapper[4972]: E1121 09:43:41.871760 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:42.371730143 +0000 UTC m=+167.480872651 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.871900 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:41 crc kubenswrapper[4972]: E1121 09:43:41.872367 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:42.37232945 +0000 UTC m=+167.481471958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.905421 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7ccfh"] Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.906836 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ccfh" Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.923012 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ccfh"] Nov 21 09:43:41 crc kubenswrapper[4972]: I1121 09:43:41.973671 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:41 crc kubenswrapper[4972]: E1121 09:43:41.974117 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:42.47410181 +0000 UTC m=+167.583244308 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.021044 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hh4hc"] Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.058941 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8c76dda7-f44d-4fa6-9471-841d962d757c","Type":"ContainerStarted","Data":"0de23111b6c2ffa91c286b54b39d0de574796f055f6dfe1b16d3bc66cfcfdc93"} Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.060619 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqggx" event={"ID":"6e0ba187-0ec6-40e7-bd83-771510a29a5b","Type":"ContainerStarted","Data":"156961e40bf26f82a583f0e2dcaee19f8282962279c498b17d1ce0543b0c5ae2"} Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.062396 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hh4hc" event={"ID":"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827","Type":"ContainerStarted","Data":"ffd7f437b1743e69e0479c5822368148add76b86ba0ff8451ceed2d13cf696c3"} Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.063266 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b88d1d32-e641-4b28-beb1-a3103fbf22d8","Type":"ContainerStarted","Data":"c56d006e0a9147cba9c391e1d9f593216ac14b721c72fc710817be8c181d6b46"} Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.076098 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49b866e2-c40e-4b45-acfc-965161cabf5c-utilities\") pod \"redhat-marketplace-7ccfh\" (UID: \"49b866e2-c40e-4b45-acfc-965161cabf5c\") " pod="openshift-marketplace/redhat-marketplace-7ccfh" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.076153 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc6rx\" (UniqueName: \"kubernetes.io/projected/49b866e2-c40e-4b45-acfc-965161cabf5c-kube-api-access-kc6rx\") pod \"redhat-marketplace-7ccfh\" (UID: \"49b866e2-c40e-4b45-acfc-965161cabf5c\") " pod="openshift-marketplace/redhat-marketplace-7ccfh" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.076381 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.076478 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49b866e2-c40e-4b45-acfc-965161cabf5c-catalog-content\") pod \"redhat-marketplace-7ccfh\" (UID: 
\"49b866e2-c40e-4b45-acfc-965161cabf5c\") " pod="openshift-marketplace/redhat-marketplace-7ccfh" Nov 21 09:43:42 crc kubenswrapper[4972]: E1121 09:43:42.076776 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:42.576760365 +0000 UTC m=+167.685903053 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.177801 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:42 crc kubenswrapper[4972]: E1121 09:43:42.178007 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:42.677977278 +0000 UTC m=+167.787119776 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.178065 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc6rx\" (UniqueName: \"kubernetes.io/projected/49b866e2-c40e-4b45-acfc-965161cabf5c-kube-api-access-kc6rx\") pod \"redhat-marketplace-7ccfh\" (UID: \"49b866e2-c40e-4b45-acfc-965161cabf5c\") " pod="openshift-marketplace/redhat-marketplace-7ccfh" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.178141 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.178210 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49b866e2-c40e-4b45-acfc-965161cabf5c-catalog-content\") pod \"redhat-marketplace-7ccfh\" (UID: \"49b866e2-c40e-4b45-acfc-965161cabf5c\") " pod="openshift-marketplace/redhat-marketplace-7ccfh" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.178357 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49b866e2-c40e-4b45-acfc-965161cabf5c-utilities\") pod \"redhat-marketplace-7ccfh\" (UID: \"49b866e2-c40e-4b45-acfc-965161cabf5c\") " pod="openshift-marketplace/redhat-marketplace-7ccfh" Nov 21 09:43:42 crc kubenswrapper[4972]: E1121 09:43:42.179504 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:42.679492782 +0000 UTC m=+167.788635370 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.181155 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49b866e2-c40e-4b45-acfc-965161cabf5c-utilities\") pod \"redhat-marketplace-7ccfh\" (UID: \"49b866e2-c40e-4b45-acfc-965161cabf5c\") " pod="openshift-marketplace/redhat-marketplace-7ccfh" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.181408 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49b866e2-c40e-4b45-acfc-965161cabf5c-catalog-content\") pod \"redhat-marketplace-7ccfh\" (UID: \"49b866e2-c40e-4b45-acfc-965161cabf5c\") " pod="openshift-marketplace/redhat-marketplace-7ccfh" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.196362 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc6rx\" (UniqueName: \"kubernetes.io/projected/49b866e2-c40e-4b45-acfc-965161cabf5c-kube-api-access-kc6rx\") pod \"redhat-marketplace-7ccfh\" (UID: \"49b866e2-c40e-4b45-acfc-965161cabf5c\") " pod="openshift-marketplace/redhat-marketplace-7ccfh" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.238597 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ccfh" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.279593 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:42 crc kubenswrapper[4972]: E1121 09:43:42.280172 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:42.78015219 +0000 UTC m=+167.889294688 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.280319 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:42 crc kubenswrapper[4972]: E1121 09:43:42.280590 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:42.780581822 +0000 UTC m=+167.889724320 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.319571 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.349444 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kwmvg"] Nov 21 09:43:42 crc kubenswrapper[4972]: E1121 09:43:42.349651 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6068d0a6-b0b7-44af-8dcd-995d728bf03a" containerName="controller-manager" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.349662 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="6068d0a6-b0b7-44af-8dcd-995d728bf03a" containerName="controller-manager" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.349763 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="6068d0a6-b0b7-44af-8dcd-995d728bf03a" containerName="controller-manager" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.351205 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.367709 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kwmvg"] Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.381726 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:42 crc kubenswrapper[4972]: E1121 09:43:42.381943 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:42.881904469 +0000 UTC m=+167.991046967 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.382366 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:42 crc kubenswrapper[4972]: E1121 09:43:42.382719 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:42.882711411 +0000 UTC m=+167.991853909 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.412211 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ccfh"] Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.483379 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6068d0a6-b0b7-44af-8dcd-995d728bf03a-client-ca\") pod \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\" (UID: \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\") " Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.483484 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6068d0a6-b0b7-44af-8dcd-995d728bf03a-proxy-ca-bundles\") pod \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\" (UID: \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\") " Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.483592 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6068d0a6-b0b7-44af-8dcd-995d728bf03a-config\") pod \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\" (UID: \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\") " Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.483790 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.483879 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drsg6\" (UniqueName: \"kubernetes.io/projected/6068d0a6-b0b7-44af-8dcd-995d728bf03a-kube-api-access-drsg6\") pod \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\" (UID: \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\") " Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.483967 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6068d0a6-b0b7-44af-8dcd-995d728bf03a-serving-cert\") pod \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\" (UID: \"6068d0a6-b0b7-44af-8dcd-995d728bf03a\") " Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.484186 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-config\") pod \"controller-manager-879f6c89f-kwmvg\" (UID: \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" Nov 21 09:43:42 crc kubenswrapper[4972]: E1121 09:43:42.484269 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-21 09:43:42.984246174 +0000 UTC m=+168.093388682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.484342 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-kwmvg\" (UID: \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.484430 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zww24\" (UniqueName: \"kubernetes.io/projected/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-kube-api-access-zww24\") pod \"controller-manager-879f6c89f-kwmvg\" (UID: \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.484461 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-serving-cert\") pod \"controller-manager-879f6c89f-kwmvg\" (UID: \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.484482 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-client-ca\") pod \"controller-manager-879f6c89f-kwmvg\" (UID: \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.484645 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6068d0a6-b0b7-44af-8dcd-995d728bf03a-client-ca" (OuterVolumeSpecName: "client-ca") pod "6068d0a6-b0b7-44af-8dcd-995d728bf03a" (UID: "6068d0a6-b0b7-44af-8dcd-995d728bf03a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.484690 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6068d0a6-b0b7-44af-8dcd-995d728bf03a-config" (OuterVolumeSpecName: "config") pod "6068d0a6-b0b7-44af-8dcd-995d728bf03a" (UID: "6068d0a6-b0b7-44af-8dcd-995d728bf03a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.494567 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6068d0a6-b0b7-44af-8dcd-995d728bf03a-kube-api-access-drsg6" (OuterVolumeSpecName: "kube-api-access-drsg6") pod "6068d0a6-b0b7-44af-8dcd-995d728bf03a" (UID: "6068d0a6-b0b7-44af-8dcd-995d728bf03a"). 
InnerVolumeSpecName "kube-api-access-drsg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.495226 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6068d0a6-b0b7-44af-8dcd-995d728bf03a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6068d0a6-b0b7-44af-8dcd-995d728bf03a" (UID: "6068d0a6-b0b7-44af-8dcd-995d728bf03a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.501797 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6068d0a6-b0b7-44af-8dcd-995d728bf03a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6068d0a6-b0b7-44af-8dcd-995d728bf03a" (UID: "6068d0a6-b0b7-44af-8dcd-995d728bf03a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.505525 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-58kms"] Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.506602 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-58kms" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.508418 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.514734 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-58kms"] Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.585250 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-kwmvg\" (UID: \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.585530 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zww24\" (UniqueName: \"kubernetes.io/projected/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-kube-api-access-zww24\") pod \"controller-manager-879f6c89f-kwmvg\" (UID: \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.585611 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-serving-cert\") pod \"controller-manager-879f6c89f-kwmvg\" (UID: \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.585704 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-client-ca\") pod \"controller-manager-879f6c89f-kwmvg\" (UID: \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.585788 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-config\") pod \"controller-manager-879f6c89f-kwmvg\" (UID: \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.585890 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.585980 4972 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6068d0a6-b0b7-44af-8dcd-995d728bf03a-client-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.586041 4972 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6068d0a6-b0b7-44af-8dcd-995d728bf03a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.586113 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6068d0a6-b0b7-44af-8dcd-995d728bf03a-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.586170 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drsg6\" (UniqueName: \"kubernetes.io/projected/6068d0a6-b0b7-44af-8dcd-995d728bf03a-kube-api-access-drsg6\") on node \"crc\" DevicePath \"\"" Nov 21 09:43:42 crc kubenswrapper[4972]: E1121 09:43:42.586192 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:43.086173368 +0000 UTC m=+168.195315926 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.586237 4972 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6068d0a6-b0b7-44af-8dcd-995d728bf03a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.586650 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-kwmvg\" (UID: \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.586727 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-client-ca\") pod \"controller-manager-879f6c89f-kwmvg\" (UID: \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.587225 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-config\") pod \"controller-manager-879f6c89f-kwmvg\" (UID: \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.589600 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-serving-cert\") pod \"controller-manager-879f6c89f-kwmvg\" (UID: \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.605063 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zww24\" (UniqueName: \"kubernetes.io/projected/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-kube-api-access-zww24\") pod \"controller-manager-879f6c89f-kwmvg\" (UID: \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.630734 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:42 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:42 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:42 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.630796 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.688288 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.688603 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42ww7\" (UniqueName: \"kubernetes.io/projected/882787b1-4df4-446b-972f-8a07c4eb5782-kube-api-access-42ww7\") pod \"redhat-operators-58kms\" (UID: \"882787b1-4df4-446b-972f-8a07c4eb5782\") " pod="openshift-marketplace/redhat-operators-58kms" Nov 21 09:43:42 crc kubenswrapper[4972]: E1121 09:43:42.688675 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:43.188658308 +0000 UTC m=+168.297800796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.688716 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/882787b1-4df4-446b-972f-8a07c4eb5782-utilities\") pod \"redhat-operators-58kms\" (UID: \"882787b1-4df4-446b-972f-8a07c4eb5782\") " pod="openshift-marketplace/redhat-operators-58kms" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.688770 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/882787b1-4df4-446b-972f-8a07c4eb5782-catalog-content\") pod \"redhat-operators-58kms\" (UID: \"882787b1-4df4-446b-972f-8a07c4eb5782\") " pod="openshift-marketplace/redhat-operators-58kms" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.688970 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:42 crc kubenswrapper[4972]: E1121 09:43:42.689348 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:43.189322457 +0000 UTC m=+168.298464955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.739359 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.790795 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:42 crc kubenswrapper[4972]: E1121 09:43:42.791113 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:43.291091787 +0000 UTC m=+168.400234285 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.791356 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/882787b1-4df4-446b-972f-8a07c4eb5782-utilities\") pod \"redhat-operators-58kms\" (UID: \"882787b1-4df4-446b-972f-8a07c4eb5782\") " pod="openshift-marketplace/redhat-operators-58kms" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.791467 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/882787b1-4df4-446b-972f-8a07c4eb5782-catalog-content\") pod \"redhat-operators-58kms\" (UID: \"882787b1-4df4-446b-972f-8a07c4eb5782\") " pod="openshift-marketplace/redhat-operators-58kms" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.791569 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.791659 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42ww7\" (UniqueName: \"kubernetes.io/projected/882787b1-4df4-446b-972f-8a07c4eb5782-kube-api-access-42ww7\") pod \"redhat-operators-58kms\" (UID: \"882787b1-4df4-446b-972f-8a07c4eb5782\") " pod="openshift-marketplace/redhat-operators-58kms" Nov 21 09:43:42 crc 
kubenswrapper[4972]: I1121 09:43:42.791983 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/882787b1-4df4-446b-972f-8a07c4eb5782-catalog-content\") pod \"redhat-operators-58kms\" (UID: \"882787b1-4df4-446b-972f-8a07c4eb5782\") " pod="openshift-marketplace/redhat-operators-58kms" Nov 21 09:43:42 crc kubenswrapper[4972]: E1121 09:43:42.792049 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:43.292030613 +0000 UTC m=+168.401173111 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.792064 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/882787b1-4df4-446b-972f-8a07c4eb5782-utilities\") pod \"redhat-operators-58kms\" (UID: \"882787b1-4df4-446b-972f-8a07c4eb5782\") " pod="openshift-marketplace/redhat-operators-58kms" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.810812 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42ww7\" (UniqueName: \"kubernetes.io/projected/882787b1-4df4-446b-972f-8a07c4eb5782-kube-api-access-42ww7\") pod \"redhat-operators-58kms\" (UID: \"882787b1-4df4-446b-972f-8a07c4eb5782\") " pod="openshift-marketplace/redhat-operators-58kms" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.892700 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:42 crc kubenswrapper[4972]: E1121 09:43:42.893247 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:43.393226617 +0000 UTC m=+168.502369135 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.903384 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sqvm8"] Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.905652 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sqvm8" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.924942 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sqvm8"] Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.992131 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-58kms" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.994876 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s7ml\" (UniqueName: \"kubernetes.io/projected/9a2865e3-5706-4a03-8529-571895dde1ea-kube-api-access-7s7ml\") pod \"redhat-operators-sqvm8\" (UID: \"9a2865e3-5706-4a03-8529-571895dde1ea\") " pod="openshift-marketplace/redhat-operators-sqvm8" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.994942 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a2865e3-5706-4a03-8529-571895dde1ea-utilities\") pod \"redhat-operators-sqvm8\" (UID: \"9a2865e3-5706-4a03-8529-571895dde1ea\") " pod="openshift-marketplace/redhat-operators-sqvm8" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.995112 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:42 crc kubenswrapper[4972]: I1121 09:43:42.995156 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a2865e3-5706-4a03-8529-571895dde1ea-catalog-content\") pod \"redhat-operators-sqvm8\" (UID: \"9a2865e3-5706-4a03-8529-571895dde1ea\") " pod="openshift-marketplace/redhat-operators-sqvm8" Nov 21 09:43:42 crc kubenswrapper[4972]: E1121 09:43:42.995538 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:43.495522601 +0000 UTC m=+168.604665099 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.008478 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kwmvg"] Nov 21 09:43:43 crc kubenswrapper[4972]: W1121 09:43:43.021112 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6ff0ba3_a662_4497_a3f1_70ea785beb6e.slice/crio-9e14cda0b3f6af96548d29be5ced0c27311ed48f51f249dda8da4cfdd47b2a28 WatchSource:0}: Error finding container 9e14cda0b3f6af96548d29be5ced0c27311ed48f51f249dda8da4cfdd47b2a28: Status 404 returned error can't find the container with id 9e14cda0b3f6af96548d29be5ced0c27311ed48f51f249dda8da4cfdd47b2a28 Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.070442 4972 generic.go:334] "Generic (PLEG): container finished" podID="5fc91391-3c93-4fe0-9c24-f8aad9c21fd2" containerID="c770f2b39614924c55c37a5e6f1314439f648f8b6a36680aa10924ad5d983fba" exitCode=0 Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.070509 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc" event={"ID":"5fc91391-3c93-4fe0-9c24-f8aad9c21fd2","Type":"ContainerDied","Data":"c770f2b39614924c55c37a5e6f1314439f648f8b6a36680aa10924ad5d983fba"} Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.072134 4972 generic.go:334] "Generic (PLEG): container finished" podID="1b43815a-969e-432e-ac57-843bee51860c" containerID="b4d0280b35c91aebc5337859ed711f8af1c27f1505945c5773fad8e46ccf98fb" exitCode=0 Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.072243 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rc758" event={"ID":"1b43815a-969e-432e-ac57-843bee51860c","Type":"ContainerDied","Data":"b4d0280b35c91aebc5337859ed711f8af1c27f1505945c5773fad8e46ccf98fb"} Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.076580 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" event={"ID":"c6ff0ba3-a662-4497-a3f1-70ea785beb6e","Type":"ContainerStarted","Data":"9e14cda0b3f6af96548d29be5ced0c27311ed48f51f249dda8da4cfdd47b2a28"} Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.078181 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" event={"ID":"df5e96f4-727c-44c1-8e2f-e624c912430b","Type":"ContainerStarted","Data":"3baac24de0904f6ef3b1bb5afa66c110f87cb13ac96151cc443a867a1cab6c69"} Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.086183 4972 generic.go:334] "Generic (PLEG): container finished" podID="6136a605-ff46-4462-808b-cc8d2c28faea" containerID="ef8104784c6b32be85ee29952b39fb16f9ed8f2029c963adf7df2aa8818736bf" exitCode=0 Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.086287 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pnsjx" 
event={"ID":"6136a605-ff46-4462-808b-cc8d2c28faea","Type":"ContainerDied","Data":"ef8104784c6b32be85ee29952b39fb16f9ed8f2029c963adf7df2aa8818736bf"} Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.092786 4972 generic.go:334] "Generic (PLEG): container finished" podID="13ef553c-f6bd-4af2-9c0e-643cd14f9290" containerID="d352d776ec765c02e42de25bbda5bb47ea680ea3a972d677d2599215a29d098f" exitCode=0 Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.092858 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4t9l" event={"ID":"13ef553c-f6bd-4af2-9c0e-643cd14f9290","Type":"ContainerDied","Data":"d352d776ec765c02e42de25bbda5bb47ea680ea3a972d677d2599215a29d098f"} Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.094411 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" event={"ID":"6068d0a6-b0b7-44af-8dcd-995d728bf03a","Type":"ContainerDied","Data":"6dac65444381615580d7c5c2a13c1b28392da59ffef9205b56f9dbd137b7e792"} Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.094442 4972 scope.go:117] "RemoveContainer" containerID="d8439ab6c9eadcb01eea65f5c542deb3144152721c1f235d81724e1315c87d87" Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.094564 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-n6wh5" Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.096331 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.097953 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.098116 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s7ml\" (UniqueName: \"kubernetes.io/projected/9a2865e3-5706-4a03-8529-571895dde1ea-kube-api-access-7s7ml\") pod \"redhat-operators-sqvm8\" (UID: \"9a2865e3-5706-4a03-8529-571895dde1ea\") " pod="openshift-marketplace/redhat-operators-sqvm8" Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.098137 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a2865e3-5706-4a03-8529-571895dde1ea-utilities\") pod \"redhat-operators-sqvm8\" (UID: \"9a2865e3-5706-4a03-8529-571895dde1ea\") " pod="openshift-marketplace/redhat-operators-sqvm8" Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.098277 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a2865e3-5706-4a03-8529-571895dde1ea-catalog-content\") pod \"redhat-operators-sqvm8\" (UID: \"9a2865e3-5706-4a03-8529-571895dde1ea\") " pod="openshift-marketplace/redhat-operators-sqvm8" Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.098706 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a2865e3-5706-4a03-8529-571895dde1ea-catalog-content\") pod \"redhat-operators-sqvm8\" (UID: \"9a2865e3-5706-4a03-8529-571895dde1ea\") " 
pod="openshift-marketplace/redhat-operators-sqvm8" Nov 21 09:43:43 crc kubenswrapper[4972]: E1121 09:43:43.098821 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:43.598799803 +0000 UTC m=+168.707942391 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.098935 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a2865e3-5706-4a03-8529-571895dde1ea-utilities\") pod \"redhat-operators-sqvm8\" (UID: \"9a2865e3-5706-4a03-8529-571895dde1ea\") " pod="openshift-marketplace/redhat-operators-sqvm8" Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.107238 4972 generic.go:334] "Generic (PLEG): container finished" podID="6e0ba187-0ec6-40e7-bd83-771510a29a5b" containerID="156961e40bf26f82a583f0e2dcaee19f8282962279c498b17d1ce0543b0c5ae2" exitCode=0 Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.107296 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqggx" event={"ID":"6e0ba187-0ec6-40e7-bd83-771510a29a5b","Type":"ContainerDied","Data":"156961e40bf26f82a583f0e2dcaee19f8282962279c498b17d1ce0543b0c5ae2"} Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.113225 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ccfh" event={"ID":"49b866e2-c40e-4b45-acfc-965161cabf5c","Type":"ContainerStarted","Data":"6302ea3281d3d0b5d70d9fde039d97793a814ff243cfc2b3be22f19a74ed6e6f"} Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.120977 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s7ml\" (UniqueName: \"kubernetes.io/projected/9a2865e3-5706-4a03-8529-571895dde1ea-kube-api-access-7s7ml\") pod \"redhat-operators-sqvm8\" (UID: \"9a2865e3-5706-4a03-8529-571895dde1ea\") " pod="openshift-marketplace/redhat-operators-sqvm8" Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.151830 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-n6wh5"] Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.154537 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-n6wh5"] Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.175614 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.175596124 podStartE2EDuration="4.175596124s" podCreationTimestamp="2025-11-21 09:43:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:43:43.174153073 +0000 UTC m=+168.283295581" watchObservedRunningTime="2025-11-21 09:43:43.175596124 +0000 UTC m=+168.284738622" Nov 21 09:43:43 crc 
kubenswrapper[4972]: I1121 09:43:43.200322 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:43 crc kubenswrapper[4972]: E1121 09:43:43.200837 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:43.70082082 +0000 UTC m=+168.809963318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.202731 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-58kms"] Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.229657 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sqvm8" Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.301813 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:43 crc kubenswrapper[4972]: E1121 09:43:43.301937 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:43.801916531 +0000 UTC m=+168.911059039 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.302074 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:43 crc kubenswrapper[4972]: E1121 09:43:43.302422 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-21 09:43:43.802413015 +0000 UTC m=+168.911555523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.403108 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:43 crc kubenswrapper[4972]: E1121 09:43:43.403330 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:43.903299279 +0000 UTC m=+169.012441777 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.403488 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:43 crc kubenswrapper[4972]: E1121 09:43:43.403880 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:43.903864495 +0000 UTC m=+169.013007063 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.509314 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:43 crc kubenswrapper[4972]: E1121 09:43:43.509672 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:44.009651778 +0000 UTC m=+169.118794286 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.509752 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:43 crc kubenswrapper[4972]: E1121 09:43:43.510125 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:44.010116971 +0000 UTC m=+169.119259469 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.610410 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:43 crc kubenswrapper[4972]: E1121 09:43:43.610565 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:44.110537482 +0000 UTC m=+169.219679980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.611026 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:43 crc kubenswrapper[4972]: E1121 09:43:43.611790 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:44.111748717 +0000 UTC m=+169.220891255 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.632931 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sqvm8"] Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.633024 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:43 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:43 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:43 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.633069 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:43 crc kubenswrapper[4972]: W1121 09:43:43.656685 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a2865e3_5706_4a03_8529_571895dde1ea.slice/crio-db40660ad04f730b7f71ffebbf5e2fedc8b43c8930dbb70a3627449133e1743c WatchSource:0}: Error finding container db40660ad04f730b7f71ffebbf5e2fedc8b43c8930dbb70a3627449133e1743c: Status 404 returned error can't find the container with id db40660ad04f730b7f71ffebbf5e2fedc8b43c8930dbb70a3627449133e1743c Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.712034 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:43 crc kubenswrapper[4972]: E1121 09:43:43.712567 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:44.212531418 +0000 UTC m=+169.321673916 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.724537 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:43 crc kubenswrapper[4972]: E1121 09:43:43.725265 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:44.225251419 +0000 UTC m=+169.334393917 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.766054 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6068d0a6-b0b7-44af-8dcd-995d728bf03a" path="/var/lib/kubelet/pods/6068d0a6-b0b7-44af-8dcd-995d728bf03a/volumes" Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.825466 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:43 crc kubenswrapper[4972]: E1121 09:43:43.825688 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:44.32566097 +0000 UTC m=+169.434803468 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.825782 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:43 crc kubenswrapper[4972]: E1121 09:43:43.826123 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:44.326116083 +0000 UTC m=+169.435258581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.927034 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:43 crc kubenswrapper[4972]: E1121 09:43:43.927190 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:44.427172822 +0000 UTC m=+169.536315310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:43 crc kubenswrapper[4972]: I1121 09:43:43.927432 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:43 crc kubenswrapper[4972]: E1121 09:43:43.927804 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:44.42779437 +0000 UTC m=+169.536936868 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.029295 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:44 crc kubenswrapper[4972]: E1121 09:43:44.029507 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:44.529477527 +0000 UTC m=+169.638620025 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.030065 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:44 crc kubenswrapper[4972]: E1121 09:43:44.030439 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:44.530430924 +0000 UTC m=+169.639573422 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.123050 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58kms" event={"ID":"882787b1-4df4-446b-972f-8a07c4eb5782","Type":"ContainerStarted","Data":"55e180bff46e973ca6c027af5df8d3e9866e9d67c7aa098a0a74452d84545816"} Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.125667 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqvm8" event={"ID":"9a2865e3-5706-4a03-8529-571895dde1ea","Type":"ContainerStarted","Data":"db40660ad04f730b7f71ffebbf5e2fedc8b43c8930dbb70a3627449133e1743c"} Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.127448 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.130820 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:44 crc kubenswrapper[4972]: E1121 09:43:44.130956 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:44.630930848 +0000 UTC m=+169.740073346 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.131123 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:44 crc kubenswrapper[4972]: E1121 09:43:44.131439 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:44.631427292 +0000 UTC m=+169.740569790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.232070 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:44 crc kubenswrapper[4972]: E1121 09:43:44.232310 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:44.732274035 +0000 UTC m=+169.841416553 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.232391 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:44 crc kubenswrapper[4972]: E1121 09:43:44.232734 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:44.732719208 +0000 UTC m=+169.841861796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.334134 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:44 crc kubenswrapper[4972]: E1121 09:43:44.334285 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:44.834262291 +0000 UTC m=+169.943404789 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.334477 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:44 crc kubenswrapper[4972]: E1121 09:43:44.334804 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:44.834796916 +0000 UTC m=+169.943939414 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.368460 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc" Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.435248 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5fc91391-3c93-4fe0-9c24-f8aad9c21fd2-secret-volume\") pod \"5fc91391-3c93-4fe0-9c24-f8aad9c21fd2\" (UID: \"5fc91391-3c93-4fe0-9c24-f8aad9c21fd2\") " Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.435364 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:44 crc kubenswrapper[4972]: E1121 09:43:44.435451 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:44.935437084 +0000 UTC m=+170.044579592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.435469 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fc91391-3c93-4fe0-9c24-f8aad9c21fd2-config-volume\") pod \"5fc91391-3c93-4fe0-9c24-f8aad9c21fd2\" (UID: \"5fc91391-3c93-4fe0-9c24-f8aad9c21fd2\") " Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.435519 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w28fs\" (UniqueName: \"kubernetes.io/projected/5fc91391-3c93-4fe0-9c24-f8aad9c21fd2-kube-api-access-w28fs\") pod \"5fc91391-3c93-4fe0-9c24-f8aad9c21fd2\" (UID: \"5fc91391-3c93-4fe0-9c24-f8aad9c21fd2\") " Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.435628 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:44 crc kubenswrapper[4972]: E1121 09:43:44.435981 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:44.935972369 +0000 UTC m=+170.045114867 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.436451 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fc91391-3c93-4fe0-9c24-f8aad9c21fd2-config-volume" (OuterVolumeSpecName: "config-volume") pod "5fc91391-3c93-4fe0-9c24-f8aad9c21fd2" (UID: "5fc91391-3c93-4fe0-9c24-f8aad9c21fd2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.440423 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc91391-3c93-4fe0-9c24-f8aad9c21fd2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5fc91391-3c93-4fe0-9c24-f8aad9c21fd2" (UID: "5fc91391-3c93-4fe0-9c24-f8aad9c21fd2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.440764 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fc91391-3c93-4fe0-9c24-f8aad9c21fd2-kube-api-access-w28fs" (OuterVolumeSpecName: "kube-api-access-w28fs") pod "5fc91391-3c93-4fe0-9c24-f8aad9c21fd2" (UID: "5fc91391-3c93-4fe0-9c24-f8aad9c21fd2"). InnerVolumeSpecName "kube-api-access-w28fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.536479 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:44 crc kubenswrapper[4972]: E1121 09:43:44.536659 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:45.036632117 +0000 UTC m=+170.145774615 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.537629 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.537706 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w28fs\" (UniqueName: \"kubernetes.io/projected/5fc91391-3c93-4fe0-9c24-f8aad9c21fd2-kube-api-access-w28fs\") on node \"crc\" DevicePath \"\"" Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.537722 4972 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5fc91391-3c93-4fe0-9c24-f8aad9c21fd2-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.537736 4972 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fc91391-3c93-4fe0-9c24-f8aad9c21fd2-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 09:43:44 crc kubenswrapper[4972]: E1121 09:43:44.538072 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:45.038059447 +0000 UTC m=+170.147201995 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.629230 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:44 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:44 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:44 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.629318 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.639256 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:44 crc kubenswrapper[4972]: E1121 09:43:44.639466 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:45.139437386 +0000 UTC m=+170.248579894 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.639625 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:44 crc kubenswrapper[4972]: E1121 09:43:44.639987 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:45.139971031 +0000 UTC m=+170.249113529 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.740456 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:44 crc kubenswrapper[4972]: E1121 09:43:44.740700 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:45.240684331 +0000 UTC m=+170.349826829 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.841863 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:44 crc kubenswrapper[4972]: E1121 09:43:44.842332 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:45.342314556 +0000 UTC m=+170.451457054 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.942803 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:44 crc kubenswrapper[4972]: E1121 09:43:44.942988 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:45.442960724 +0000 UTC m=+170.552103222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:44 crc kubenswrapper[4972]: I1121 09:43:44.943109 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:44 crc kubenswrapper[4972]: E1121 09:43:44.943381 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:45.443373746 +0000 UTC m=+170.552516244 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.039245 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.043893 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:45 crc kubenswrapper[4972]: E1121 09:43:45.044067 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:45.544044474 +0000 UTC m=+170.653186982 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.044257 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:45 crc kubenswrapper[4972]: E1121 09:43:45.044585 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:45.544569869 +0000 UTC m=+170.653712377 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.048582 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xxsnz" Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.144747 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc" event={"ID":"5fc91391-3c93-4fe0-9c24-f8aad9c21fd2","Type":"ContainerDied","Data":"aca76d3f479448d1c69c197124947fcb916ce62a15d42ee55b3290ecd732ea52"} Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.144827 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aca76d3f479448d1c69c197124947fcb916ce62a15d42ee55b3290ecd732ea52" Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.144772 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc" Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.152313 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:45 crc kubenswrapper[4972]: E1121 09:43:45.152679 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:45.652651257 +0000 UTC m=+170.761793755 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.153105 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:45 crc kubenswrapper[4972]: E1121 09:43:45.153977 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:45.653954274 +0000 UTC m=+170.763096962 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.255303 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:45 crc kubenswrapper[4972]: E1121 09:43:45.255648 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:45.755626291 +0000 UTC m=+170.864768799 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.256136 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:45 crc kubenswrapper[4972]: E1121 09:43:45.256943 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:45.756918158 +0000 UTC m=+170.866060656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.358233 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:45 crc kubenswrapper[4972]: E1121 09:43:45.358484 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:45.85843589 +0000 UTC m=+170.967578388 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.358568 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:45 crc kubenswrapper[4972]: E1121 09:43:45.359368 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:45.859355876 +0000 UTC m=+170.968498574 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.460124 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:45 crc kubenswrapper[4972]: E1121 09:43:45.460294 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:45.960261051 +0000 UTC m=+171.069403559 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.460439 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:45 crc kubenswrapper[4972]: E1121 09:43:45.460722 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:45.960710304 +0000 UTC m=+171.069852802 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.562139 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:45 crc kubenswrapper[4972]: E1121 09:43:45.562306 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:46.062281358 +0000 UTC m=+171.171423856 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.562354 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:45 crc kubenswrapper[4972]: E1121 09:43:45.562621 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:46.062614667 +0000 UTC m=+171.171757165 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.629813 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:45 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:45 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:45 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.629888 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.663942 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:45 crc kubenswrapper[4972]: E1121 09:43:45.664230 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:46.164190202 +0000 UTC m=+171.273332740 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.664551 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:45 crc kubenswrapper[4972]: E1121 09:43:45.664812 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:46.164797559 +0000 UTC m=+171.273940107 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.765285 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:45 crc kubenswrapper[4972]: E1121 09:43:45.765654 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:46.265639072 +0000 UTC m=+171.374781580 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.867759 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:45 crc kubenswrapper[4972]: E1121 09:43:45.868300 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:46.368272626 +0000 UTC m=+171.477415134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.943039 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-77wn4" Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.969603 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:45 crc kubenswrapper[4972]: E1121 09:43:45.969861 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:46.469804979 +0000 UTC m=+171.578947487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:45 crc kubenswrapper[4972]: I1121 09:43:45.969985 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:45 crc kubenswrapper[4972]: E1121 09:43:45.970438 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:46.470426627 +0000 UTC m=+171.579569135 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:46 crc kubenswrapper[4972]: I1121 09:43:46.071518 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:46 crc kubenswrapper[4972]: E1121 09:43:46.071806 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:46.571792255 +0000 UTC m=+171.680934753 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:46 crc kubenswrapper[4972]: I1121 09:43:46.173030 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:46 crc kubenswrapper[4972]: E1121 09:43:46.173403 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:46.673386079 +0000 UTC m=+171.782528577 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:46 crc kubenswrapper[4972]: I1121 09:43:46.273975 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:46 crc kubenswrapper[4972]: E1121 09:43:46.274429 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:46.774408097 +0000 UTC m=+171.883550595 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:46 crc kubenswrapper[4972]: I1121 09:43:46.375311 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:46 crc kubenswrapper[4972]: E1121 09:43:46.375757 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:46.875738915 +0000 UTC m=+171.984881413 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:46 crc kubenswrapper[4972]: I1121 09:43:46.476226 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:46 crc kubenswrapper[4972]: E1121 09:43:46.476430 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:46.976403873 +0000 UTC m=+172.085546371 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:46 crc kubenswrapper[4972]: I1121 09:43:46.476636 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:46 crc kubenswrapper[4972]: E1121 09:43:46.477051 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:46.977042431 +0000 UTC m=+172.086184929 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:46 crc kubenswrapper[4972]: I1121 09:43:46.578060 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:46 crc kubenswrapper[4972]: E1121 09:43:46.578221 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:47.078199433 +0000 UTC m=+172.187341931 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:46 crc kubenswrapper[4972]: I1121 09:43:46.578580 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:46 crc kubenswrapper[4972]: E1121 09:43:46.578933 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:47.078924374 +0000 UTC m=+172.188066872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:46 crc kubenswrapper[4972]: I1121 09:43:46.629844 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:46 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:46 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:46 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:46 crc kubenswrapper[4972]: I1121 09:43:46.629912 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:46 crc kubenswrapper[4972]: I1121 09:43:46.680225 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:46 crc kubenswrapper[4972]: E1121 09:43:46.680444 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:47.180415145 +0000 UTC m=+172.289557643 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:46 crc kubenswrapper[4972]: I1121 09:43:46.680644 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:46 crc kubenswrapper[4972]: E1121 09:43:46.681069 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:47.181057653 +0000 UTC m=+172.290200241 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:46 crc kubenswrapper[4972]: I1121 09:43:46.782103 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:46 crc kubenswrapper[4972]: E1121 09:43:46.782859 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:47.282820043 +0000 UTC m=+172.391962551 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:46 crc kubenswrapper[4972]: I1121 09:43:46.884049 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:46 crc kubenswrapper[4972]: E1121 09:43:46.884438 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:47.384421728 +0000 UTC m=+172.493564226 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:46 crc kubenswrapper[4972]: I1121 09:43:46.986768 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:46 crc kubenswrapper[4972]: E1121 09:43:46.986987 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:47.486950579 +0000 UTC m=+172.596093097 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:46 crc kubenswrapper[4972]: I1121 09:43:46.987444 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:46 crc kubenswrapper[4972]: E1121 09:43:46.987891 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:47.487876355 +0000 UTC m=+172.597018923 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:47 crc kubenswrapper[4972]: I1121 09:43:47.091817 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:47 crc kubenswrapper[4972]: E1121 09:43:47.092198 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:47.592152565 +0000 UTC m=+172.701295103 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:47 crc kubenswrapper[4972]: I1121 09:43:47.092533 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:47 crc kubenswrapper[4972]: E1121 09:43:47.092997 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:47.592981348 +0000 UTC m=+172.702123876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:47 crc kubenswrapper[4972]: I1121 09:43:47.193523 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:47 crc kubenswrapper[4972]: E1121 09:43:47.193941 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:47.693922124 +0000 UTC m=+172.803064622 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:47 crc kubenswrapper[4972]: I1121 09:43:47.295390 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:47 crc kubenswrapper[4972]: E1121 09:43:47.295760 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:47.795747885 +0000 UTC m=+172.904890383 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:47 crc kubenswrapper[4972]: I1121 09:43:47.396409 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:47 crc kubenswrapper[4972]: E1121 09:43:47.396691 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:47.89665628 +0000 UTC m=+173.005798778 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:47 crc kubenswrapper[4972]: I1121 09:43:47.396770 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:47 crc kubenswrapper[4972]: E1121 09:43:47.397302 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:47.897286848 +0000 UTC m=+173.006429346 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:47 crc kubenswrapper[4972]: I1121 09:43:47.498960 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:47 crc kubenswrapper[4972]: E1121 09:43:47.499171 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:47.99914375 +0000 UTC m=+173.108286248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:47 crc kubenswrapper[4972]: I1121 09:43:47.499630 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:47 crc kubenswrapper[4972]: E1121 09:43:47.500123 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:48.000098557 +0000 UTC m=+173.109241085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:47 crc kubenswrapper[4972]: I1121 09:43:47.601144 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:47 crc kubenswrapper[4972]: E1121 09:43:47.601618 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:48.101599449 +0000 UTC m=+173.210741947 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:47 crc kubenswrapper[4972]: I1121 09:43:47.629446 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:47 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:47 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:47 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:47 crc kubenswrapper[4972]: I1121 09:43:47.629517 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:47 crc kubenswrapper[4972]: I1121 09:43:47.703521 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:47 crc kubenswrapper[4972]: E1121 09:43:47.703964 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:48.203945035 +0000 UTC m=+173.313087613 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:47 crc kubenswrapper[4972]: I1121 09:43:47.804586 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:47 crc kubenswrapper[4972]: E1121 09:43:47.804772 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:48.304745367 +0000 UTC m=+173.413887885 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:47 crc kubenswrapper[4972]: I1121 09:43:47.805587 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:47 crc kubenswrapper[4972]: E1121 09:43:47.805963 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:48.305946711 +0000 UTC m=+173.415089209 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:47 crc kubenswrapper[4972]: I1121 09:43:47.906045 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:47 crc kubenswrapper[4972]: E1121 09:43:47.906573 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:48.406549668 +0000 UTC m=+173.515692166 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:48 crc kubenswrapper[4972]: I1121 09:43:48.009394 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:48 crc kubenswrapper[4972]: E1121 09:43:48.009823 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:48.5098103 +0000 UTC m=+173.618952798 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:48 crc kubenswrapper[4972]: I1121 09:43:48.110411 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:48 crc kubenswrapper[4972]: E1121 09:43:48.110584 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:48.61055042 +0000 UTC m=+173.719692918 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:48 crc kubenswrapper[4972]: I1121 09:43:48.111088 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:48 crc kubenswrapper[4972]: E1121 09:43:48.111391 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:48.611381834 +0000 UTC m=+173.720524422 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:48 crc kubenswrapper[4972]: I1121 09:43:48.212195 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:48 crc kubenswrapper[4972]: E1121 09:43:48.212322 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:48.712299979 +0000 UTC m=+173.821442487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:48 crc kubenswrapper[4972]: I1121 09:43:48.212526 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:48 crc kubenswrapper[4972]: E1121 09:43:48.212804 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:48.712794443 +0000 UTC m=+173.821936941 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:48 crc kubenswrapper[4972]: I1121 09:43:48.313638 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:48 crc kubenswrapper[4972]: E1121 09:43:48.313815 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:48.813753009 +0000 UTC m=+173.922895507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:48 crc kubenswrapper[4972]: I1121 09:43:48.313872 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:48 crc kubenswrapper[4972]: E1121 09:43:48.314205 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:48.814195432 +0000 UTC m=+173.923337930 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:48 crc kubenswrapper[4972]: I1121 09:43:48.415687 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:48 crc kubenswrapper[4972]: E1121 09:43:48.416163 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:48.916124836 +0000 UTC m=+174.025267344 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:48 crc kubenswrapper[4972]: I1121 09:43:48.517892 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:48 crc kubenswrapper[4972]: E1121 09:43:48.518227 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:49.018214585 +0000 UTC m=+174.127357083 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:48 crc kubenswrapper[4972]: I1121 09:43:48.619801 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:48 crc kubenswrapper[4972]: E1121 09:43:48.619980 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:49.119952423 +0000 UTC m=+174.229094921 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:48 crc kubenswrapper[4972]: I1121 09:43:48.620161 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:48 crc kubenswrapper[4972]: E1121 09:43:48.620497 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:49.120490509 +0000 UTC m=+174.229633007 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:48 crc kubenswrapper[4972]: I1121 09:43:48.629683 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:48 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:48 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:48 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:48 crc kubenswrapper[4972]: I1121 09:43:48.629742 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:48 crc kubenswrapper[4972]: I1121 09:43:48.721688 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:48 crc kubenswrapper[4972]: E1121 09:43:48.721884 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:49.221853507 +0000 UTC m=+174.330996005 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:48 crc kubenswrapper[4972]: I1121 09:43:48.722096 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:48 crc kubenswrapper[4972]: E1121 09:43:48.722529 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:49.222517826 +0000 UTC m=+174.331660424 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:48 crc kubenswrapper[4972]: I1121 09:43:48.823683 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:48 crc kubenswrapper[4972]: E1121 09:43:48.823931 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:49.323889444 +0000 UTC m=+174.433031942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:48 crc kubenswrapper[4972]: I1121 09:43:48.824047 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:48 crc kubenswrapper[4972]: E1121 09:43:48.824477 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:49.32446396 +0000 UTC m=+174.433606508 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:48 crc kubenswrapper[4972]: I1121 09:43:48.925221 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:48 crc kubenswrapper[4972]: E1121 09:43:48.925428 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:49.425400546 +0000 UTC m=+174.534543044 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:48 crc kubenswrapper[4972]: I1121 09:43:48.925685 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:48 crc kubenswrapper[4972]: E1121 09:43:48.926146 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:49.426128677 +0000 UTC m=+174.535271175 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.027348 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:49 crc kubenswrapper[4972]: E1121 09:43:49.027598 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:49.527571307 +0000 UTC m=+174.636713805 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.027857 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:49 crc kubenswrapper[4972]: E1121 09:43:49.028250 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:49.528232156 +0000 UTC m=+174.637374644 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.128915 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:49 crc kubenswrapper[4972]: E1121 09:43:49.129088 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:49.629059329 +0000 UTC m=+174.738201837 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.129141 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:49 crc kubenswrapper[4972]: E1121 09:43:49.129478 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:49.62946481 +0000 UTC m=+174.738607358 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.230939 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:49 crc kubenswrapper[4972]: E1121 09:43:49.231176 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:49.731134107 +0000 UTC m=+174.840276625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.231614 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:49 crc kubenswrapper[4972]: E1121 09:43:49.232193 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:49.732167826 +0000 UTC m=+174.841310364 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.332684 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:49 crc kubenswrapper[4972]: E1121 09:43:49.332905 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:49.832868355 +0000 UTC m=+174.942010873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.333321 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:49 crc kubenswrapper[4972]: E1121 09:43:49.333699 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:49.833684548 +0000 UTC m=+174.942827056 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.435007 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:49 crc kubenswrapper[4972]: E1121 09:43:49.435211 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:49.93518471 +0000 UTC m=+175.044327208 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.435351 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:49 crc kubenswrapper[4972]: E1121 09:43:49.435710 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:49.935673014 +0000 UTC m=+175.044815512 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.536066 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:49 crc kubenswrapper[4972]: E1121 09:43:49.536386 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:50.036325682 +0000 UTC m=+175.145468210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.536692 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:49 crc kubenswrapper[4972]: E1121 09:43:49.537077 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:50.037064643 +0000 UTC m=+175.146207141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.629477 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:49 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:49 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:49 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.629564 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.638034 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:49 crc kubenswrapper[4972]: E1121 09:43:49.638173 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:50.138147133 +0000 UTC m=+175.247289631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.638297 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:49 crc kubenswrapper[4972]: E1121 09:43:49.638576 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:50.138568085 +0000 UTC m=+175.247710583 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.739197 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:49 crc kubenswrapper[4972]: E1121 09:43:49.739359 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:50.239338816 +0000 UTC m=+175.348481324 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.740014 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:49 crc kubenswrapper[4972]: E1121 09:43:49.740516 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:50.240478998 +0000 UTC m=+175.349621526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.841659 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:49 crc kubenswrapper[4972]: E1121 09:43:49.841893 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:50.341861657 +0000 UTC m=+175.451004155 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.842252 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:49 crc kubenswrapper[4972]: E1121 09:43:49.842645 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:50.342635899 +0000 UTC m=+175.451778397 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.943757 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:49 crc kubenswrapper[4972]: E1121 09:43:49.943961 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:50.443932015 +0000 UTC m=+175.553074523 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:49 crc kubenswrapper[4972]: I1121 09:43:49.944199 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:49 crc kubenswrapper[4972]: E1121 09:43:49.944555 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:50.444542232 +0000 UTC m=+175.553684740 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.045973 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:50 crc kubenswrapper[4972]: E1121 09:43:50.046210 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:50.546178518 +0000 UTC m=+175.655321066 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.046626 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:50 crc kubenswrapper[4972]: E1121 09:43:50.047042 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:50.547030902 +0000 UTC m=+175.656173400 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.147334 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:50 crc kubenswrapper[4972]: E1121 09:43:50.147554 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:50.647521256 +0000 UTC m=+175.756663764 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.147891 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:50 crc kubenswrapper[4972]: E1121 09:43:50.148260 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:50.648245256 +0000 UTC m=+175.757387764 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.249081 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:50 crc kubenswrapper[4972]: E1121 09:43:50.249404 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:50.749369817 +0000 UTC m=+175.858512325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.249572 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:50 crc kubenswrapper[4972]: E1121 09:43:50.249974 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:50.749957634 +0000 UTC m=+175.859100212 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.351281 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:50 crc kubenswrapper[4972]: E1121 09:43:50.351413 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:50.851396034 +0000 UTC m=+175.960538532 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.351456 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:50 crc kubenswrapper[4972]: E1121 09:43:50.351746 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:50.851737384 +0000 UTC m=+175.960879882 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.452522 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:50 crc kubenswrapper[4972]: E1121 09:43:50.452690 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:50.95266236 +0000 UTC m=+176.061804858 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.452751 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:50 crc kubenswrapper[4972]: E1121 09:43:50.453068 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:50.953057201 +0000 UTC m=+176.062199689 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.554306 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:50 crc kubenswrapper[4972]: E1121 09:43:50.554463 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:51.054441989 +0000 UTC m=+176.163584497 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.554620 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:50 crc kubenswrapper[4972]: E1121 09:43:50.554953 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:51.054943344 +0000 UTC m=+176.164085842 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.612132 4972 patch_prober.go:28] interesting pod/console-f9d7485db-j7xxl container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.612192 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-j7xxl" podUID="7b0e4d64-f901-4a4e-9644-408eb534401e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.630691 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:50 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:50 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:50 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.630776 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.655715 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:50 crc kubenswrapper[4972]: E1121 09:43:50.656296 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:51.156238849 +0000 UTC m=+176.265381377 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.657064 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:50 crc kubenswrapper[4972]: E1121 09:43:50.657452 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:51.157434313 +0000 UTC m=+176.266576811 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.758339 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:50 crc kubenswrapper[4972]: E1121 09:43:50.758435 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:51.25841746 +0000 UTC m=+176.367559958 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.758686 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:50 crc kubenswrapper[4972]: E1121 09:43:50.758990 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:51.258983256 +0000 UTC m=+176.368125744 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.860181 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:50 crc kubenswrapper[4972]: E1121 09:43:50.860394 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:51.360368385 +0000 UTC m=+176.469510883 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.860615 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:50 crc kubenswrapper[4972]: E1121 09:43:50.860926 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:51.36091272 +0000 UTC m=+176.470055218 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.884541 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.884582 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.884772 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.884875 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:43:50 crc kubenswrapper[4972]: I1121 09:43:50.962129 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:50 crc kubenswrapper[4972]: 
E1121 09:43:50.962466 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:51.462452113 +0000 UTC m=+176.571594611 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.063287 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:51 crc kubenswrapper[4972]: E1121 09:43:51.063721 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:51.563699518 +0000 UTC m=+176.672842106 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.164179 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:51 crc kubenswrapper[4972]: E1121 09:43:51.164500 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:51.664460399 +0000 UTC m=+176.773602937 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.164566 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:51 crc kubenswrapper[4972]: E1121 09:43:51.165044 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:51.665025295 +0000 UTC m=+176.774167793 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.265462 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:51 crc kubenswrapper[4972]: E1121 09:43:51.265683 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:51.765652532 +0000 UTC m=+176.874795040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.266024 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:51 crc kubenswrapper[4972]: E1121 09:43:51.266386 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:51.766371152 +0000 UTC m=+176.875513730 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.367072 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:51 crc kubenswrapper[4972]: E1121 09:43:51.367329 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:51.867303208 +0000 UTC m=+176.976445706 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.367592 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:51 crc kubenswrapper[4972]: E1121 09:43:51.367924 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:51.867912475 +0000 UTC m=+176.977054973 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.468593 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:51 crc kubenswrapper[4972]: E1121 09:43:51.468884 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:51.968851061 +0000 UTC m=+177.077993559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.469114 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:51 crc kubenswrapper[4972]: E1121 09:43:51.469512 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:51.96950219 +0000 UTC m=+177.078644778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.569730 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:51 crc kubenswrapper[4972]: E1121 09:43:51.569985 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:52.069954272 +0000 UTC m=+177.179096770 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.570191 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:51 crc kubenswrapper[4972]: E1121 09:43:51.570532 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:52.070518798 +0000 UTC m=+177.179661296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.629317 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:51 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:51 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:51 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.629389 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.671939 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:51 crc kubenswrapper[4972]: E1121 09:43:51.672076 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:52.172056051 +0000 UTC m=+177.281198559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.672457 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:51 crc kubenswrapper[4972]: E1121 09:43:51.672736 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:52.17272574 +0000 UTC m=+177.281868248 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.774369 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:51 crc kubenswrapper[4972]: E1121 09:43:51.774594 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:52.274565021 +0000 UTC m=+177.383707529 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.774963 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:51 crc kubenswrapper[4972]: E1121 09:43:51.775565 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:52.275541189 +0000 UTC m=+177.384683717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.876984 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:51 crc kubenswrapper[4972]: E1121 09:43:51.877152 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:52.377123203 +0000 UTC m=+177.486265711 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.877424 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:51 crc kubenswrapper[4972]: E1121 09:43:51.878060 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:52.378000778 +0000 UTC m=+177.487143276 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.902110 4972 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-w2c2r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.902196 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" podUID="e9280ad8-85ad-4faa-a025-a021e417e522" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.978718 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:51 crc kubenswrapper[4972]: E1121 09:43:51.978951 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:52.478924974 +0000 UTC m=+177.588067472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.983109 4972 patch_prober.go:28] interesting pod/dns-default-77wn4 container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.217.0.20:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 21 09:43:51 crc kubenswrapper[4972]: I1121 09:43:51.983199 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-77wn4" podUID="7939233b-508e-485b-91ea-8b266ba6f829" containerName="dns" probeResult="failure" output="Get \"http://10.217.0.20:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 09:43:52 crc kubenswrapper[4972]: I1121 09:43:52.080823 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:52 crc kubenswrapper[4972]: E1121 09:43:52.081384 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:52.581362932 +0000 UTC m=+177.690505490 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:52 crc kubenswrapper[4972]: I1121 09:43:52.182199 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:52 crc kubenswrapper[4972]: E1121 09:43:52.182368 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:52.682340269 +0000 UTC m=+177.791482777 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:52 crc kubenswrapper[4972]: I1121 09:43:52.182699 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:52 crc kubenswrapper[4972]: E1121 09:43:52.183068 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:52.68305858 +0000 UTC m=+177.792201168 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:52 crc kubenswrapper[4972]: I1121 09:43:52.284445 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:52 crc kubenswrapper[4972]: E1121 09:43:52.284712 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:52.784661635 +0000 UTC m=+177.893804143 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:52 crc kubenswrapper[4972]: I1121 09:43:52.284859 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:52 crc kubenswrapper[4972]: E1121 09:43:52.285415 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:52.785404286 +0000 UTC m=+177.894546794 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:52 crc kubenswrapper[4972]: I1121 09:43:52.385591 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:52 crc kubenswrapper[4972]: E1121 09:43:52.385752 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:52.885732184 +0000 UTC m=+177.994874682 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:52 crc kubenswrapper[4972]: I1121 09:43:52.386106 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:52 crc kubenswrapper[4972]: E1121 09:43:52.386598 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:52.886587549 +0000 UTC m=+177.995730047 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:52 crc kubenswrapper[4972]: I1121 09:43:52.487027 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:52 crc kubenswrapper[4972]: E1121 09:43:52.487185 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:52.987147794 +0000 UTC m=+178.096290352 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:52 crc kubenswrapper[4972]: I1121 09:43:52.487391 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:52 crc kubenswrapper[4972]: E1121 09:43:52.487690 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:52.987680539 +0000 UTC m=+178.096823037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:52 crc kubenswrapper[4972]: I1121 09:43:52.588390 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:52 crc kubenswrapper[4972]: E1121 09:43:52.588615 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:53.088583574 +0000 UTC m=+178.197726082 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:52 crc kubenswrapper[4972]: I1121 09:43:52.588734 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:52 crc kubenswrapper[4972]: E1121 09:43:52.589167 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:53.08915697 +0000 UTC m=+178.198299548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:52 crc kubenswrapper[4972]: I1121 09:43:52.629808 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:52 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:52 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:52 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:52 crc kubenswrapper[4972]: I1121 09:43:52.629933 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:52 crc kubenswrapper[4972]: I1121 09:43:52.689767 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:52 crc kubenswrapper[4972]: E1121 09:43:52.690007 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:53.189976033 +0000 UTC m=+178.299118531 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:52 crc kubenswrapper[4972]: I1121 09:43:52.690357 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:52 crc kubenswrapper[4972]: E1121 09:43:52.690683 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:53.190676212 +0000 UTC m=+178.299818710 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:52 crc kubenswrapper[4972]: I1121 09:43:52.791663 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:52 crc kubenswrapper[4972]: E1121 09:43:52.792132 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:53.292084532 +0000 UTC m=+178.401227100 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:52 crc kubenswrapper[4972]: I1121 09:43:52.792471 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:52 crc kubenswrapper[4972]: E1121 09:43:52.793022 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:53.292930046 +0000 UTC m=+178.402072574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:52 crc kubenswrapper[4972]: I1121 09:43:52.894032 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:52 crc kubenswrapper[4972]: E1121 09:43:52.894367 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:53.394332695 +0000 UTC m=+178.503475183 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:52 crc kubenswrapper[4972]: I1121 09:43:52.895173 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:52 crc kubenswrapper[4972]: E1121 09:43:52.895725 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:53.395698684 +0000 UTC m=+178.504841212 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:52 crc kubenswrapper[4972]: I1121 09:43:52.996213 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:52 crc kubenswrapper[4972]: E1121 09:43:52.996461 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:53.496423784 +0000 UTC m=+178.605566322 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:52 crc kubenswrapper[4972]: I1121 09:43:52.996506 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:52 crc kubenswrapper[4972]: E1121 09:43:52.997015 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:53.49699956 +0000 UTC m=+178.606142088 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:53 crc kubenswrapper[4972]: I1121 09:43:53.097345 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:53 crc kubenswrapper[4972]: E1121 09:43:53.097477 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:53.597452402 +0000 UTC m=+178.706594900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:53 crc kubenswrapper[4972]: I1121 09:43:53.097650 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:53 crc kubenswrapper[4972]: E1121 09:43:53.097950 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:53.597939416 +0000 UTC m=+178.707081914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:53 crc kubenswrapper[4972]: I1121 09:43:53.199100 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:53 crc kubenswrapper[4972]: E1121 09:43:53.199245 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:53.699224132 +0000 UTC m=+178.808366630 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:53 crc kubenswrapper[4972]: I1121 09:43:53.199498 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:53 crc kubenswrapper[4972]: E1121 09:43:53.199802 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:53.699795388 +0000 UTC m=+178.808937886 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:53 crc kubenswrapper[4972]: I1121 09:43:53.301656 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:53 crc kubenswrapper[4972]: E1121 09:43:53.301864 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:53.801822265 +0000 UTC m=+178.910964753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:53 crc kubenswrapper[4972]: I1121 09:43:53.302142 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:53 crc kubenswrapper[4972]: E1121 09:43:53.302441 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:53.802434672 +0000 UTC m=+178.911577170 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:53 crc kubenswrapper[4972]: I1121 09:43:53.403483 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:53 crc kubenswrapper[4972]: E1121 09:43:53.403999 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:53.903957785 +0000 UTC m=+179.013100313 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:53 crc kubenswrapper[4972]: I1121 09:43:53.404158 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:53 crc kubenswrapper[4972]: E1121 09:43:53.404692 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:53.904676335 +0000 UTC m=+179.013818873 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:53 crc kubenswrapper[4972]: I1121 09:43:53.505820 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:53 crc kubenswrapper[4972]: E1121 09:43:53.506116 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:54.006077375 +0000 UTC m=+179.115219883 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:53 crc kubenswrapper[4972]: I1121 09:43:53.506351 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:53 crc kubenswrapper[4972]: E1121 09:43:53.506758 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:54.006748524 +0000 UTC m=+179.115891022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:53 crc kubenswrapper[4972]: I1121 09:43:53.607913 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:53 crc kubenswrapper[4972]: E1121 09:43:53.608261 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:54.108223606 +0000 UTC m=+179.217366134 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:53 crc kubenswrapper[4972]: I1121 09:43:53.608327 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:53 crc kubenswrapper[4972]: E1121 09:43:53.608790 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:54.108773021 +0000 UTC m=+179.217915549 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:53 crc kubenswrapper[4972]: I1121 09:43:53.630923 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:53 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:53 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:53 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:53 crc kubenswrapper[4972]: I1121 09:43:53.630990 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:53 crc kubenswrapper[4972]: I1121 09:43:53.709352 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:53 crc kubenswrapper[4972]: E1121 09:43:53.709557 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:54.209527372 +0000 UTC m=+179.318669870 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:53 crc kubenswrapper[4972]: I1121 09:43:53.709966 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:53 crc kubenswrapper[4972]: E1121 09:43:53.710559 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:54.21053788 +0000 UTC m=+179.319680408 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:53 crc kubenswrapper[4972]: I1121 09:43:53.811655 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:53 crc kubenswrapper[4972]: E1121 09:43:53.811984 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:54.31194197 +0000 UTC m=+179.421084528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:53 crc kubenswrapper[4972]: I1121 09:43:53.812042 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:53 crc kubenswrapper[4972]: E1121 09:43:53.812535 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:54.312517627 +0000 UTC m=+179.421660165 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:53 crc kubenswrapper[4972]: I1121 09:43:53.913330 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:53 crc kubenswrapper[4972]: E1121 09:43:53.913543 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:54.413502594 +0000 UTC m=+179.522645132 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:53 crc kubenswrapper[4972]: I1121 09:43:53.914032 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:53 crc kubenswrapper[4972]: E1121 09:43:53.914540 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:54.414524323 +0000 UTC m=+179.523666861 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:54 crc kubenswrapper[4972]: I1121 09:43:54.015174 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:54 crc kubenswrapper[4972]: E1121 09:43:54.015263 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:54.515246923 +0000 UTC m=+179.624389431 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:54 crc kubenswrapper[4972]: I1121 09:43:54.015490 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:54 crc kubenswrapper[4972]: E1121 09:43:54.015797 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:54.515788638 +0000 UTC m=+179.624931126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:54 crc kubenswrapper[4972]: I1121 09:43:54.116794 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:54 crc kubenswrapper[4972]: E1121 09:43:54.117101 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:54.617075754 +0000 UTC m=+179.726218282 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:54 crc kubenswrapper[4972]: I1121 09:43:54.117145 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:54 crc kubenswrapper[4972]: E1121 09:43:54.117568 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:54.617554947 +0000 UTC m=+179.726697455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:54 crc kubenswrapper[4972]: I1121 09:43:54.218360 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:54 crc kubenswrapper[4972]: E1121 09:43:54.218535 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:54.718511993 +0000 UTC m=+179.827654501 (durationBeforeRetry 500ms). 
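Editor's note: each failed operation is parked with a "No retries permitted until ... (durationBeforeRetry 500ms)" deadline, which is why the same pair of errors repeats every reconciler pass until the driver registers. The sketch below is a minimal per-volume retry gate under that assumption; it does not reproduce kubelet's nestedpendingoperations implementation.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryGate records, per volume, the earliest time another attempt is allowed,
// mirroring only the "No retries permitted until ..." behaviour visible above.
type retryGate struct {
	notBefore map[string]time.Time
}

func newRetryGate() *retryGate {
	return &retryGate{notBefore: map[string]time.Time{}}
}

// attempt runs op for the volume unless a previous failure is still cooling down.
func (g *retryGate) attempt(volume string, op func() error) error {
	if until, ok := g.notBefore[volume]; ok && time.Now().Before(until) {
		return fmt.Errorf("no retries permitted until %s", until.Format(time.RFC3339Nano))
	}
	if err := op(); err != nil {
		g.notBefore[volume] = time.Now().Add(500 * time.Millisecond) // durationBeforeRetry
		return err
	}
	delete(g.notBefore, volume)
	return nil
}

func main() {
	gate := newRetryGate()
	vol := "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"
	fail := func() error { return errors.New("driver not registered") }

	fmt.Println(gate.attempt(vol, fail)) // first attempt fails and arms the gate
	fmt.Println(gate.attempt(vol, fail)) // immediate retry is refused by the gate
}
```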
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:54 crc kubenswrapper[4972]: I1121 09:43:54.218716 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:54 crc kubenswrapper[4972]: E1121 09:43:54.219056 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:54.719046348 +0000 UTC m=+179.828188856 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:54 crc kubenswrapper[4972]: I1121 09:43:54.320306 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:54 crc kubenswrapper[4972]: E1121 09:43:54.320423 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:54.820401606 +0000 UTC m=+179.929544124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:54 crc kubenswrapper[4972]: I1121 09:43:54.320572 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:54 crc kubenswrapper[4972]: E1121 09:43:54.320953 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:54.820943431 +0000 UTC m=+179.930085929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:54 crc kubenswrapper[4972]: I1121 09:43:54.421358 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:54 crc kubenswrapper[4972]: E1121 09:43:54.421562 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:54.921535567 +0000 UTC m=+180.030678065 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:54 crc kubenswrapper[4972]: I1121 09:43:54.421778 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:54 crc kubenswrapper[4972]: E1121 09:43:54.422122 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:54.922088713 +0000 UTC m=+180.031231211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:54 crc kubenswrapper[4972]: I1121 09:43:54.523162 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:54 crc kubenswrapper[4972]: E1121 09:43:54.523529 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:55.023490082 +0000 UTC m=+180.132632610 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:54 crc kubenswrapper[4972]: I1121 09:43:54.523673 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:54 crc kubenswrapper[4972]: E1121 09:43:54.524217 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:55.024194342 +0000 UTC m=+180.133336880 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:54 crc kubenswrapper[4972]: I1121 09:43:54.624641 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:54 crc kubenswrapper[4972]: E1121 09:43:54.624953 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:55.124918622 +0000 UTC m=+180.234061160 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:54 crc kubenswrapper[4972]: I1121 09:43:54.625073 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:54 crc kubenswrapper[4972]: E1121 09:43:54.625370 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:55.125359074 +0000 UTC m=+180.234501572 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:54 crc kubenswrapper[4972]: I1121 09:43:54.630410 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:54 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:54 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:54 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:54 crc kubenswrapper[4972]: I1121 09:43:54.630451 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:54 crc kubenswrapper[4972]: I1121 09:43:54.725900 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:54 crc kubenswrapper[4972]: E1121 09:43:54.726061 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:55.226034663 +0000 UTC m=+180.335177201 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:54 crc kubenswrapper[4972]: I1121 09:43:54.726309 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:54 crc kubenswrapper[4972]: E1121 09:43:54.726713 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:55.226697142 +0000 UTC m=+180.335839670 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:54 crc kubenswrapper[4972]: I1121 09:43:54.826943 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:54 crc kubenswrapper[4972]: E1121 09:43:54.827361 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:55.327344929 +0000 UTC m=+180.436487427 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:54 crc kubenswrapper[4972]: I1121 09:43:54.928129 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:54 crc kubenswrapper[4972]: E1121 09:43:54.928635 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:55.428618025 +0000 UTC m=+180.537760573 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.024223 4972 patch_prober.go:28] interesting pod/dns-default-77wn4 container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.217.0.20:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.024352 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-77wn4" podUID="7939233b-508e-485b-91ea-8b266ba6f829" containerName="dns" probeResult="failure" output="Get \"http://10.217.0.20:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.029711 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:55 crc kubenswrapper[4972]: E1121 09:43:55.030139 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:55.530052585 +0000 UTC m=+180.639195123 (durationBeforeRetry 500ms). 
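Editor's note: alongside the volume retries, the router's startup probe keeps returning HTTP 500 from its healthz endpoint and the dns-default readiness probe times out before headers arrive. A kubelet HTTP probe fails on a non-success status or a client timeout; the snippet below is a rough, self-contained approximation of that check. The URL comes from the log above; the one-second timeout matches the usual kubelet default but is an assumption here, and none of this is kubelet's prober code.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeHTTP performs one HTTP GET roughly the way a kubelet readiness/startup
// probe does: a bounded timeout, success only on a 2xx/3xx status.
func probeHTTP(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Endpoint taken from the dns-default readiness probe in the log above.
	if err := probeHTTP("http://10.217.0.20:8181/ready", 1*time.Second); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```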
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.030577 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:55 crc kubenswrapper[4972]: E1121 09:43:55.030886 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:55.530873338 +0000 UTC m=+180.640015826 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.132221 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:55 crc kubenswrapper[4972]: E1121 09:43:55.132438 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:55.632401531 +0000 UTC m=+180.741544029 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.132537 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:55 crc kubenswrapper[4972]: E1121 09:43:55.132957 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:55.632947286 +0000 UTC m=+180.742089854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.234090 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:55 crc kubenswrapper[4972]: E1121 09:43:55.234293 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:55.734260883 +0000 UTC m=+180.843403381 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.234530 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:55 crc kubenswrapper[4972]: E1121 09:43:55.234976 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:55.734961463 +0000 UTC m=+180.844104051 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.335785 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:55 crc kubenswrapper[4972]: E1121 09:43:55.335999 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:55.83595762 +0000 UTC m=+180.945100128 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.336196 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:55 crc kubenswrapper[4972]: E1121 09:43:55.336598 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:55.836587048 +0000 UTC m=+180.945729556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.437925 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:55 crc kubenswrapper[4972]: E1121 09:43:55.438059 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:55.938029119 +0000 UTC m=+181.047171617 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.438474 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:55 crc kubenswrapper[4972]: E1121 09:43:55.438849 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:55.938822951 +0000 UTC m=+181.047965509 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.539854 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:55 crc kubenswrapper[4972]: E1121 09:43:55.540036 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:56.040011364 +0000 UTC m=+181.149153862 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.540165 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:55 crc kubenswrapper[4972]: E1121 09:43:55.540490 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:56.040483088 +0000 UTC m=+181.149625586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.629978 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:55 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:55 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:55 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.630034 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.640879 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:55 crc kubenswrapper[4972]: E1121 09:43:55.641072 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:56.141042233 +0000 UTC m=+181.250184731 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.641189 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:55 crc kubenswrapper[4972]: E1121 09:43:55.641495 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:56.141486116 +0000 UTC m=+181.250628694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:55 crc kubenswrapper[4972]: E1121 09:43:55.742936 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:56.242917006 +0000 UTC m=+181.352059514 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.742818 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.743158 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:55 crc kubenswrapper[4972]: E1121 09:43:55.743476 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:56.243469191 +0000 UTC m=+181.352611689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.845136 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:55 crc kubenswrapper[4972]: E1121 09:43:55.845279 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:56.345257441 +0000 UTC m=+181.454399939 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.845467 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:55 crc kubenswrapper[4972]: E1121 09:43:55.845795 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:56.345784326 +0000 UTC m=+181.454926824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.946204 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:55 crc kubenswrapper[4972]: E1121 09:43:55.946526 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:56.446479215 +0000 UTC m=+181.555621723 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:55 crc kubenswrapper[4972]: I1121 09:43:55.946932 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:55 crc kubenswrapper[4972]: E1121 09:43:55.947276 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:56.447262607 +0000 UTC m=+181.556405105 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.048824 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:56 crc kubenswrapper[4972]: E1121 09:43:56.049012 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:56.548970345 +0000 UTC m=+181.658112873 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.049270 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:56 crc kubenswrapper[4972]: E1121 09:43:56.049698 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:56.549683316 +0000 UTC m=+181.658825834 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.149802 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:56 crc kubenswrapper[4972]: E1121 09:43:56.149990 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:56.649956633 +0000 UTC m=+181.759099131 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.150511 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:56 crc kubenswrapper[4972]: E1121 09:43:56.150936 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:56.65092717 +0000 UTC m=+181.760069658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.179120 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.179192 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.252246 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:56 crc kubenswrapper[4972]: E1121 09:43:56.252428 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:56.752400781 +0000 UTC m=+181.861543279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.252681 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:56 crc kubenswrapper[4972]: E1121 09:43:56.253085 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:56.753076101 +0000 UTC m=+181.862218599 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.353989 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:56 crc kubenswrapper[4972]: E1121 09:43:56.354223 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:56.854193272 +0000 UTC m=+181.963335770 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.354355 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:56 crc kubenswrapper[4972]: E1121 09:43:56.354741 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:56.854734007 +0000 UTC m=+181.963876505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.456224 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:56 crc kubenswrapper[4972]: E1121 09:43:56.456414 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:56.956388903 +0000 UTC m=+182.065531401 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.456500 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:56 crc kubenswrapper[4972]: E1121 09:43:56.456916 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:56.956908498 +0000 UTC m=+182.066050996 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.558401 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:56 crc kubenswrapper[4972]: E1121 09:43:56.558557 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:57.058532213 +0000 UTC m=+182.167674711 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.558967 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:56 crc kubenswrapper[4972]: E1121 09:43:56.559281 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:57.059266764 +0000 UTC m=+182.168409262 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.630511 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:56 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:56 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:56 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.630588 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.660547 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:56 crc kubenswrapper[4972]: E1121 09:43:56.660713 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:57.160682424 +0000 UTC m=+182.269824932 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.661002 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:56 crc kubenswrapper[4972]: E1121 09:43:56.661318 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:57.161309352 +0000 UTC m=+182.270451850 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.761815 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:56 crc kubenswrapper[4972]: E1121 09:43:56.761992 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:57.261953139 +0000 UTC m=+182.371095677 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.762206 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:56 crc kubenswrapper[4972]: E1121 09:43:56.762757 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:57.262736061 +0000 UTC m=+182.371878599 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.864199 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:56 crc kubenswrapper[4972]: E1121 09:43:56.864451 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:57.364403298 +0000 UTC m=+182.473545846 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.864558 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:56 crc kubenswrapper[4972]: E1121 09:43:56.864999 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:57.364978714 +0000 UTC m=+182.474121272 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.965713 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:56 crc kubenswrapper[4972]: E1121 09:43:56.965961 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:57.465923871 +0000 UTC m=+182.575066369 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:56 crc kubenswrapper[4972]: I1121 09:43:56.966130 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:56 crc kubenswrapper[4972]: E1121 09:43:56.966527 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:57.466515487 +0000 UTC m=+182.575658045 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:57 crc kubenswrapper[4972]: I1121 09:43:57.067052 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:57 crc kubenswrapper[4972]: E1121 09:43:57.067210 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:57.567188156 +0000 UTC m=+182.676330664 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:57 crc kubenswrapper[4972]: I1121 09:43:57.067394 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:57 crc kubenswrapper[4972]: E1121 09:43:57.067697 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:57.56768853 +0000 UTC m=+182.676831028 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:57 crc kubenswrapper[4972]: I1121 09:43:57.168900 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:57 crc kubenswrapper[4972]: E1121 09:43:57.169135 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:57.669104029 +0000 UTC m=+182.778246547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:57 crc kubenswrapper[4972]: I1121 09:43:57.169312 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:57 crc kubenswrapper[4972]: E1121 09:43:57.169724 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:57.669710947 +0000 UTC m=+182.778853445 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:57 crc kubenswrapper[4972]: I1121 09:43:57.270422 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:57 crc kubenswrapper[4972]: E1121 09:43:57.270610 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:57.770584371 +0000 UTC m=+182.879726869 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:57 crc kubenswrapper[4972]: I1121 09:43:57.270908 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:57 crc kubenswrapper[4972]: E1121 09:43:57.271341 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:57.771327512 +0000 UTC m=+182.880470070 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:57 crc kubenswrapper[4972]: I1121 09:43:57.372560 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:57 crc kubenswrapper[4972]: E1121 09:43:57.372804 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:57.872747581 +0000 UTC m=+182.981890079 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:57 crc kubenswrapper[4972]: I1121 09:43:57.372888 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:57 crc kubenswrapper[4972]: E1121 09:43:57.373404 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:57.873381929 +0000 UTC m=+182.982524507 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:57 crc kubenswrapper[4972]: I1121 09:43:57.474415 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:57 crc kubenswrapper[4972]: E1121 09:43:57.474511 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:57.97449612 +0000 UTC m=+183.083638618 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:57 crc kubenswrapper[4972]: I1121 09:43:57.474729 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:57 crc kubenswrapper[4972]: E1121 09:43:57.475291 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:57.975269522 +0000 UTC m=+183.084412110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:57 crc kubenswrapper[4972]: I1121 09:43:57.575598 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:57 crc kubenswrapper[4972]: E1121 09:43:57.576268 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:58.076218479 +0000 UTC m=+183.185360977 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:57 crc kubenswrapper[4972]: I1121 09:43:57.629376 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:57 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:57 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:57 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:57 crc kubenswrapper[4972]: I1121 09:43:57.629442 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:57 crc kubenswrapper[4972]: I1121 09:43:57.677950 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:57 crc kubenswrapper[4972]: E1121 09:43:57.678544 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:58.178523643 +0000 UTC m=+183.287666231 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:57 crc kubenswrapper[4972]: I1121 09:43:57.779449 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:57 crc kubenswrapper[4972]: E1121 09:43:57.780155 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:58.280131877 +0000 UTC m=+183.389274385 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:57 crc kubenswrapper[4972]: I1121 09:43:57.880949 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:57 crc kubenswrapper[4972]: E1121 09:43:57.881330 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:58.38131431 +0000 UTC m=+183.490456808 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:57 crc kubenswrapper[4972]: I1121 09:43:57.984012 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:57 crc kubenswrapper[4972]: E1121 09:43:57.984452 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:58.484394667 +0000 UTC m=+183.593537205 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:58 crc kubenswrapper[4972]: I1121 09:43:58.066179 4972 patch_prober.go:28] interesting pod/dns-default-77wn4 container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.217.0.20:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 21 09:43:58 crc kubenswrapper[4972]: I1121 09:43:58.066266 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-77wn4" podUID="7939233b-508e-485b-91ea-8b266ba6f829" containerName="dns" probeResult="failure" output="Get \"http://10.217.0.20:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 09:43:58 crc kubenswrapper[4972]: I1121 09:43:58.066761 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-77wn4" Nov 21 09:43:58 crc kubenswrapper[4972]: I1121 09:43:58.085744 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:58 crc kubenswrapper[4972]: E1121 09:43:58.086238 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:58.586223828 +0000 UTC m=+183.695366336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:58 crc kubenswrapper[4972]: I1121 09:43:58.187228 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:58 crc kubenswrapper[4972]: E1121 09:43:58.187381 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:58.68736067 +0000 UTC m=+183.796503168 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:58 crc kubenswrapper[4972]: I1121 09:43:58.187623 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:58 crc kubenswrapper[4972]: E1121 09:43:58.188153 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:58.688141242 +0000 UTC m=+183.797283740 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:58 crc kubenswrapper[4972]: I1121 09:43:58.288556 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:58 crc kubenswrapper[4972]: E1121 09:43:58.289030 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:58.788988465 +0000 UTC m=+183.898131013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:58 crc kubenswrapper[4972]: I1121 09:43:58.391463 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:58 crc kubenswrapper[4972]: E1121 09:43:58.391966 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:58.891941028 +0000 UTC m=+184.001083617 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:58 crc kubenswrapper[4972]: I1121 09:43:58.493205 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:58 crc kubenswrapper[4972]: E1121 09:43:58.493451 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:58.993409639 +0000 UTC m=+184.102552177 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:58 crc kubenswrapper[4972]: I1121 09:43:58.493772 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:58 crc kubenswrapper[4972]: E1121 09:43:58.494321 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:58.994301045 +0000 UTC m=+184.103443583 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:58 crc kubenswrapper[4972]: I1121 09:43:58.595692 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:58 crc kubenswrapper[4972]: E1121 09:43:58.595923 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:59.095888169 +0000 UTC m=+184.205030707 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:58 crc kubenswrapper[4972]: I1121 09:43:58.596671 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:58 crc kubenswrapper[4972]: E1121 09:43:58.597255 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:59.097235067 +0000 UTC m=+184.206377605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:58 crc kubenswrapper[4972]: I1121 09:43:58.629916 4972 patch_prober.go:28] interesting pod/router-default-5444994796-g9znh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 09:43:58 crc kubenswrapper[4972]: [-]has-synced failed: reason withheld Nov 21 09:43:58 crc kubenswrapper[4972]: [+]process-running ok Nov 21 09:43:58 crc kubenswrapper[4972]: healthz check failed Nov 21 09:43:58 crc kubenswrapper[4972]: I1121 09:43:58.630023 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-g9znh" podUID="76ca4784-e584-413d-b1cb-77f336e4f695" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 09:43:58 crc kubenswrapper[4972]: I1121 09:43:58.697607 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:58 crc kubenswrapper[4972]: E1121 09:43:58.698008 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:59.197970228 +0000 UTC m=+184.307112756 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:58 crc kubenswrapper[4972]: I1121 09:43:58.698362 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:58 crc kubenswrapper[4972]: E1121 09:43:58.698853 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:59.198807141 +0000 UTC m=+184.307949669 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:58 crc kubenswrapper[4972]: I1121 09:43:58.800448 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:58 crc kubenswrapper[4972]: E1121 09:43:58.800961 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:59.300921901 +0000 UTC m=+184.410064419 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:58 crc kubenswrapper[4972]: I1121 09:43:58.801333 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:58 crc kubenswrapper[4972]: E1121 09:43:58.801821 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:59.301799426 +0000 UTC m=+184.410941954 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:58 crc kubenswrapper[4972]: I1121 09:43:58.888128 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-77wn4" Nov 21 09:43:58 crc kubenswrapper[4972]: I1121 09:43:58.903486 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:58 crc kubenswrapper[4972]: E1121 09:43:58.904871 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:59.40481501 +0000 UTC m=+184.513957508 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:59 crc kubenswrapper[4972]: I1121 09:43:59.006771 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:59 crc kubenswrapper[4972]: E1121 09:43:59.007784 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:59.507769834 +0000 UTC m=+184.616912332 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:59 crc kubenswrapper[4972]: I1121 09:43:59.108145 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:59 crc kubenswrapper[4972]: E1121 09:43:59.108277 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:59.608232486 +0000 UTC m=+184.717374994 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:59 crc kubenswrapper[4972]: I1121 09:43:59.108433 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:59 crc kubenswrapper[4972]: E1121 09:43:59.108814 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:59.608804912 +0000 UTC m=+184.717947410 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:59 crc kubenswrapper[4972]: I1121 09:43:59.209284 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:59 crc kubenswrapper[4972]: E1121 09:43:59.209655 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:59.709629895 +0000 UTC m=+184.818772383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:59 crc kubenswrapper[4972]: I1121 09:43:59.209866 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:59 crc kubenswrapper[4972]: E1121 09:43:59.210201 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:59.710188721 +0000 UTC m=+184.819331219 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:59 crc kubenswrapper[4972]: I1121 09:43:59.236736 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hh4hc" event={"ID":"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827","Type":"ContainerStarted","Data":"f1d840d59ae61eabc8be3b62b0f1fe3ff491c3d0fc8fb30da87081a30c8748d9"} Nov 21 09:43:59 crc kubenswrapper[4972]: I1121 09:43:59.238030 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" event={"ID":"a6597c90-c4ee-4856-b03f-f0fa1d3062f5","Type":"ContainerStarted","Data":"d1d7cde2fac2ee58aeeda7627bc3a9600d15df9a70e49f10fa6695c090768159"} Nov 21 09:43:59 crc kubenswrapper[4972]: I1121 09:43:59.239984 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b88d1d32-e641-4b28-beb1-a3103fbf22d8","Type":"ContainerStarted","Data":"acdad0cb1786f14952b9c4c9c2b8bf5cadc87e3a4483ab0aec95d70b243f9814"} Nov 21 09:43:59 crc kubenswrapper[4972]: I1121 09:43:59.310485 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:59 crc kubenswrapper[4972]: E1121 09:43:59.310746 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:59.810723236 +0000 UTC m=+184.919865734 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:59 crc kubenswrapper[4972]: I1121 09:43:59.311000 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:59 crc kubenswrapper[4972]: E1121 09:43:59.311978 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:43:59.811966101 +0000 UTC m=+184.921108599 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:59 crc kubenswrapper[4972]: I1121 09:43:59.412160 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:59 crc kubenswrapper[4972]: E1121 09:43:59.412538 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:43:59.912520056 +0000 UTC m=+185.021662554 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:59 crc kubenswrapper[4972]: I1121 09:43:59.514274 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:59 crc kubenswrapper[4972]: E1121 09:43:59.514652 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:44:00.014637745 +0000 UTC m=+185.123780253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:59 crc kubenswrapper[4972]: I1121 09:43:59.615027 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:59 crc kubenswrapper[4972]: E1121 09:43:59.615763 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:44:00.115744116 +0000 UTC m=+185.224886614 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:59 crc kubenswrapper[4972]: I1121 09:43:59.630027 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-g9znh" Nov 21 09:43:59 crc kubenswrapper[4972]: I1121 09:43:59.632696 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-g9znh" Nov 21 09:43:59 crc kubenswrapper[4972]: I1121 09:43:59.717071 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:59 crc kubenswrapper[4972]: E1121 09:43:59.717773 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:44:00.217760473 +0000 UTC m=+185.326902971 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:59 crc kubenswrapper[4972]: I1121 09:43:59.819807 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:59 crc kubenswrapper[4972]: E1121 09:43:59.820043 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:44:00.320004796 +0000 UTC m=+185.429147304 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:59 crc kubenswrapper[4972]: I1121 09:43:59.820295 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:59 crc kubenswrapper[4972]: E1121 09:43:59.820741 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:44:00.320723466 +0000 UTC m=+185.429865964 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:59 crc kubenswrapper[4972]: I1121 09:43:59.921297 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:43:59 crc kubenswrapper[4972]: E1121 09:43:59.921563 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:44:00.421515948 +0000 UTC m=+185.530658446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:43:59 crc kubenswrapper[4972]: I1121 09:43:59.922013 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:43:59 crc kubenswrapper[4972]: E1121 09:43:59.922903 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:44:00.422851816 +0000 UTC m=+185.531994314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.022985 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:44:00 crc kubenswrapper[4972]: E1121 09:44:00.023751 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:44:00.523708149 +0000 UTC m=+185.632850647 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.124624 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:44:00 crc kubenswrapper[4972]: E1121 09:44:00.124936 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:44:00.624925023 +0000 UTC m=+185.734067521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.225550 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:44:00 crc kubenswrapper[4972]: E1121 09:44:00.225708 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:44:00.725690094 +0000 UTC m=+185.834832592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.225901 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:44:00 crc kubenswrapper[4972]: E1121 09:44:00.226201 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:44:00.726190888 +0000 UTC m=+185.835333386 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.245688 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" event={"ID":"c6ff0ba3-a662-4497-a3f1-70ea785beb6e","Type":"ContainerStarted","Data":"f11e43c32d1978308afe6f68349193122edb70d34cb59223cb484d732d9bbc42"} Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.327183 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:44:00 crc kubenswrapper[4972]: E1121 09:44:00.327469 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:44:00.827424703 +0000 UTC m=+185.936567201 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.327779 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:44:00 crc kubenswrapper[4972]: E1121 09:44:00.328319 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:44:00.828301868 +0000 UTC m=+185.937444556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.428915 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:44:00 crc kubenswrapper[4972]: E1121 09:44:00.429017 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:44:00.928998867 +0000 UTC m=+186.038141365 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.429463 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:44:00 crc kubenswrapper[4972]: E1121 09:44:00.429798 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:44:00.929788779 +0000 UTC m=+186.038931297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.530377 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:44:00 crc kubenswrapper[4972]: E1121 09:44:00.530589 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:44:01.03055701 +0000 UTC m=+186.139699518 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.530722 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:44:00 crc kubenswrapper[4972]: E1121 09:44:00.531063 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:44:01.031052114 +0000 UTC m=+186.140194612 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.611329 4972 patch_prober.go:28] interesting pod/console-f9d7485db-j7xxl container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.611763 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-j7xxl" podUID="7b0e4d64-f901-4a4e-9644-408eb534401e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.631608 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:44:00 crc kubenswrapper[4972]: E1121 09:44:00.631704 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:44:01.131689412 +0000 UTC m=+186.240831900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.631989 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:44:00 crc kubenswrapper[4972]: E1121 09:44:00.632341 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:44:01.13233381 +0000 UTC m=+186.241476308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.734011 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:44:00 crc kubenswrapper[4972]: E1121 09:44:00.734447 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:44:01.234416568 +0000 UTC m=+186.343559056 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.734564 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:44:00 crc kubenswrapper[4972]: E1121 09:44:00.735024 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:44:01.235016866 +0000 UTC m=+186.344159354 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.836266 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:44:00 crc kubenswrapper[4972]: E1121 09:44:00.836451 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:44:01.336429545 +0000 UTC m=+186.445572043 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.836542 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:44:00 crc kubenswrapper[4972]: E1121 09:44:00.836876 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:44:01.336863757 +0000 UTC m=+186.446006255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.884575 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.884637 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.884715 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.884803 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.884923 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-swwr5" Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.885567 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.885630 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.885915 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"ffbcce0376856f19fa35c76542b1a93d34469bb8f644efcbb58b87ccbf0af1cf"} pod="openshift-console/downloads-7954f5f757-swwr5" containerMessage="Container download-server failed liveness probe, will be restarted" Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.886086 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" containerID="cri-o://ffbcce0376856f19fa35c76542b1a93d34469bb8f644efcbb58b87ccbf0af1cf" gracePeriod=2 Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.937956 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:44:00 crc kubenswrapper[4972]: E1121 09:44:00.938145 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:44:01.438115482 +0000 UTC m=+186.547257980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:00 crc kubenswrapper[4972]: I1121 09:44:00.938292 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:44:00 crc kubenswrapper[4972]: E1121 09:44:00.938635 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:44:01.438622266 +0000 UTC m=+186.547764764 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.040146 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:44:01 crc kubenswrapper[4972]: E1121 09:44:01.041443 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:44:01.541427535 +0000 UTC m=+186.650570033 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.141572 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:44:01 crc kubenswrapper[4972]: E1121 09:44:01.142114 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:44:01.642095864 +0000 UTC m=+186.751238372 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.242978 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:44:01 crc kubenswrapper[4972]: E1121 09:44:01.243332 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:44:01.743317868 +0000 UTC m=+186.852460366 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.252363 4972 generic.go:334] "Generic (PLEG): container finished" podID="8c76dda7-f44d-4fa6-9471-841d962d757c" containerID="0de23111b6c2ffa91c286b54b39d0de574796f055f6dfe1b16d3bc66cfcfdc93" exitCode=0 Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.252418 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8c76dda7-f44d-4fa6-9471-841d962d757c","Type":"ContainerDied","Data":"0de23111b6c2ffa91c286b54b39d0de574796f055f6dfe1b16d3bc66cfcfdc93"} Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.254093 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqvm8" event={"ID":"9a2865e3-5706-4a03-8529-571895dde1ea","Type":"ContainerStarted","Data":"c65c64ce17dad3c7a3bc611725befe031f878b67bff913288074d28cb2ca45df"} Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.256128 4972 generic.go:334] "Generic (PLEG): container finished" podID="882787b1-4df4-446b-972f-8a07c4eb5782" containerID="f10a3889b250afdd80c638147321ba259cd479521fa0c8021b35eebd1311edc7" exitCode=0 Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.256171 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58kms" event={"ID":"882787b1-4df4-446b-972f-8a07c4eb5782","Type":"ContainerDied","Data":"f10a3889b250afdd80c638147321ba259cd479521fa0c8021b35eebd1311edc7"} Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.257739 4972 generic.go:334] "Generic (PLEG): container finished" podID="49b866e2-c40e-4b45-acfc-965161cabf5c" containerID="bad40d5d1659e669d5ca4093b6cd4204d076b29e247d6215d413276eacfb8158" exitCode=0 Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.257779 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-7ccfh" event={"ID":"49b866e2-c40e-4b45-acfc-965161cabf5c","Type":"ContainerDied","Data":"bad40d5d1659e669d5ca4093b6cd4204d076b29e247d6215d413276eacfb8158"} Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.259390 4972 generic.go:334] "Generic (PLEG): container finished" podID="ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827" containerID="f1d840d59ae61eabc8be3b62b0f1fe3ff491c3d0fc8fb30da87081a30c8748d9" exitCode=0 Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.259417 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hh4hc" event={"ID":"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827","Type":"ContainerDied","Data":"f1d840d59ae61eabc8be3b62b0f1fe3ff491c3d0fc8fb30da87081a30c8748d9"} Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.313099 4972 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.344315 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:44:01 crc kubenswrapper[4972]: E1121 09:44:01.344947 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:44:01.844922051 +0000 UTC m=+186.954064539 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.445094 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:44:01 crc kubenswrapper[4972]: E1121 09:44:01.445921 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:44:01.945897079 +0000 UTC m=+187.055039597 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.547462 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:44:01 crc kubenswrapper[4972]: E1121 09:44:01.547878 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:44:02.047861284 +0000 UTC m=+187.157003782 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.648393 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:44:01 crc kubenswrapper[4972]: E1121 09:44:01.648528 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:44:02.148488951 +0000 UTC m=+187.257631459 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.649029 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:44:01 crc kubenswrapper[4972]: E1121 09:44:01.649509 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:44:02.149485499 +0000 UTC m=+187.258628037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.750579 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:44:01 crc kubenswrapper[4972]: E1121 09:44:01.750788 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:44:02.250759244 +0000 UTC m=+187.359901742 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.750883 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:44:01 crc kubenswrapper[4972]: E1121 09:44:01.751204 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:44:02.251191937 +0000 UTC m=+187.360334435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.853274 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:44:01 crc kubenswrapper[4972]: E1121 09:44:01.853483 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:44:02.35344541 +0000 UTC m=+187.462587898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.853770 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:44:01 crc kubenswrapper[4972]: E1121 09:44:01.854469 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 09:44:02.354454659 +0000 UTC m=+187.463597157 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5s9h7" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:01 crc kubenswrapper[4972]: I1121 09:44:01.956176 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:44:01 crc kubenswrapper[4972]: E1121 09:44:01.956926 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 09:44:02.456883467 +0000 UTC m=+187.566026005 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.039269 4972 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-21T09:44:01.31313546Z","Handler":null,"Name":""} Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.045576 4972 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.045624 4972 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.058431 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.147658 4972 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.147760 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.176611 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5s9h7\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.260504 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.266971 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-k9mnh" event={"ID":"df5e96f4-727c-44c1-8e2f-e624c912430b","Type":"ContainerStarted","Data":"513c0a98b9bc0c7448eb15123350914a53f1a1616ca46d1bb863701d4189f8a5"} Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.267368 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.268171 4972 generic.go:334] "Generic (PLEG): container finished" podID="9a2865e3-5706-4a03-8529-571895dde1ea" containerID="c65c64ce17dad3c7a3bc611725befe031f878b67bff913288074d28cb2ca45df" exitCode=0 Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.268220 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqvm8" event={"ID":"9a2865e3-5706-4a03-8529-571895dde1ea","Type":"ContainerDied","Data":"c65c64ce17dad3c7a3bc611725befe031f878b67bff913288074d28cb2ca45df"} Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.269970 4972 generic.go:334] "Generic (PLEG): container finished" podID="b88d1d32-e641-4b28-beb1-a3103fbf22d8" containerID="acdad0cb1786f14952b9c4c9c2b8bf5cadc87e3a4483ab0aec95d70b243f9814" exitCode=0 Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.270042 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b88d1d32-e641-4b28-beb1-a3103fbf22d8","Type":"ContainerDied","Data":"acdad0cb1786f14952b9c4c9c2b8bf5cadc87e3a4483ab0aec95d70b243f9814"} Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.271607 4972 generic.go:334] "Generic (PLEG): container finished" podID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerID="ffbcce0376856f19fa35c76542b1a93d34469bb8f644efcbb58b87ccbf0af1cf" exitCode=0 Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.271705 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-swwr5" event={"ID":"67ed0332-55cb-41e1-8a15-4e497706e00d","Type":"ContainerDied","Data":"ffbcce0376856f19fa35c76542b1a93d34469bb8f644efcbb58b87ccbf0af1cf"} Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.338403 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" podStartSLOduration=21.338380569 podStartE2EDuration="21.338380569s" podCreationTimestamp="2025-11-21 09:43:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:44:02.331449992 +0000 UTC m=+187.440592500" watchObservedRunningTime="2025-11-21 09:44:02.338380569 +0000 UTC m=+187.447523067" Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.437689 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.449947 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.667660 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.740507 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.745365 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.770296 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c76dda7-f44d-4fa6-9471-841d962d757c-kubelet-dir\") pod \"8c76dda7-f44d-4fa6-9471-841d962d757c\" (UID: \"8c76dda7-f44d-4fa6-9471-841d962d757c\") " Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.770469 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c76dda7-f44d-4fa6-9471-841d962d757c-kube-api-access\") pod \"8c76dda7-f44d-4fa6-9471-841d962d757c\" (UID: \"8c76dda7-f44d-4fa6-9471-841d962d757c\") " Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.770857 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c76dda7-f44d-4fa6-9471-841d962d757c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8c76dda7-f44d-4fa6-9471-841d962d757c" (UID: "8c76dda7-f44d-4fa6-9471-841d962d757c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.777027 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c76dda7-f44d-4fa6-9471-841d962d757c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8c76dda7-f44d-4fa6-9471-841d962d757c" (UID: "8c76dda7-f44d-4fa6-9471-841d962d757c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.872365 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c76dda7-f44d-4fa6-9471-841d962d757c-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.872396 4972 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c76dda7-f44d-4fa6-9471-841d962d757c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 21 09:44:02 crc kubenswrapper[4972]: I1121 09:44:02.944186 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5s9h7"] Nov 21 09:44:02 crc kubenswrapper[4972]: W1121 09:44:02.950419 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e05e924_7aac_419c_82a7_0d9b9592b39f.slice/crio-0ab29f002debf56eff0738f7cd665ea0f6e68c3cd4068c01015efb813b970404 WatchSource:0}: Error finding container 0ab29f002debf56eff0738f7cd665ea0f6e68c3cd4068c01015efb813b970404: Status 404 returned error can't find the container with id 0ab29f002debf56eff0738f7cd665ea0f6e68c3cd4068c01015efb813b970404 Nov 21 09:44:03 crc kubenswrapper[4972]: I1121 09:44:03.279344 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" event={"ID":"a6597c90-c4ee-4856-b03f-f0fa1d3062f5","Type":"ContainerStarted","Data":"ab008e043b69fe77307d08b3cc116f14d9a7ef22b3facdd8aa8b3b477323fd76"} Nov 21 09:44:03 crc kubenswrapper[4972]: I1121 09:44:03.283249 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 09:44:03 crc kubenswrapper[4972]: I1121 09:44:03.283308 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8c76dda7-f44d-4fa6-9471-841d962d757c","Type":"ContainerDied","Data":"6a89f7992c1bc8798563a4b83a06989777ce46a6f2930da1e6a24d58ff0ac16a"} Nov 21 09:44:03 crc kubenswrapper[4972]: I1121 09:44:03.283372 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a89f7992c1bc8798563a4b83a06989777ce46a6f2930da1e6a24d58ff0ac16a" Nov 21 09:44:03 crc kubenswrapper[4972]: I1121 09:44:03.285942 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" event={"ID":"6e05e924-7aac-419c-82a7-0d9b9592b39f","Type":"ContainerStarted","Data":"0ab29f002debf56eff0738f7cd665ea0f6e68c3cd4068c01015efb813b970404"} Nov 21 09:44:03 crc kubenswrapper[4972]: I1121 09:44:03.523303 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 09:44:03 crc kubenswrapper[4972]: I1121 09:44:03.541729 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-k9mnh" podStartSLOduration=166.541710006 podStartE2EDuration="2m46.541710006s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:44:03.309928464 +0000 UTC m=+188.419070992" watchObservedRunningTime="2025-11-21 09:44:03.541710006 +0000 UTC m=+188.650852504" Nov 21 09:44:03 crc kubenswrapper[4972]: I1121 09:44:03.683377 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b88d1d32-e641-4b28-beb1-a3103fbf22d8-kubelet-dir\") pod \"b88d1d32-e641-4b28-beb1-a3103fbf22d8\" (UID: \"b88d1d32-e641-4b28-beb1-a3103fbf22d8\") " Nov 21 09:44:03 crc kubenswrapper[4972]: I1121 09:44:03.683508 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b88d1d32-e641-4b28-beb1-a3103fbf22d8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b88d1d32-e641-4b28-beb1-a3103fbf22d8" (UID: "b88d1d32-e641-4b28-beb1-a3103fbf22d8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 09:44:03 crc kubenswrapper[4972]: I1121 09:44:03.683985 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b88d1d32-e641-4b28-beb1-a3103fbf22d8-kube-api-access\") pod \"b88d1d32-e641-4b28-beb1-a3103fbf22d8\" (UID: \"b88d1d32-e641-4b28-beb1-a3103fbf22d8\") " Nov 21 09:44:03 crc kubenswrapper[4972]: I1121 09:44:03.684327 4972 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b88d1d32-e641-4b28-beb1-a3103fbf22d8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 21 09:44:03 crc kubenswrapper[4972]: I1121 09:44:03.688633 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b88d1d32-e641-4b28-beb1-a3103fbf22d8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b88d1d32-e641-4b28-beb1-a3103fbf22d8" (UID: "b88d1d32-e641-4b28-beb1-a3103fbf22d8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:44:03 crc kubenswrapper[4972]: I1121 09:44:03.767233 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 21 09:44:03 crc kubenswrapper[4972]: I1121 09:44:03.785239 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b88d1d32-e641-4b28-beb1-a3103fbf22d8-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 21 09:44:04 crc kubenswrapper[4972]: I1121 09:44:04.294557 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b88d1d32-e641-4b28-beb1-a3103fbf22d8","Type":"ContainerDied","Data":"c56d006e0a9147cba9c391e1d9f593216ac14b721c72fc710817be8c181d6b46"} Nov 21 09:44:04 crc kubenswrapper[4972]: I1121 09:44:04.294611 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c56d006e0a9147cba9c391e1d9f593216ac14b721c72fc710817be8c181d6b46" Nov 21 09:44:04 crc kubenswrapper[4972]: I1121 09:44:04.294664 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 09:44:04 crc kubenswrapper[4972]: I1121 09:44:04.298865 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" event={"ID":"6e05e924-7aac-419c-82a7-0d9b9592b39f","Type":"ContainerStarted","Data":"77a2ddf75739b1046570462a786af260121fb70fc6510f4c5e9c7f0b7358aac0"} Nov 21 09:44:05 crc kubenswrapper[4972]: I1121 09:44:05.308319 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-swwr5" event={"ID":"67ed0332-55cb-41e1-8a15-4e497706e00d","Type":"ContainerStarted","Data":"2d86e4252612fd3cca7f54cab816ca60fc7ef49a9131b25b43824384ca3b3841"} Nov 21 09:44:05 crc kubenswrapper[4972]: I1121 09:44:05.312197 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" event={"ID":"a6597c90-c4ee-4856-b03f-f0fa1d3062f5","Type":"ContainerStarted","Data":"198a49709517631b048dc117240c45053f66a788ac9779af2c0beaf5fcdad4a8"} Nov 21 09:44:06 crc kubenswrapper[4972]: I1121 09:44:06.320054 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:44:06 crc kubenswrapper[4972]: I1121 09:44:06.357735 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" podStartSLOduration=169.357713659 podStartE2EDuration="2m49.357713659s" podCreationTimestamp="2025-11-21 09:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:44:06.355473256 +0000 UTC m=+191.464615774" watchObservedRunningTime="2025-11-21 09:44:06.357713659 +0000 UTC m=+191.466856157" Nov 21 09:44:07 crc kubenswrapper[4972]: I1121 09:44:07.327735 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-swwr5" Nov 21 09:44:07 crc kubenswrapper[4972]: I1121 09:44:07.328173 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: 
connection refused" start-of-body= Nov 21 09:44:07 crc kubenswrapper[4972]: I1121 09:44:07.328211 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:44:07 crc kubenswrapper[4972]: I1121 09:44:07.349040 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-fwcgm" podStartSLOduration=39.349020225 podStartE2EDuration="39.349020225s" podCreationTimestamp="2025-11-21 09:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:44:07.345398393 +0000 UTC m=+192.454540911" watchObservedRunningTime="2025-11-21 09:44:07.349020225 +0000 UTC m=+192.458162743" Nov 21 09:44:08 crc kubenswrapper[4972]: I1121 09:44:08.336006 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:44:08 crc kubenswrapper[4972]: I1121 09:44:08.336379 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:44:10 crc kubenswrapper[4972]: I1121 09:44:10.616211 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:44:10 crc kubenswrapper[4972]: I1121 09:44:10.621733 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:44:10 crc kubenswrapper[4972]: I1121 09:44:10.884187 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:44:10 crc kubenswrapper[4972]: I1121 09:44:10.884235 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:44:10 crc kubenswrapper[4972]: I1121 09:44:10.884665 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:44:10 crc kubenswrapper[4972]: I1121 09:44:10.884689 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:44:11 crc kubenswrapper[4972]: I1121 09:44:11.660489 4972 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gb5dr" Nov 21 09:44:20 crc kubenswrapper[4972]: I1121 09:44:20.884547 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:44:20 crc kubenswrapper[4972]: I1121 09:44:20.885084 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:44:20 crc kubenswrapper[4972]: I1121 09:44:20.884656 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:44:20 crc kubenswrapper[4972]: I1121 09:44:20.885148 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:44:22 crc kubenswrapper[4972]: I1121 09:44:22.457191 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:44:26 crc kubenswrapper[4972]: I1121 09:44:26.179218 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 09:44:26 crc kubenswrapper[4972]: I1121 09:44:26.179596 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 09:44:26 crc kubenswrapper[4972]: I1121 09:44:26.179642 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 09:44:26 crc kubenswrapper[4972]: I1121 09:44:26.180414 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 09:44:26 crc kubenswrapper[4972]: I1121 09:44:26.180484 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b" gracePeriod=600 Nov 21 09:44:30 crc kubenswrapper[4972]: I1121 
09:44:30.884748 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:44:30 crc kubenswrapper[4972]: I1121 09:44:30.885157 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:44:30 crc kubenswrapper[4972]: I1121 09:44:30.884763 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:44:30 crc kubenswrapper[4972]: I1121 09:44:30.885228 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:44:30 crc kubenswrapper[4972]: I1121 09:44:30.885278 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-swwr5" Nov 21 09:44:30 crc kubenswrapper[4972]: I1121 09:44:30.885638 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:44:30 crc kubenswrapper[4972]: I1121 09:44:30.885660 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:44:30 crc kubenswrapper[4972]: I1121 09:44:30.885889 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"2d86e4252612fd3cca7f54cab816ca60fc7ef49a9131b25b43824384ca3b3841"} pod="openshift-console/downloads-7954f5f757-swwr5" containerMessage="Container download-server failed liveness probe, will be restarted" Nov 21 09:44:30 crc kubenswrapper[4972]: I1121 09:44:30.885933 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" containerID="cri-o://2d86e4252612fd3cca7f54cab816ca60fc7ef49a9131b25b43824384ca3b3841" gracePeriod=2 Nov 21 09:44:31 crc kubenswrapper[4972]: I1121 09:44:31.460664 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b" exitCode=0 Nov 21 09:44:31 crc kubenswrapper[4972]: I1121 09:44:31.460713 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" 
event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b"} Nov 21 09:44:40 crc kubenswrapper[4972]: I1121 09:44:40.884769 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:44:40 crc kubenswrapper[4972]: I1121 09:44:40.885538 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:44:50 crc kubenswrapper[4972]: I1121 09:44:50.884075 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:44:50 crc kubenswrapper[4972]: I1121 09:44:50.884461 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.148161 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5"] Nov 21 09:45:00 crc kubenswrapper[4972]: E1121 09:45:00.150595 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fc91391-3c93-4fe0-9c24-f8aad9c21fd2" containerName="collect-profiles" Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.150767 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fc91391-3c93-4fe0-9c24-f8aad9c21fd2" containerName="collect-profiles" Nov 21 09:45:00 crc kubenswrapper[4972]: E1121 09:45:00.150936 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c76dda7-f44d-4fa6-9471-841d962d757c" containerName="pruner" Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.151046 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c76dda7-f44d-4fa6-9471-841d962d757c" containerName="pruner" Nov 21 09:45:00 crc kubenswrapper[4972]: E1121 09:45:00.151163 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88d1d32-e641-4b28-beb1-a3103fbf22d8" containerName="pruner" Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.151313 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88d1d32-e641-4b28-beb1-a3103fbf22d8" containerName="pruner" Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.151584 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fc91391-3c93-4fe0-9c24-f8aad9c21fd2" containerName="collect-profiles" Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.151715 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="b88d1d32-e641-4b28-beb1-a3103fbf22d8" containerName="pruner" Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.152007 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c76dda7-f44d-4fa6-9471-841d962d757c" containerName="pruner" Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.152728 4972 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5" Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.154186 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46165fda-5884-4b43-b8fd-533eec95f753-config-volume\") pod \"collect-profiles-29395305-8qkq5\" (UID: \"46165fda-5884-4b43-b8fd-533eec95f753\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5" Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.154600 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lhtk\" (UniqueName: \"kubernetes.io/projected/46165fda-5884-4b43-b8fd-533eec95f753-kube-api-access-9lhtk\") pod \"collect-profiles-29395305-8qkq5\" (UID: \"46165fda-5884-4b43-b8fd-533eec95f753\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5" Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.154662 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46165fda-5884-4b43-b8fd-533eec95f753-secret-volume\") pod \"collect-profiles-29395305-8qkq5\" (UID: \"46165fda-5884-4b43-b8fd-533eec95f753\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5" Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.156364 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.156372 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.161703 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5"] Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.256350 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46165fda-5884-4b43-b8fd-533eec95f753-config-volume\") pod \"collect-profiles-29395305-8qkq5\" (UID: \"46165fda-5884-4b43-b8fd-533eec95f753\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5" Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.256492 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lhtk\" (UniqueName: \"kubernetes.io/projected/46165fda-5884-4b43-b8fd-533eec95f753-kube-api-access-9lhtk\") pod \"collect-profiles-29395305-8qkq5\" (UID: \"46165fda-5884-4b43-b8fd-533eec95f753\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5" Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.256529 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46165fda-5884-4b43-b8fd-533eec95f753-secret-volume\") pod \"collect-profiles-29395305-8qkq5\" (UID: \"46165fda-5884-4b43-b8fd-533eec95f753\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5" Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.358804 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/46165fda-5884-4b43-b8fd-533eec95f753-config-volume\") pod \"collect-profiles-29395305-8qkq5\" (UID: \"46165fda-5884-4b43-b8fd-533eec95f753\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5" Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.367170 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lhtk\" (UniqueName: \"kubernetes.io/projected/46165fda-5884-4b43-b8fd-533eec95f753-kube-api-access-9lhtk\") pod \"collect-profiles-29395305-8qkq5\" (UID: \"46165fda-5884-4b43-b8fd-533eec95f753\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5" Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.368564 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46165fda-5884-4b43-b8fd-533eec95f753-secret-volume\") pod \"collect-profiles-29395305-8qkq5\" (UID: \"46165fda-5884-4b43-b8fd-533eec95f753\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5" Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.472855 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5" Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.884497 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:45:00 crc kubenswrapper[4972]: I1121 09:45:00.884847 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:45:10 crc kubenswrapper[4972]: I1121 09:45:10.884634 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:45:10 crc kubenswrapper[4972]: I1121 09:45:10.885058 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:45:20 crc kubenswrapper[4972]: I1121 09:45:20.884657 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:45:20 crc kubenswrapper[4972]: I1121 09:45:20.885099 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:45:27 crc kubenswrapper[4972]: I1121 09:45:27.744957 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:45:27 crc kubenswrapper[4972]: I1121 09:45:27.745760 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:45:27 crc kubenswrapper[4972]: I1121 09:45:27.745904 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:45:27 crc kubenswrapper[4972]: I1121 09:45:27.745998 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:45:27 crc kubenswrapper[4972]: I1121 09:45:27.749540 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 21 09:45:27 crc kubenswrapper[4972]: I1121 09:45:27.749549 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 21 09:45:27 crc kubenswrapper[4972]: I1121 09:45:27.749737 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 21 09:45:27 crc kubenswrapper[4972]: I1121 09:45:27.759141 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 21 09:45:27 crc kubenswrapper[4972]: I1121 09:45:27.767776 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:45:27 crc kubenswrapper[4972]: I1121 09:45:27.770377 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:45:27 crc kubenswrapper[4972]: I1121 09:45:27.773309 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:45:27 crc kubenswrapper[4972]: I1121 09:45:27.796936 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 09:45:27 crc kubenswrapper[4972]: I1121 09:45:27.803252 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:45:27 crc kubenswrapper[4972]: I1121 09:45:27.907340 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:45:28 crc kubenswrapper[4972]: I1121 09:45:28.084443 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 09:45:30 crc kubenswrapper[4972]: I1121 09:45:30.885146 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:45:30 crc kubenswrapper[4972]: I1121 09:45:30.885640 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:45:37 crc kubenswrapper[4972]: E1121 09:45:37.399364 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 21 09:45:37 crc kubenswrapper[4972]: E1121 09:45:37.400019 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7s7ml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sqvm8_openshift-marketplace(9a2865e3-5706-4a03-8529-571895dde1ea): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 21 09:45:37 crc kubenswrapper[4972]: E1121 09:45:37.401236 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-sqvm8" podUID="9a2865e3-5706-4a03-8529-571895dde1ea" Nov 21 09:45:38 crc kubenswrapper[4972]: E1121 09:45:38.341543 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-sqvm8" podUID="9a2865e3-5706-4a03-8529-571895dde1ea" Nov 21 09:45:38 crc kubenswrapper[4972]: E1121 09:45:38.388326 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 21 09:45:38 crc kubenswrapper[4972]: E1121 09:45:38.388762 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zvczx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-hh4hc_openshift-marketplace(ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 21 09:45:38 crc kubenswrapper[4972]: E1121 09:45:38.389999 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-hh4hc" podUID="ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827" Nov 21 09:45:40 crc kubenswrapper[4972]: E1121 09:45:40.374944 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-hh4hc" podUID="ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827" Nov 21 09:45:40 crc kubenswrapper[4972]: E1121 09:45:40.491712 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 21 09:45:40 crc kubenswrapper[4972]: E1121 09:45:40.491929 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvfxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-b4t9l_openshift-marketplace(13ef553c-f6bd-4af2-9c0e-643cd14f9290): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 21 09:45:40 crc kubenswrapper[4972]: E1121 09:45:40.493106 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-b4t9l" podUID="13ef553c-f6bd-4af2-9c0e-643cd14f9290" Nov 21 09:45:40 crc kubenswrapper[4972]: I1121 09:45:40.884950 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:45:40 crc kubenswrapper[4972]: I1121 09:45:40.885016 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:45:42 crc kubenswrapper[4972]: E1121 09:45:42.042910 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-b4t9l" podUID="13ef553c-f6bd-4af2-9c0e-643cd14f9290" Nov 21 09:45:42 crc kubenswrapper[4972]: E1121 09:45:42.103034 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 21 09:45:42 crc kubenswrapper[4972]: E1121 09:45:42.103194 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kc6rx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-7ccfh_openshift-marketplace(49b866e2-c40e-4b45-acfc-965161cabf5c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 21 09:45:42 crc kubenswrapper[4972]: E1121 09:45:42.104378 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-7ccfh" podUID="49b866e2-c40e-4b45-acfc-965161cabf5c" Nov 21 09:45:42 crc kubenswrapper[4972]: E1121 09:45:42.138446 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 21 09:45:42 crc kubenswrapper[4972]: E1121 09:45:42.139285 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-42ww7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-58kms_openshift-marketplace(882787b1-4df4-446b-972f-8a07c4eb5782): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 21 09:45:42 crc kubenswrapper[4972]: E1121 09:45:42.141879 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-58kms" podUID="882787b1-4df4-446b-972f-8a07c4eb5782" Nov 21 09:45:42 crc kubenswrapper[4972]: E1121 09:45:42.290888 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 21 09:45:42 crc kubenswrapper[4972]: E1121 09:45:42.291354 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kr6sk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-pnsjx_openshift-marketplace(6136a605-ff46-4462-808b-cc8d2c28faea): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 21 09:45:42 crc kubenswrapper[4972]: E1121 09:45:42.292723 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-pnsjx" podUID="6136a605-ff46-4462-808b-cc8d2c28faea" Nov 21 09:45:42 crc kubenswrapper[4972]: E1121 09:45:42.427029 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 21 09:45:42 crc kubenswrapper[4972]: E1121 09:45:42.427302 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z9s4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-lqggx_openshift-marketplace(6e0ba187-0ec6-40e7-bd83-771510a29a5b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 21 09:45:42 crc kubenswrapper[4972]: E1121 09:45:42.429077 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-lqggx" podUID="6e0ba187-0ec6-40e7-bd83-771510a29a5b" Nov 21 09:45:42 crc kubenswrapper[4972]: W1121 09:45:42.513881 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-4878a3885dc587271ca26f195b81955d5c875418c9ef49dad477572f64eb20b8 WatchSource:0}: Error finding container 4878a3885dc587271ca26f195b81955d5c875418c9ef49dad477572f64eb20b8: Status 404 returned error can't find the container with id 4878a3885dc587271ca26f195b81955d5c875418c9ef49dad477572f64eb20b8 Nov 21 09:45:42 crc kubenswrapper[4972]: I1121 09:45:42.558456 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5"] Nov 21 09:45:42 crc kubenswrapper[4972]: W1121 09:45:42.574504 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46165fda_5884_4b43_b8fd_533eec95f753.slice/crio-e23d30ae6333c9290117ea6a1e861c89bdd32c41c4d91863b007bc35325c9d39 WatchSource:0}: Error finding container e23d30ae6333c9290117ea6a1e861c89bdd32c41c4d91863b007bc35325c9d39: Status 404 returned error can't find the container with id e23d30ae6333c9290117ea6a1e861c89bdd32c41c4d91863b007bc35325c9d39 Nov 21 09:45:42 crc kubenswrapper[4972]: E1121 09:45:42.586711 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 21 
09:45:42 crc kubenswrapper[4972]: E1121 09:45:42.587170 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-psklv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rc758_openshift-marketplace(1b43815a-969e-432e-ac57-843bee51860c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 21 09:45:42 crc kubenswrapper[4972]: E1121 09:45:42.588353 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rc758" podUID="1b43815a-969e-432e-ac57-843bee51860c" Nov 21 09:45:42 crc kubenswrapper[4972]: W1121 09:45:42.591068 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-749848d508da857d17a4ccbca88429c466a51facab5482c85aeb70ac91fe5644 WatchSource:0}: Error finding container 749848d508da857d17a4ccbca88429c466a51facab5482c85aeb70ac91fe5644: Status 404 returned error can't find the container with id 749848d508da857d17a4ccbca88429c466a51facab5482c85aeb70ac91fe5644 Nov 21 09:45:42 crc kubenswrapper[4972]: I1121 09:45:42.900920 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"40c74353a4c642a35641b7798031c86ca3d02231f11515bc627bbb25c1837507"} Nov 21 09:45:42 crc kubenswrapper[4972]: I1121 09:45:42.901196 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4878a3885dc587271ca26f195b81955d5c875418c9ef49dad477572f64eb20b8"} 
Nov 21 09:45:42 crc kubenswrapper[4972]: I1121 09:45:42.902300 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a1bb1b532a563916e85c478d44824822048910b1bee5e7b0becd0ed00265e70a"} Nov 21 09:45:42 crc kubenswrapper[4972]: I1121 09:45:42.902323 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"749848d508da857d17a4ccbca88429c466a51facab5482c85aeb70ac91fe5644"} Nov 21 09:45:42 crc kubenswrapper[4972]: I1121 09:45:42.906407 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5" event={"ID":"46165fda-5884-4b43-b8fd-533eec95f753","Type":"ContainerStarted","Data":"3fd33b2e5d75e40284819912341bede747a8a0b6687db9b6f0b96da3ad485d06"} Nov 21 09:45:42 crc kubenswrapper[4972]: I1121 09:45:42.906457 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5" event={"ID":"46165fda-5884-4b43-b8fd-533eec95f753","Type":"ContainerStarted","Data":"e23d30ae6333c9290117ea6a1e861c89bdd32c41c4d91863b007bc35325c9d39"} Nov 21 09:45:42 crc kubenswrapper[4972]: I1121 09:45:42.907926 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"4d6ef47244cd721a2f376762ea2eeca1f7022ab7431ea40b087c23a5af7850eb"} Nov 21 09:45:42 crc kubenswrapper[4972]: I1121 09:45:42.910029 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"59bc15af7d0e0349b15cc510c9ff86605fc88e9f4f3557fa642ee5d54391dffb"} Nov 21 09:45:42 crc kubenswrapper[4972]: I1121 09:45:42.910051 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"bff2fea0c9f0589aa1b623969db54dfcf6da5032cc1603b6b87af3a439cdbcf8"} Nov 21 09:45:42 crc kubenswrapper[4972]: E1121 09:45:42.911530 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-7ccfh" podUID="49b866e2-c40e-4b45-acfc-965161cabf5c" Nov 21 09:45:42 crc kubenswrapper[4972]: E1121 09:45:42.911790 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rc758" podUID="1b43815a-969e-432e-ac57-843bee51860c" Nov 21 09:45:42 crc kubenswrapper[4972]: E1121 09:45:42.912001 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-pnsjx" 
podUID="6136a605-ff46-4462-808b-cc8d2c28faea" Nov 21 09:45:42 crc kubenswrapper[4972]: E1121 09:45:42.912203 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-lqggx" podUID="6e0ba187-0ec6-40e7-bd83-771510a29a5b" Nov 21 09:45:42 crc kubenswrapper[4972]: E1121 09:45:42.912266 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-58kms" podUID="882787b1-4df4-446b-972f-8a07c4eb5782" Nov 21 09:45:43 crc kubenswrapper[4972]: I1121 09:45:43.920613 4972 generic.go:334] "Generic (PLEG): container finished" podID="46165fda-5884-4b43-b8fd-533eec95f753" containerID="3fd33b2e5d75e40284819912341bede747a8a0b6687db9b6f0b96da3ad485d06" exitCode=0 Nov 21 09:45:43 crc kubenswrapper[4972]: I1121 09:45:43.920665 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5" event={"ID":"46165fda-5884-4b43-b8fd-533eec95f753","Type":"ContainerDied","Data":"3fd33b2e5d75e40284819912341bede747a8a0b6687db9b6f0b96da3ad485d06"} Nov 21 09:45:43 crc kubenswrapper[4972]: I1121 09:45:43.921868 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:45:45 crc kubenswrapper[4972]: I1121 09:45:45.181307 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5" Nov 21 09:45:45 crc kubenswrapper[4972]: I1121 09:45:45.308481 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lhtk\" (UniqueName: \"kubernetes.io/projected/46165fda-5884-4b43-b8fd-533eec95f753-kube-api-access-9lhtk\") pod \"46165fda-5884-4b43-b8fd-533eec95f753\" (UID: \"46165fda-5884-4b43-b8fd-533eec95f753\") " Nov 21 09:45:45 crc kubenswrapper[4972]: I1121 09:45:45.308648 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46165fda-5884-4b43-b8fd-533eec95f753-config-volume\") pod \"46165fda-5884-4b43-b8fd-533eec95f753\" (UID: \"46165fda-5884-4b43-b8fd-533eec95f753\") " Nov 21 09:45:45 crc kubenswrapper[4972]: I1121 09:45:45.308681 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46165fda-5884-4b43-b8fd-533eec95f753-secret-volume\") pod \"46165fda-5884-4b43-b8fd-533eec95f753\" (UID: \"46165fda-5884-4b43-b8fd-533eec95f753\") " Nov 21 09:45:45 crc kubenswrapper[4972]: I1121 09:45:45.309978 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46165fda-5884-4b43-b8fd-533eec95f753-config-volume" (OuterVolumeSpecName: "config-volume") pod "46165fda-5884-4b43-b8fd-533eec95f753" (UID: "46165fda-5884-4b43-b8fd-533eec95f753"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:45:45 crc kubenswrapper[4972]: I1121 09:45:45.328084 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46165fda-5884-4b43-b8fd-533eec95f753-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "46165fda-5884-4b43-b8fd-533eec95f753" (UID: "46165fda-5884-4b43-b8fd-533eec95f753"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:45:45 crc kubenswrapper[4972]: I1121 09:45:45.334341 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46165fda-5884-4b43-b8fd-533eec95f753-kube-api-access-9lhtk" (OuterVolumeSpecName: "kube-api-access-9lhtk") pod "46165fda-5884-4b43-b8fd-533eec95f753" (UID: "46165fda-5884-4b43-b8fd-533eec95f753"). InnerVolumeSpecName "kube-api-access-9lhtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:45:45 crc kubenswrapper[4972]: I1121 09:45:45.410891 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lhtk\" (UniqueName: \"kubernetes.io/projected/46165fda-5884-4b43-b8fd-533eec95f753-kube-api-access-9lhtk\") on node \"crc\" DevicePath \"\"" Nov 21 09:45:45 crc kubenswrapper[4972]: I1121 09:45:45.410964 4972 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46165fda-5884-4b43-b8fd-533eec95f753-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 09:45:45 crc kubenswrapper[4972]: I1121 09:45:45.410981 4972 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46165fda-5884-4b43-b8fd-533eec95f753-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 09:45:45 crc kubenswrapper[4972]: I1121 09:45:45.934548 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5" event={"ID":"46165fda-5884-4b43-b8fd-533eec95f753","Type":"ContainerDied","Data":"e23d30ae6333c9290117ea6a1e861c89bdd32c41c4d91863b007bc35325c9d39"} Nov 21 09:45:45 crc kubenswrapper[4972]: I1121 09:45:45.934962 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e23d30ae6333c9290117ea6a1e861c89bdd32c41c4d91863b007bc35325c9d39" Nov 21 09:45:45 crc kubenswrapper[4972]: I1121 09:45:45.934628 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5" Nov 21 09:45:49 crc kubenswrapper[4972]: I1121 09:45:49.961077 4972 generic.go:334] "Generic (PLEG): container finished" podID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerID="2d86e4252612fd3cca7f54cab816ca60fc7ef49a9131b25b43824384ca3b3841" exitCode=0 Nov 21 09:45:49 crc kubenswrapper[4972]: I1121 09:45:49.961146 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-swwr5" event={"ID":"67ed0332-55cb-41e1-8a15-4e497706e00d","Type":"ContainerDied","Data":"2d86e4252612fd3cca7f54cab816ca60fc7ef49a9131b25b43824384ca3b3841"} Nov 21 09:45:49 crc kubenswrapper[4972]: I1121 09:45:49.961675 4972 scope.go:117] "RemoveContainer" containerID="ffbcce0376856f19fa35c76542b1a93d34469bb8f644efcbb58b87ccbf0af1cf" Nov 21 09:45:50 crc kubenswrapper[4972]: I1121 09:45:50.884756 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:45:50 crc kubenswrapper[4972]: I1121 09:45:50.885095 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:45:54 crc kubenswrapper[4972]: I1121 09:45:54.986958 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-swwr5" event={"ID":"67ed0332-55cb-41e1-8a15-4e497706e00d","Type":"ContainerStarted","Data":"714fc4e436423989d05ce89b317e30643d0660eb62de1860ca181df0344e7578"} Nov 21 09:45:55 crc kubenswrapper[4972]: I1121 09:45:55.488746 4972 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Nov 21 09:45:55 crc kubenswrapper[4972]: I1121 09:45:55.994316 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-swwr5" Nov 21 09:45:55 crc kubenswrapper[4972]: I1121 09:45:55.996149 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:45:55 crc kubenswrapper[4972]: I1121 09:45:55.996251 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:45:56 crc kubenswrapper[4972]: I1121 09:45:56.998516 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:45:56 crc kubenswrapper[4972]: I1121 09:45:56.998571 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:46:00 crc kubenswrapper[4972]: I1121 09:46:00.884768 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:46:00 crc kubenswrapper[4972]: I1121 09:46:00.885342 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:46:00 crc kubenswrapper[4972]: I1121 09:46:00.884878 4972 patch_prober.go:28] interesting pod/downloads-7954f5f757-swwr5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 21 09:46:00 crc kubenswrapper[4972]: I1121 09:46:00.885440 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-swwr5" podUID="67ed0332-55cb-41e1-8a15-4e497706e00d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 21 09:46:10 crc kubenswrapper[4972]: I1121 09:46:10.905324 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-swwr5" Nov 21 09:46:11 crc kubenswrapper[4972]: I1121 09:46:11.923034 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-w2c2r"] Nov 21 09:46:18 crc kubenswrapper[4972]: I1121 09:46:17.999149 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 09:46:36 crc kubenswrapper[4972]: I1121 09:46:36.966333 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" podUID="e9280ad8-85ad-4faa-a025-a021e417e522" containerName="oauth-openshift" containerID="cri-o://86e755517447eba118194bf20ff2979ffb82a3d78a41b59b05698c90448ab299" gracePeriod=15 Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.237248 4972 generic.go:334] "Generic (PLEG): container finished" podID="e9280ad8-85ad-4faa-a025-a021e417e522" containerID="86e755517447eba118194bf20ff2979ffb82a3d78a41b59b05698c90448ab299" exitCode=0 Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.237288 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" event={"ID":"e9280ad8-85ad-4faa-a025-a021e417e522","Type":"ContainerDied","Data":"86e755517447eba118194bf20ff2979ffb82a3d78a41b59b05698c90448ab299"} Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.404565 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.439445 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-router-certs\") pod \"e9280ad8-85ad-4faa-a025-a021e417e522\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.441001 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g"] Nov 21 09:46:37 crc kubenswrapper[4972]: E1121 09:46:37.441282 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46165fda-5884-4b43-b8fd-533eec95f753" containerName="collect-profiles" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.441302 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="46165fda-5884-4b43-b8fd-533eec95f753" containerName="collect-profiles" Nov 21 09:46:37 crc kubenswrapper[4972]: E1121 09:46:37.441323 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9280ad8-85ad-4faa-a025-a021e417e522" containerName="oauth-openshift" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.441333 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9280ad8-85ad-4faa-a025-a021e417e522" containerName="oauth-openshift" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.441466 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9280ad8-85ad-4faa-a025-a021e417e522" containerName="oauth-openshift" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.441489 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="46165fda-5884-4b43-b8fd-533eec95f753" containerName="collect-profiles" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.442100 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.449893 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e9280ad8-85ad-4faa-a025-a021e417e522" (UID: "e9280ad8-85ad-4faa-a025-a021e417e522"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.454023 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g"] Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.540302 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-cliconfig\") pod \"e9280ad8-85ad-4faa-a025-a021e417e522\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.540687 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-service-ca\") pod \"e9280ad8-85ad-4faa-a025-a021e417e522\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.540745 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-serving-cert\") pod \"e9280ad8-85ad-4faa-a025-a021e417e522\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.540822 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-ocp-branding-template\") pod \"e9280ad8-85ad-4faa-a025-a021e417e522\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.540868 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-template-error\") pod \"e9280ad8-85ad-4faa-a025-a021e417e522\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.540897 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl4g2\" (UniqueName: \"kubernetes.io/projected/e9280ad8-85ad-4faa-a025-a021e417e522-kube-api-access-kl4g2\") pod \"e9280ad8-85ad-4faa-a025-a021e417e522\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.540906 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e9280ad8-85ad-4faa-a025-a021e417e522" (UID: "e9280ad8-85ad-4faa-a025-a021e417e522"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.541162 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e9280ad8-85ad-4faa-a025-a021e417e522" (UID: "e9280ad8-85ad-4faa-a025-a021e417e522"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.541351 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-idp-0-file-data\") pod \"e9280ad8-85ad-4faa-a025-a021e417e522\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.541388 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9280ad8-85ad-4faa-a025-a021e417e522-audit-dir\") pod \"e9280ad8-85ad-4faa-a025-a021e417e522\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.541416 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-trusted-ca-bundle\") pod \"e9280ad8-85ad-4faa-a025-a021e417e522\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.541459 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-session\") pod \"e9280ad8-85ad-4faa-a025-a021e417e522\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.541486 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-template-login\") pod \"e9280ad8-85ad-4faa-a025-a021e417e522\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.541516 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-audit-policies\") pod \"e9280ad8-85ad-4faa-a025-a021e417e522\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.541550 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-template-provider-selection\") pod \"e9280ad8-85ad-4faa-a025-a021e417e522\" (UID: \"e9280ad8-85ad-4faa-a025-a021e417e522\") " Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.541453 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9280ad8-85ad-4faa-a025-a021e417e522-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e9280ad8-85ad-4faa-a025-a021e417e522" (UID: "e9280ad8-85ad-4faa-a025-a021e417e522"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.541918 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e9280ad8-85ad-4faa-a025-a021e417e522" (UID: "e9280ad8-85ad-4faa-a025-a021e417e522"). 
InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.541976 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e9280ad8-85ad-4faa-a025-a021e417e522" (UID: "e9280ad8-85ad-4faa-a025-a021e417e522"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.542184 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7d9s\" (UniqueName: \"kubernetes.io/projected/77e8d5de-5468-46b2-be4b-671f00a4efdd-kube-api-access-q7d9s\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.542231 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.542367 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.542447 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.542484 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.542608 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.542642 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.542670 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-system-session\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.542695 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-user-template-login\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.542725 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/77e8d5de-5468-46b2-be4b-671f00a4efdd-audit-policies\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.542754 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/77e8d5de-5468-46b2-be4b-671f00a4efdd-audit-dir\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.542857 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.542907 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.543021 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-user-template-error\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.543081 4972 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.543098 4972 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9280ad8-85ad-4faa-a025-a021e417e522-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.543113 4972 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.543124 4972 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.543135 4972 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.543145 4972 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.545126 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e9280ad8-85ad-4faa-a025-a021e417e522" (UID: "e9280ad8-85ad-4faa-a025-a021e417e522"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.545185 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9280ad8-85ad-4faa-a025-a021e417e522-kube-api-access-kl4g2" (OuterVolumeSpecName: "kube-api-access-kl4g2") pod "e9280ad8-85ad-4faa-a025-a021e417e522" (UID: "e9280ad8-85ad-4faa-a025-a021e417e522"). InnerVolumeSpecName "kube-api-access-kl4g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.545582 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e9280ad8-85ad-4faa-a025-a021e417e522" (UID: "e9280ad8-85ad-4faa-a025-a021e417e522"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.545819 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e9280ad8-85ad-4faa-a025-a021e417e522" (UID: "e9280ad8-85ad-4faa-a025-a021e417e522"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.546165 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e9280ad8-85ad-4faa-a025-a021e417e522" (UID: "e9280ad8-85ad-4faa-a025-a021e417e522"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.546302 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e9280ad8-85ad-4faa-a025-a021e417e522" (UID: "e9280ad8-85ad-4faa-a025-a021e417e522"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.546555 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e9280ad8-85ad-4faa-a025-a021e417e522" (UID: "e9280ad8-85ad-4faa-a025-a021e417e522"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.548614 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e9280ad8-85ad-4faa-a025-a021e417e522" (UID: "e9280ad8-85ad-4faa-a025-a021e417e522"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.644645 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.644702 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.644724 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-user-template-error\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.644760 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7d9s\" (UniqueName: \"kubernetes.io/projected/77e8d5de-5468-46b2-be4b-671f00a4efdd-kube-api-access-q7d9s\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.644802 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.644838 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.644899 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.644918 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: 
\"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.644943 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.644959 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.644977 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-user-template-login\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.644993 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-system-session\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.645011 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/77e8d5de-5468-46b2-be4b-671f00a4efdd-audit-policies\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.645031 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/77e8d5de-5468-46b2-be4b-671f00a4efdd-audit-dir\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.645071 4972 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.645081 4972 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.645092 4972 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-template-login\") 
on node \"crc\" DevicePath \"\"" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.645103 4972 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.645113 4972 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.645122 4972 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.645132 4972 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9280ad8-85ad-4faa-a025-a021e417e522-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.645142 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl4g2\" (UniqueName: \"kubernetes.io/projected/e9280ad8-85ad-4faa-a025-a021e417e522-kube-api-access-kl4g2\") on node \"crc\" DevicePath \"\"" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.645184 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/77e8d5de-5468-46b2-be4b-671f00a4efdd-audit-dir\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.646905 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.646945 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.646739 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/77e8d5de-5468-46b2-be4b-671f00a4efdd-audit-policies\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.647891 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" 
(UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.650491 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-user-template-login\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.650508 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-user-template-error\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.650673 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.651161 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.651511 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.652217 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-system-session\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.652922 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.654758 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/77e8d5de-5468-46b2-be4b-671f00a4efdd-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: 
\"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.664733 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7d9s\" (UniqueName: \"kubernetes.io/projected/77e8d5de-5468-46b2-be4b-671f00a4efdd-kube-api-access-q7d9s\") pod \"oauth-openshift-7d9bbcf4d4-8dh8g\" (UID: \"77e8d5de-5468-46b2-be4b-671f00a4efdd\") " pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:37 crc kubenswrapper[4972]: I1121 09:46:37.782226 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:38 crc kubenswrapper[4972]: I1121 09:46:38.070869 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g"] Nov 21 09:46:38 crc kubenswrapper[4972]: W1121 09:46:38.073785 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77e8d5de_5468_46b2_be4b_671f00a4efdd.slice/crio-525864a59c7e681a9177efb19a1bda02d0e4868a7ee4a0e6bbb36c9c9a412ec0 WatchSource:0}: Error finding container 525864a59c7e681a9177efb19a1bda02d0e4868a7ee4a0e6bbb36c9c9a412ec0: Status 404 returned error can't find the container with id 525864a59c7e681a9177efb19a1bda02d0e4868a7ee4a0e6bbb36c9c9a412ec0 Nov 21 09:46:38 crc kubenswrapper[4972]: I1121 09:46:38.243102 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" event={"ID":"77e8d5de-5468-46b2-be4b-671f00a4efdd","Type":"ContainerStarted","Data":"525864a59c7e681a9177efb19a1bda02d0e4868a7ee4a0e6bbb36c9c9a412ec0"} Nov 21 09:46:38 crc kubenswrapper[4972]: I1121 09:46:38.244346 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" event={"ID":"e9280ad8-85ad-4faa-a025-a021e417e522","Type":"ContainerDied","Data":"c96e3aaacc68e893520835b64cd129d2f8e0f79567aad007cc9e931313968197"} Nov 21 09:46:38 crc kubenswrapper[4972]: I1121 09:46:38.244377 4972 scope.go:117] "RemoveContainer" containerID="86e755517447eba118194bf20ff2979ffb82a3d78a41b59b05698c90448ab299" Nov 21 09:46:38 crc kubenswrapper[4972]: I1121 09:46:38.244424 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-w2c2r" Nov 21 09:46:38 crc kubenswrapper[4972]: I1121 09:46:38.273615 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-w2c2r"] Nov 21 09:46:38 crc kubenswrapper[4972]: I1121 09:46:38.276995 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-w2c2r"] Nov 21 09:46:39 crc kubenswrapper[4972]: I1121 09:46:39.251094 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" event={"ID":"77e8d5de-5468-46b2-be4b-671f00a4efdd","Type":"ContainerStarted","Data":"c8e7bd34a30b8a7f5c32889e43c6b306f9b076b4b687f6ec92f6062c7ebdc143"} Nov 21 09:46:39 crc kubenswrapper[4972]: I1121 09:46:39.253027 4972 generic.go:334] "Generic (PLEG): container finished" podID="49b866e2-c40e-4b45-acfc-965161cabf5c" containerID="b8cad041f204fc392bdda02c49c4b4a63df19ea23f7fe70985a105ba73cf9c2e" exitCode=0 Nov 21 09:46:39 crc kubenswrapper[4972]: I1121 09:46:39.253148 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ccfh" event={"ID":"49b866e2-c40e-4b45-acfc-965161cabf5c","Type":"ContainerDied","Data":"b8cad041f204fc392bdda02c49c4b4a63df19ea23f7fe70985a105ba73cf9c2e"} Nov 21 09:46:39 crc kubenswrapper[4972]: I1121 09:46:39.254805 4972 generic.go:334] "Generic (PLEG): container finished" podID="ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827" containerID="76085ae4fc2113abe48f13834dd3fd4070d5afc04394a60ae0d424426f729c36" exitCode=0 Nov 21 09:46:39 crc kubenswrapper[4972]: I1121 09:46:39.254859 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hh4hc" event={"ID":"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827","Type":"ContainerDied","Data":"76085ae4fc2113abe48f13834dd3fd4070d5afc04394a60ae0d424426f729c36"} Nov 21 09:46:39 crc kubenswrapper[4972]: I1121 09:46:39.257550 4972 generic.go:334] "Generic (PLEG): container finished" podID="1b43815a-969e-432e-ac57-843bee51860c" containerID="fc0bfeb89144b4d6afeeccff9278f47f3531b230319a6c8361078dd65e24f163" exitCode=0 Nov 21 09:46:39 crc kubenswrapper[4972]: I1121 09:46:39.257605 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rc758" event={"ID":"1b43815a-969e-432e-ac57-843bee51860c","Type":"ContainerDied","Data":"fc0bfeb89144b4d6afeeccff9278f47f3531b230319a6c8361078dd65e24f163"} Nov 21 09:46:39 crc kubenswrapper[4972]: I1121 09:46:39.259817 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pnsjx" event={"ID":"6136a605-ff46-4462-808b-cc8d2c28faea","Type":"ContainerStarted","Data":"b25837c6f832307eae17e525b0876de7db2f37014a64397f6b2f211e33c846e8"} Nov 21 09:46:39 crc kubenswrapper[4972]: I1121 09:46:39.263919 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4t9l" event={"ID":"13ef553c-f6bd-4af2-9c0e-643cd14f9290","Type":"ContainerStarted","Data":"3395a5058580693508b56211e0518080e51302f98c0a98f51e073d0c60f46f53"} Nov 21 09:46:39 crc kubenswrapper[4972]: I1121 09:46:39.266258 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqvm8" event={"ID":"9a2865e3-5706-4a03-8529-571895dde1ea","Type":"ContainerStarted","Data":"149f468f66ca305b117eb4dc829ecbfd08e9841d5f6a3297ea586cc71b9fb585"} Nov 21 09:46:39 crc kubenswrapper[4972]: I1121 09:46:39.268175 4972 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqggx" event={"ID":"6e0ba187-0ec6-40e7-bd83-771510a29a5b","Type":"ContainerStarted","Data":"446681fd97cb1d29fadfa51b3195e8d5d9fd353ada4a8d24982aaa75f673adc2"} Nov 21 09:46:39 crc kubenswrapper[4972]: I1121 09:46:39.270230 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58kms" event={"ID":"882787b1-4df4-446b-972f-8a07c4eb5782","Type":"ContainerStarted","Data":"4f9f79ece83905f2497dd6f333a2e3e52e9b1474af918d6db528c5f9458fd61c"} Nov 21 09:46:39 crc kubenswrapper[4972]: I1121 09:46:39.771318 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9280ad8-85ad-4faa-a025-a021e417e522" path="/var/lib/kubelet/pods/e9280ad8-85ad-4faa-a025-a021e417e522/volumes" Nov 21 09:46:40 crc kubenswrapper[4972]: I1121 09:46:40.284215 4972 generic.go:334] "Generic (PLEG): container finished" podID="6e0ba187-0ec6-40e7-bd83-771510a29a5b" containerID="446681fd97cb1d29fadfa51b3195e8d5d9fd353ada4a8d24982aaa75f673adc2" exitCode=0 Nov 21 09:46:40 crc kubenswrapper[4972]: I1121 09:46:40.284656 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqggx" event={"ID":"6e0ba187-0ec6-40e7-bd83-771510a29a5b","Type":"ContainerDied","Data":"446681fd97cb1d29fadfa51b3195e8d5d9fd353ada4a8d24982aaa75f673adc2"} Nov 21 09:46:40 crc kubenswrapper[4972]: I1121 09:46:40.295437 4972 generic.go:334] "Generic (PLEG): container finished" podID="882787b1-4df4-446b-972f-8a07c4eb5782" containerID="4f9f79ece83905f2497dd6f333a2e3e52e9b1474af918d6db528c5f9458fd61c" exitCode=0 Nov 21 09:46:40 crc kubenswrapper[4972]: I1121 09:46:40.295525 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58kms" event={"ID":"882787b1-4df4-446b-972f-8a07c4eb5782","Type":"ContainerDied","Data":"4f9f79ece83905f2497dd6f333a2e3e52e9b1474af918d6db528c5f9458fd61c"} Nov 21 09:46:40 crc kubenswrapper[4972]: I1121 09:46:40.300635 4972 generic.go:334] "Generic (PLEG): container finished" podID="6136a605-ff46-4462-808b-cc8d2c28faea" containerID="b25837c6f832307eae17e525b0876de7db2f37014a64397f6b2f211e33c846e8" exitCode=0 Nov 21 09:46:40 crc kubenswrapper[4972]: I1121 09:46:40.300709 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pnsjx" event={"ID":"6136a605-ff46-4462-808b-cc8d2c28faea","Type":"ContainerDied","Data":"b25837c6f832307eae17e525b0876de7db2f37014a64397f6b2f211e33c846e8"} Nov 21 09:46:40 crc kubenswrapper[4972]: I1121 09:46:40.305443 4972 generic.go:334] "Generic (PLEG): container finished" podID="13ef553c-f6bd-4af2-9c0e-643cd14f9290" containerID="3395a5058580693508b56211e0518080e51302f98c0a98f51e073d0c60f46f53" exitCode=0 Nov 21 09:46:40 crc kubenswrapper[4972]: I1121 09:46:40.305496 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4t9l" event={"ID":"13ef553c-f6bd-4af2-9c0e-643cd14f9290","Type":"ContainerDied","Data":"3395a5058580693508b56211e0518080e51302f98c0a98f51e073d0c60f46f53"} Nov 21 09:46:40 crc kubenswrapper[4972]: I1121 09:46:40.310924 4972 generic.go:334] "Generic (PLEG): container finished" podID="9a2865e3-5706-4a03-8529-571895dde1ea" containerID="149f468f66ca305b117eb4dc829ecbfd08e9841d5f6a3297ea586cc71b9fb585" exitCode=0 Nov 21 09:46:40 crc kubenswrapper[4972]: I1121 09:46:40.311794 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-sqvm8" event={"ID":"9a2865e3-5706-4a03-8529-571895dde1ea","Type":"ContainerDied","Data":"149f468f66ca305b117eb4dc829ecbfd08e9841d5f6a3297ea586cc71b9fb585"} Nov 21 09:46:40 crc kubenswrapper[4972]: I1121 09:46:40.311846 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:40 crc kubenswrapper[4972]: I1121 09:46:40.323433 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" Nov 21 09:46:40 crc kubenswrapper[4972]: I1121 09:46:40.421315 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7d9bbcf4d4-8dh8g" podStartSLOduration=29.421292219 podStartE2EDuration="29.421292219s" podCreationTimestamp="2025-11-21 09:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:46:40.419681615 +0000 UTC m=+345.528824133" watchObservedRunningTime="2025-11-21 09:46:40.421292219 +0000 UTC m=+345.530434727" Nov 21 09:46:47 crc kubenswrapper[4972]: I1121 09:46:47.360481 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ccfh" event={"ID":"49b866e2-c40e-4b45-acfc-965161cabf5c","Type":"ContainerStarted","Data":"034b9b6e52be1d31f33ce6ffb45465b3c565dcbc80c920e0482696833128a6f9"} Nov 21 09:46:49 crc kubenswrapper[4972]: I1121 09:46:49.386989 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7ccfh" podStartSLOduration=26.236330695 podStartE2EDuration="3m8.386972336s" podCreationTimestamp="2025-11-21 09:43:41 +0000 UTC" firstStartedPulling="2025-11-21 09:44:02.288479312 +0000 UTC m=+187.397621810" lastFinishedPulling="2025-11-21 09:46:44.439120933 +0000 UTC m=+349.548263451" observedRunningTime="2025-11-21 09:46:49.384400026 +0000 UTC m=+354.493542524" watchObservedRunningTime="2025-11-21 09:46:49.386972336 +0000 UTC m=+354.496114824" Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.102232 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lqggx"] Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.106698 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pnsjx"] Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.112996 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b4t9l"] Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.121502 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rc758"] Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.128446 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qh9tk"] Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.128676 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" podUID="e4f03066-ed74-40ad-ac94-c9c2d83f648e" containerName="marketplace-operator" containerID="cri-o://6a9eec55fc19202c14c0da5b0c79518d177695b5299d951d0f4fa80a7fe830c6" gracePeriod=30 Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.134354 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-7ccfh"] Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.134657 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7ccfh" podUID="49b866e2-c40e-4b45-acfc-965161cabf5c" containerName="registry-server" containerID="cri-o://034b9b6e52be1d31f33ce6ffb45465b3c565dcbc80c920e0482696833128a6f9" gracePeriod=30 Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.141639 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hh4hc"] Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.148413 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tlh2t"] Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.149284 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-58kms"] Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.149369 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tlh2t" Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.159701 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sqvm8"] Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.162299 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tlh2t"] Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.239238 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7ccfh" Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.347089 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljhnb\" (UniqueName: \"kubernetes.io/projected/a2f7c374-4f03-452f-aaa2-a3ded791d552-kube-api-access-ljhnb\") pod \"marketplace-operator-79b997595-tlh2t\" (UID: \"a2f7c374-4f03-452f-aaa2-a3ded791d552\") " pod="openshift-marketplace/marketplace-operator-79b997595-tlh2t" Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.347648 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a2f7c374-4f03-452f-aaa2-a3ded791d552-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tlh2t\" (UID: \"a2f7c374-4f03-452f-aaa2-a3ded791d552\") " pod="openshift-marketplace/marketplace-operator-79b997595-tlh2t" Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.347787 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2f7c374-4f03-452f-aaa2-a3ded791d552-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tlh2t\" (UID: \"a2f7c374-4f03-452f-aaa2-a3ded791d552\") " pod="openshift-marketplace/marketplace-operator-79b997595-tlh2t" Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.448730 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a2f7c374-4f03-452f-aaa2-a3ded791d552-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tlh2t\" (UID: \"a2f7c374-4f03-452f-aaa2-a3ded791d552\") " pod="openshift-marketplace/marketplace-operator-79b997595-tlh2t" Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 
09:46:52.448792 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2f7c374-4f03-452f-aaa2-a3ded791d552-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tlh2t\" (UID: \"a2f7c374-4f03-452f-aaa2-a3ded791d552\") " pod="openshift-marketplace/marketplace-operator-79b997595-tlh2t" Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.448830 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljhnb\" (UniqueName: \"kubernetes.io/projected/a2f7c374-4f03-452f-aaa2-a3ded791d552-kube-api-access-ljhnb\") pod \"marketplace-operator-79b997595-tlh2t\" (UID: \"a2f7c374-4f03-452f-aaa2-a3ded791d552\") " pod="openshift-marketplace/marketplace-operator-79b997595-tlh2t" Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.450020 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2f7c374-4f03-452f-aaa2-a3ded791d552-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tlh2t\" (UID: \"a2f7c374-4f03-452f-aaa2-a3ded791d552\") " pod="openshift-marketplace/marketplace-operator-79b997595-tlh2t" Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.455398 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a2f7c374-4f03-452f-aaa2-a3ded791d552-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tlh2t\" (UID: \"a2f7c374-4f03-452f-aaa2-a3ded791d552\") " pod="openshift-marketplace/marketplace-operator-79b997595-tlh2t" Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.467621 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljhnb\" (UniqueName: \"kubernetes.io/projected/a2f7c374-4f03-452f-aaa2-a3ded791d552-kube-api-access-ljhnb\") pod \"marketplace-operator-79b997595-tlh2t\" (UID: \"a2f7c374-4f03-452f-aaa2-a3ded791d552\") " pod="openshift-marketplace/marketplace-operator-79b997595-tlh2t" Nov 21 09:46:52 crc kubenswrapper[4972]: I1121 09:46:52.766422 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tlh2t" Nov 21 09:46:53 crc kubenswrapper[4972]: I1121 09:46:53.392419 4972 generic.go:334] "Generic (PLEG): container finished" podID="e4f03066-ed74-40ad-ac94-c9c2d83f648e" containerID="6a9eec55fc19202c14c0da5b0c79518d177695b5299d951d0f4fa80a7fe830c6" exitCode=0 Nov 21 09:46:53 crc kubenswrapper[4972]: I1121 09:46:53.392611 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" event={"ID":"e4f03066-ed74-40ad-ac94-c9c2d83f648e","Type":"ContainerDied","Data":"6a9eec55fc19202c14c0da5b0c79518d177695b5299d951d0f4fa80a7fe830c6"} Nov 21 09:46:53 crc kubenswrapper[4972]: I1121 09:46:53.395526 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7ccfh_49b866e2-c40e-4b45-acfc-965161cabf5c/registry-server/0.log" Nov 21 09:46:53 crc kubenswrapper[4972]: I1121 09:46:53.396454 4972 generic.go:334] "Generic (PLEG): container finished" podID="49b866e2-c40e-4b45-acfc-965161cabf5c" containerID="034b9b6e52be1d31f33ce6ffb45465b3c565dcbc80c920e0482696833128a6f9" exitCode=2 Nov 21 09:46:53 crc kubenswrapper[4972]: I1121 09:46:53.396495 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ccfh" event={"ID":"49b866e2-c40e-4b45-acfc-965161cabf5c","Type":"ContainerDied","Data":"034b9b6e52be1d31f33ce6ffb45465b3c565dcbc80c920e0482696833128a6f9"} Nov 21 09:47:02 crc kubenswrapper[4972]: I1121 09:47:02.671857 4972 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qh9tk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 21 09:47:02 crc kubenswrapper[4972]: I1121 09:47:02.672380 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" podUID="e4f03066-ed74-40ad-ac94-c9c2d83f648e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 09:47:07 crc kubenswrapper[4972]: I1121 09:47:07.824575 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" Nov 21 09:47:07 crc kubenswrapper[4972]: I1121 09:47:07.829961 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7ccfh_49b866e2-c40e-4b45-acfc-965161cabf5c/registry-server/0.log" Nov 21 09:47:07 crc kubenswrapper[4972]: I1121 09:47:07.830908 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ccfh" Nov 21 09:47:07 crc kubenswrapper[4972]: I1121 09:47:07.912608 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ccfh"] Nov 21 09:47:07 crc kubenswrapper[4972]: I1121 09:47:07.920674 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49b866e2-c40e-4b45-acfc-965161cabf5c-utilities\") pod \"49b866e2-c40e-4b45-acfc-965161cabf5c\" (UID: \"49b866e2-c40e-4b45-acfc-965161cabf5c\") " Nov 21 09:47:07 crc kubenswrapper[4972]: I1121 09:47:07.920848 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e4f03066-ed74-40ad-ac94-c9c2d83f648e-marketplace-operator-metrics\") pod \"e4f03066-ed74-40ad-ac94-c9c2d83f648e\" (UID: \"e4f03066-ed74-40ad-ac94-c9c2d83f648e\") " Nov 21 09:47:07 crc kubenswrapper[4972]: I1121 09:47:07.920934 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4f03066-ed74-40ad-ac94-c9c2d83f648e-marketplace-trusted-ca\") pod \"e4f03066-ed74-40ad-ac94-c9c2d83f648e\" (UID: \"e4f03066-ed74-40ad-ac94-c9c2d83f648e\") " Nov 21 09:47:07 crc kubenswrapper[4972]: I1121 09:47:07.920998 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc6rx\" (UniqueName: \"kubernetes.io/projected/49b866e2-c40e-4b45-acfc-965161cabf5c-kube-api-access-kc6rx\") pod \"49b866e2-c40e-4b45-acfc-965161cabf5c\" (UID: \"49b866e2-c40e-4b45-acfc-965161cabf5c\") " Nov 21 09:47:07 crc kubenswrapper[4972]: I1121 09:47:07.921037 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4vbn\" (UniqueName: \"kubernetes.io/projected/e4f03066-ed74-40ad-ac94-c9c2d83f648e-kube-api-access-r4vbn\") pod \"e4f03066-ed74-40ad-ac94-c9c2d83f648e\" (UID: \"e4f03066-ed74-40ad-ac94-c9c2d83f648e\") " Nov 21 09:47:07 crc kubenswrapper[4972]: I1121 09:47:07.921070 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49b866e2-c40e-4b45-acfc-965161cabf5c-catalog-content\") pod \"49b866e2-c40e-4b45-acfc-965161cabf5c\" (UID: \"49b866e2-c40e-4b45-acfc-965161cabf5c\") " Nov 21 09:47:07 crc kubenswrapper[4972]: I1121 09:47:07.921770 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49b866e2-c40e-4b45-acfc-965161cabf5c-utilities" (OuterVolumeSpecName: "utilities") pod "49b866e2-c40e-4b45-acfc-965161cabf5c" (UID: "49b866e2-c40e-4b45-acfc-965161cabf5c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:47:07 crc kubenswrapper[4972]: I1121 09:47:07.922181 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4f03066-ed74-40ad-ac94-c9c2d83f648e-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "e4f03066-ed74-40ad-ac94-c9c2d83f648e" (UID: "e4f03066-ed74-40ad-ac94-c9c2d83f648e"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:47:07 crc kubenswrapper[4972]: I1121 09:47:07.931859 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49b866e2-c40e-4b45-acfc-965161cabf5c-kube-api-access-kc6rx" (OuterVolumeSpecName: "kube-api-access-kc6rx") pod "49b866e2-c40e-4b45-acfc-965161cabf5c" (UID: "49b866e2-c40e-4b45-acfc-965161cabf5c"). InnerVolumeSpecName "kube-api-access-kc6rx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:47:07 crc kubenswrapper[4972]: I1121 09:47:07.932550 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4f03066-ed74-40ad-ac94-c9c2d83f648e-kube-api-access-r4vbn" (OuterVolumeSpecName: "kube-api-access-r4vbn") pod "e4f03066-ed74-40ad-ac94-c9c2d83f648e" (UID: "e4f03066-ed74-40ad-ac94-c9c2d83f648e"). InnerVolumeSpecName "kube-api-access-r4vbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:47:07 crc kubenswrapper[4972]: I1121 09:47:07.935130 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4f03066-ed74-40ad-ac94-c9c2d83f648e-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "e4f03066-ed74-40ad-ac94-c9c2d83f648e" (UID: "e4f03066-ed74-40ad-ac94-c9c2d83f648e"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:47:07 crc kubenswrapper[4972]: I1121 09:47:07.942588 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49b866e2-c40e-4b45-acfc-965161cabf5c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49b866e2-c40e-4b45-acfc-965161cabf5c" (UID: "49b866e2-c40e-4b45-acfc-965161cabf5c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:47:08 crc kubenswrapper[4972]: I1121 09:47:08.022810 4972 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4f03066-ed74-40ad-ac94-c9c2d83f648e-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:08 crc kubenswrapper[4972]: I1121 09:47:08.022883 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc6rx\" (UniqueName: \"kubernetes.io/projected/49b866e2-c40e-4b45-acfc-965161cabf5c-kube-api-access-kc6rx\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:08 crc kubenswrapper[4972]: I1121 09:47:08.022894 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4vbn\" (UniqueName: \"kubernetes.io/projected/e4f03066-ed74-40ad-ac94-c9c2d83f648e-kube-api-access-r4vbn\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:08 crc kubenswrapper[4972]: I1121 09:47:08.022907 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49b866e2-c40e-4b45-acfc-965161cabf5c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:08 crc kubenswrapper[4972]: I1121 09:47:08.022945 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49b866e2-c40e-4b45-acfc-965161cabf5c-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:08 crc kubenswrapper[4972]: I1121 09:47:08.022961 4972 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e4f03066-ed74-40ad-ac94-c9c2d83f648e-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:08 crc kubenswrapper[4972]: I1121 09:47:08.486066 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" event={"ID":"e4f03066-ed74-40ad-ac94-c9c2d83f648e","Type":"ContainerDied","Data":"a489f56d51a3ed7682db89860a8a2eee2fdcbb5a9d93050e230d4cc355dcdf06"} Nov 21 09:47:08 crc kubenswrapper[4972]: I1121 09:47:08.486116 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qh9tk" Nov 21 09:47:08 crc kubenswrapper[4972]: I1121 09:47:08.486134 4972 scope.go:117] "RemoveContainer" containerID="6a9eec55fc19202c14c0da5b0c79518d177695b5299d951d0f4fa80a7fe830c6" Nov 21 09:47:08 crc kubenswrapper[4972]: I1121 09:47:08.488664 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7ccfh_49b866e2-c40e-4b45-acfc-965161cabf5c/registry-server/0.log" Nov 21 09:47:08 crc kubenswrapper[4972]: I1121 09:47:08.489814 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ccfh" event={"ID":"49b866e2-c40e-4b45-acfc-965161cabf5c","Type":"ContainerDied","Data":"6302ea3281d3d0b5d70d9fde039d97793a814ff243cfc2b3be22f19a74ed6e6f"} Nov 21 09:47:08 crc kubenswrapper[4972]: I1121 09:47:08.490078 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ccfh" Nov 21 09:47:08 crc kubenswrapper[4972]: I1121 09:47:08.550163 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qh9tk"] Nov 21 09:47:08 crc kubenswrapper[4972]: I1121 09:47:08.558456 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qh9tk"] Nov 21 09:47:08 crc kubenswrapper[4972]: I1121 09:47:08.564108 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ccfh"] Nov 21 09:47:08 crc kubenswrapper[4972]: I1121 09:47:08.571158 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ccfh"] Nov 21 09:47:09 crc kubenswrapper[4972]: I1121 09:47:09.767561 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49b866e2-c40e-4b45-acfc-965161cabf5c" path="/var/lib/kubelet/pods/49b866e2-c40e-4b45-acfc-965161cabf5c/volumes" Nov 21 09:47:09 crc kubenswrapper[4972]: I1121 09:47:09.769092 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4f03066-ed74-40ad-ac94-c9c2d83f648e" path="/var/lib/kubelet/pods/e4f03066-ed74-40ad-ac94-c9c2d83f648e/volumes" Nov 21 09:47:14 crc kubenswrapper[4972]: I1121 09:47:14.066422 4972 scope.go:117] "RemoveContainer" containerID="034b9b6e52be1d31f33ce6ffb45465b3c565dcbc80c920e0482696833128a6f9" Nov 21 09:47:14 crc kubenswrapper[4972]: I1121 09:47:14.098589 4972 scope.go:117] "RemoveContainer" containerID="b8cad041f204fc392bdda02c49c4b4a63df19ea23f7fe70985a105ba73cf9c2e" Nov 21 09:47:14 crc kubenswrapper[4972]: I1121 09:47:14.172345 4972 scope.go:117] "RemoveContainer" containerID="bad40d5d1659e669d5ca4093b6cd4204d076b29e247d6215d413276eacfb8158" Nov 21 09:47:14 crc kubenswrapper[4972]: I1121 09:47:14.486280 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tlh2t"] Nov 21 09:47:14 crc kubenswrapper[4972]: W1121 09:47:14.496199 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2f7c374_4f03_452f_aaa2_a3ded791d552.slice/crio-78e82612a1129618e9981989c9cb514a2e46d3d5ea188c31dd991c2777feb975 WatchSource:0}: Error finding container 78e82612a1129618e9981989c9cb514a2e46d3d5ea188c31dd991c2777feb975: Status 404 returned error can't find the container with id 78e82612a1129618e9981989c9cb514a2e46d3d5ea188c31dd991c2777feb975 Nov 21 09:47:14 crc kubenswrapper[4972]: I1121 09:47:14.543945 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tlh2t" event={"ID":"a2f7c374-4f03-452f-aaa2-a3ded791d552","Type":"ContainerStarted","Data":"78e82612a1129618e9981989c9cb514a2e46d3d5ea188c31dd991c2777feb975"} Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.554362 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pnsjx" event={"ID":"6136a605-ff46-4462-808b-cc8d2c28faea","Type":"ContainerStarted","Data":"61f505a38b20b103d6bd9886add10f601fd1849473fae15927043c048c5562fd"} Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.554460 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pnsjx" podUID="6136a605-ff46-4462-808b-cc8d2c28faea" containerName="registry-server" 
containerID="cri-o://61f505a38b20b103d6bd9886add10f601fd1849473fae15927043c048c5562fd" gracePeriod=30 Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.561682 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4t9l" event={"ID":"13ef553c-f6bd-4af2-9c0e-643cd14f9290","Type":"ContainerStarted","Data":"336b35d75264864b407f4dfed054df53c10d5d6ea44d677a933a934e958a3fc4"} Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.561795 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b4t9l" podUID="13ef553c-f6bd-4af2-9c0e-643cd14f9290" containerName="registry-server" containerID="cri-o://336b35d75264864b407f4dfed054df53c10d5d6ea44d677a933a934e958a3fc4" gracePeriod=30 Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.564928 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqvm8" event={"ID":"9a2865e3-5706-4a03-8529-571895dde1ea","Type":"ContainerStarted","Data":"ae74f40aef5a28372e3ea75e39de7a7973701c1e4c9b4e59c1aea07cb7a7e92d"} Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.565112 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sqvm8" podUID="9a2865e3-5706-4a03-8529-571895dde1ea" containerName="registry-server" containerID="cri-o://ae74f40aef5a28372e3ea75e39de7a7973701c1e4c9b4e59c1aea07cb7a7e92d" gracePeriod=30 Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.567675 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqggx" event={"ID":"6e0ba187-0ec6-40e7-bd83-771510a29a5b","Type":"ContainerStarted","Data":"3ba9cab0e0d49d8d58436c60ec9780352df540eabf1e86071b0f68565277a2f4"} Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.567836 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lqggx" podUID="6e0ba187-0ec6-40e7-bd83-771510a29a5b" containerName="registry-server" containerID="cri-o://3ba9cab0e0d49d8d58436c60ec9780352df540eabf1e86071b0f68565277a2f4" gracePeriod=30 Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.578090 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58kms" event={"ID":"882787b1-4df4-446b-972f-8a07c4eb5782","Type":"ContainerStarted","Data":"f3dfeb82b874afa5fcfd17b14dace21c9100d50a33d489a28705a76813e5c4a9"} Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.578150 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-58kms" podUID="882787b1-4df4-446b-972f-8a07c4eb5782" containerName="registry-server" containerID="cri-o://f3dfeb82b874afa5fcfd17b14dace21c9100d50a33d489a28705a76813e5c4a9" gracePeriod=30 Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.580727 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tlh2t" event={"ID":"a2f7c374-4f03-452f-aaa2-a3ded791d552","Type":"ContainerStarted","Data":"309db296985e6199894874dbdf99862f1b166a00f6c9201bcbb35d7864adfdb8"} Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.582440 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tlh2t" Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.584017 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-hh4hc" event={"ID":"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827","Type":"ContainerStarted","Data":"2d24e87558f213839190230bf4712239f7892f4f751dcf9e23f602f2f8801694"} Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.584140 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hh4hc" podUID="ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827" containerName="registry-server" containerID="cri-o://2d24e87558f213839190230bf4712239f7892f4f751dcf9e23f602f2f8801694" gracePeriod=30 Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.586999 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tlh2t" Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.590270 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pnsjx" podStartSLOduration=23.046535184 podStartE2EDuration="3m36.590259493s" podCreationTimestamp="2025-11-21 09:43:39 +0000 UTC" firstStartedPulling="2025-11-21 09:43:59.241283204 +0000 UTC m=+184.350425702" lastFinishedPulling="2025-11-21 09:47:12.785007463 +0000 UTC m=+377.894150011" observedRunningTime="2025-11-21 09:47:15.588521746 +0000 UTC m=+380.697664254" watchObservedRunningTime="2025-11-21 09:47:15.590259493 +0000 UTC m=+380.699401981" Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.597345 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rc758" event={"ID":"1b43815a-969e-432e-ac57-843bee51860c","Type":"ContainerStarted","Data":"727654dca27e94324c5e2a1baa12f4173176df23a68e0e51726eed8b2474d5a3"} Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.597547 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rc758" podUID="1b43815a-969e-432e-ac57-843bee51860c" containerName="registry-server" containerID="cri-o://727654dca27e94324c5e2a1baa12f4173176df23a68e0e51726eed8b2474d5a3" gracePeriod=30 Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.610779 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-58kms" podStartSLOduration=30.393249988 podStartE2EDuration="3m33.610750329s" podCreationTimestamp="2025-11-21 09:43:42 +0000 UTC" firstStartedPulling="2025-11-21 09:44:02.273371683 +0000 UTC m=+187.382514191" lastFinishedPulling="2025-11-21 09:47:05.490872014 +0000 UTC m=+370.600014532" observedRunningTime="2025-11-21 09:47:15.607203713 +0000 UTC m=+380.716346241" watchObservedRunningTime="2025-11-21 09:47:15.610750329 +0000 UTC m=+380.719892827" Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.629113 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b4t9l" podStartSLOduration=21.804127009 podStartE2EDuration="3m36.629082666s" podCreationTimestamp="2025-11-21 09:43:39 +0000 UTC" firstStartedPulling="2025-11-21 09:43:59.241656504 +0000 UTC m=+184.350799002" lastFinishedPulling="2025-11-21 09:47:14.066612161 +0000 UTC m=+379.175754659" observedRunningTime="2025-11-21 09:47:15.627568585 +0000 UTC m=+380.736711093" watchObservedRunningTime="2025-11-21 09:47:15.629082666 +0000 UTC m=+380.738225164" Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.652373 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-hh4hc" podStartSLOduration=22.86408823 podStartE2EDuration="3m34.652336907s" podCreationTimestamp="2025-11-21 09:43:41 +0000 UTC" firstStartedPulling="2025-11-21 09:44:02.285148418 +0000 UTC m=+187.394290916" lastFinishedPulling="2025-11-21 09:47:14.073397105 +0000 UTC m=+379.182539593" observedRunningTime="2025-11-21 09:47:15.645398069 +0000 UTC m=+380.754540587" watchObservedRunningTime="2025-11-21 09:47:15.652336907 +0000 UTC m=+380.761479395" Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.691931 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lqggx" podStartSLOduration=13.051843526 podStartE2EDuration="3m36.691915491s" podCreationTimestamp="2025-11-21 09:43:39 +0000 UTC" firstStartedPulling="2025-11-21 09:43:44.12714117 +0000 UTC m=+169.236283668" lastFinishedPulling="2025-11-21 09:47:07.767213085 +0000 UTC m=+372.876355633" observedRunningTime="2025-11-21 09:47:15.688054486 +0000 UTC m=+380.797197014" watchObservedRunningTime="2025-11-21 09:47:15.691915491 +0000 UTC m=+380.801057989" Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.720232 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tlh2t" podStartSLOduration=23.720191768 podStartE2EDuration="23.720191768s" podCreationTimestamp="2025-11-21 09:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:47:15.710889166 +0000 UTC m=+380.820031684" watchObservedRunningTime="2025-11-21 09:47:15.720191768 +0000 UTC m=+380.829334286" Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.738967 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sqvm8" podStartSLOduration=21.933666528 podStartE2EDuration="3m33.738946147s" podCreationTimestamp="2025-11-21 09:43:42 +0000 UTC" firstStartedPulling="2025-11-21 09:44:02.269265207 +0000 UTC m=+187.378407705" lastFinishedPulling="2025-11-21 09:47:14.074544816 +0000 UTC m=+379.183687324" observedRunningTime="2025-11-21 09:47:15.736367327 +0000 UTC m=+380.845509845" watchObservedRunningTime="2025-11-21 09:47:15.738946147 +0000 UTC m=+380.848088635" Nov 21 09:47:15 crc kubenswrapper[4972]: I1121 09:47:15.758532 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rc758" podStartSLOduration=21.932680656 podStartE2EDuration="3m36.758500477s" podCreationTimestamp="2025-11-21 09:43:39 +0000 UTC" firstStartedPulling="2025-11-21 09:43:59.241093428 +0000 UTC m=+184.350235926" lastFinishedPulling="2025-11-21 09:47:14.066913229 +0000 UTC m=+379.176055747" observedRunningTime="2025-11-21 09:47:15.75418157 +0000 UTC m=+380.863324078" watchObservedRunningTime="2025-11-21 09:47:15.758500477 +0000 UTC m=+380.867642985" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.464357 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b4t9l_13ef553c-f6bd-4af2-9c0e-643cd14f9290/registry-server/0.log" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.465172 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b4t9l" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.541725 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvfxg\" (UniqueName: \"kubernetes.io/projected/13ef553c-f6bd-4af2-9c0e-643cd14f9290-kube-api-access-gvfxg\") pod \"13ef553c-f6bd-4af2-9c0e-643cd14f9290\" (UID: \"13ef553c-f6bd-4af2-9c0e-643cd14f9290\") " Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.541851 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13ef553c-f6bd-4af2-9c0e-643cd14f9290-utilities\") pod \"13ef553c-f6bd-4af2-9c0e-643cd14f9290\" (UID: \"13ef553c-f6bd-4af2-9c0e-643cd14f9290\") " Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.541888 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13ef553c-f6bd-4af2-9c0e-643cd14f9290-catalog-content\") pod \"13ef553c-f6bd-4af2-9c0e-643cd14f9290\" (UID: \"13ef553c-f6bd-4af2-9c0e-643cd14f9290\") " Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.544552 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13ef553c-f6bd-4af2-9c0e-643cd14f9290-utilities" (OuterVolumeSpecName: "utilities") pod "13ef553c-f6bd-4af2-9c0e-643cd14f9290" (UID: "13ef553c-f6bd-4af2-9c0e-643cd14f9290"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.547240 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13ef553c-f6bd-4af2-9c0e-643cd14f9290-kube-api-access-gvfxg" (OuterVolumeSpecName: "kube-api-access-gvfxg") pod "13ef553c-f6bd-4af2-9c0e-643cd14f9290" (UID: "13ef553c-f6bd-4af2-9c0e-643cd14f9290"). InnerVolumeSpecName "kube-api-access-gvfxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.581205 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rc758_1b43815a-969e-432e-ac57-843bee51860c/registry-server/0.log" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.581692 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rc758" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.588126 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lqggx_6e0ba187-0ec6-40e7-bd83-771510a29a5b/registry-server/0.log" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.589532 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lqggx" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.595823 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pnsjx_6136a605-ff46-4462-808b-cc8d2c28faea/registry-server/0.log" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.601452 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13ef553c-f6bd-4af2-9c0e-643cd14f9290-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13ef553c-f6bd-4af2-9c0e-643cd14f9290" (UID: "13ef553c-f6bd-4af2-9c0e-643cd14f9290"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.601791 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pnsjx" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.610279 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sqvm8_9a2865e3-5706-4a03-8529-571895dde1ea/registry-server/0.log" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.611910 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sqvm8" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.622451 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sqvm8_9a2865e3-5706-4a03-8529-571895dde1ea/registry-server/0.log" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.622885 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-58kms_882787b1-4df4-446b-972f-8a07c4eb5782/registry-server/0.log" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.623935 4972 generic.go:334] "Generic (PLEG): container finished" podID="9a2865e3-5706-4a03-8529-571895dde1ea" containerID="ae74f40aef5a28372e3ea75e39de7a7973701c1e4c9b4e59c1aea07cb7a7e92d" exitCode=1 Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.624010 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqvm8" event={"ID":"9a2865e3-5706-4a03-8529-571895dde1ea","Type":"ContainerDied","Data":"ae74f40aef5a28372e3ea75e39de7a7973701c1e4c9b4e59c1aea07cb7a7e92d"} Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.624046 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqvm8" event={"ID":"9a2865e3-5706-4a03-8529-571895dde1ea","Type":"ContainerDied","Data":"db40660ad04f730b7f71ffebbf5e2fedc8b43c8930dbb70a3627449133e1743c"} Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.624067 4972 scope.go:117] "RemoveContainer" containerID="ae74f40aef5a28372e3ea75e39de7a7973701c1e4c9b4e59c1aea07cb7a7e92d" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.625418 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lqggx_6e0ba187-0ec6-40e7-bd83-771510a29a5b/registry-server/0.log" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.626206 4972 generic.go:334] "Generic (PLEG): container finished" podID="6e0ba187-0ec6-40e7-bd83-771510a29a5b" containerID="3ba9cab0e0d49d8d58436c60ec9780352df540eabf1e86071b0f68565277a2f4" exitCode=1 Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.626255 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqggx" event={"ID":"6e0ba187-0ec6-40e7-bd83-771510a29a5b","Type":"ContainerDied","Data":"3ba9cab0e0d49d8d58436c60ec9780352df540eabf1e86071b0f68565277a2f4"} Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.626281 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqggx" event={"ID":"6e0ba187-0ec6-40e7-bd83-771510a29a5b","Type":"ContainerDied","Data":"33d88298d53f389d72ba4a0713680da8157d5565c8e9e3452cca956f3d4fbf3b"} Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.626341 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lqggx" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.624222 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-58kms" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.628495 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-58kms_882787b1-4df4-446b-972f-8a07c4eb5782/registry-server/0.log" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.628496 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hh4hc_ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827/registry-server/0.log" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.629188 4972 generic.go:334] "Generic (PLEG): container finished" podID="882787b1-4df4-446b-972f-8a07c4eb5782" containerID="f3dfeb82b874afa5fcfd17b14dace21c9100d50a33d489a28705a76813e5c4a9" exitCode=1 Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.629266 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58kms" event={"ID":"882787b1-4df4-446b-972f-8a07c4eb5782","Type":"ContainerDied","Data":"f3dfeb82b874afa5fcfd17b14dace21c9100d50a33d489a28705a76813e5c4a9"} Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.629297 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58kms" event={"ID":"882787b1-4df4-446b-972f-8a07c4eb5782","Type":"ContainerDied","Data":"55e180bff46e973ca6c027af5df8d3e9866e9d67c7aa098a0a74452d84545816"} Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.629446 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hh4hc" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.631373 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hh4hc_ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827/registry-server/0.log" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.632071 4972 generic.go:334] "Generic (PLEG): container finished" podID="ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827" containerID="2d24e87558f213839190230bf4712239f7892f4f751dcf9e23f602f2f8801694" exitCode=1 Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.632143 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hh4hc" event={"ID":"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827","Type":"ContainerDied","Data":"2d24e87558f213839190230bf4712239f7892f4f751dcf9e23f602f2f8801694"} Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.632176 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hh4hc" event={"ID":"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827","Type":"ContainerDied","Data":"ffd7f437b1743e69e0479c5822368148add76b86ba0ff8451ceed2d13cf696c3"} Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.634056 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rc758_1b43815a-969e-432e-ac57-843bee51860c/registry-server/0.log" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.634970 4972 generic.go:334] "Generic (PLEG): container finished" podID="1b43815a-969e-432e-ac57-843bee51860c" containerID="727654dca27e94324c5e2a1baa12f4173176df23a68e0e51726eed8b2474d5a3" exitCode=1 Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.639755 4972 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rc758" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.640135 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rc758" event={"ID":"1b43815a-969e-432e-ac57-843bee51860c","Type":"ContainerDied","Data":"727654dca27e94324c5e2a1baa12f4173176df23a68e0e51726eed8b2474d5a3"} Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.640160 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rc758" event={"ID":"1b43815a-969e-432e-ac57-843bee51860c","Type":"ContainerDied","Data":"4f1df5923109ad16885a7783181fa83da6c4805b05ffd21b2e5c639d3e85d98e"} Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.645640 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e0ba187-0ec6-40e7-bd83-771510a29a5b-utilities\") pod \"6e0ba187-0ec6-40e7-bd83-771510a29a5b\" (UID: \"6e0ba187-0ec6-40e7-bd83-771510a29a5b\") " Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.645703 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9s4p\" (UniqueName: \"kubernetes.io/projected/6e0ba187-0ec6-40e7-bd83-771510a29a5b-kube-api-access-z9s4p\") pod \"6e0ba187-0ec6-40e7-bd83-771510a29a5b\" (UID: \"6e0ba187-0ec6-40e7-bd83-771510a29a5b\") " Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.645746 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42ww7\" (UniqueName: \"kubernetes.io/projected/882787b1-4df4-446b-972f-8a07c4eb5782-kube-api-access-42ww7\") pod \"882787b1-4df4-446b-972f-8a07c4eb5782\" (UID: \"882787b1-4df4-446b-972f-8a07c4eb5782\") " Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.645797 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/882787b1-4df4-446b-972f-8a07c4eb5782-catalog-content\") pod \"882787b1-4df4-446b-972f-8a07c4eb5782\" (UID: \"882787b1-4df4-446b-972f-8a07c4eb5782\") " Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.645855 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b43815a-969e-432e-ac57-843bee51860c-utilities\") pod \"1b43815a-969e-432e-ac57-843bee51860c\" (UID: \"1b43815a-969e-432e-ac57-843bee51860c\") " Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.645918 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvczx\" (UniqueName: \"kubernetes.io/projected/ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827-kube-api-access-zvczx\") pod \"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827\" (UID: \"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827\") " Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.645983 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e0ba187-0ec6-40e7-bd83-771510a29a5b-catalog-content\") pod \"6e0ba187-0ec6-40e7-bd83-771510a29a5b\" (UID: \"6e0ba187-0ec6-40e7-bd83-771510a29a5b\") " Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.646034 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6136a605-ff46-4462-808b-cc8d2c28faea-utilities\") pod \"6136a605-ff46-4462-808b-cc8d2c28faea\" (UID: 
\"6136a605-ff46-4462-808b-cc8d2c28faea\") " Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.646067 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a2865e3-5706-4a03-8529-571895dde1ea-catalog-content\") pod \"9a2865e3-5706-4a03-8529-571895dde1ea\" (UID: \"9a2865e3-5706-4a03-8529-571895dde1ea\") " Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.646120 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827-catalog-content\") pod \"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827\" (UID: \"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827\") " Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.646182 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psklv\" (UniqueName: \"kubernetes.io/projected/1b43815a-969e-432e-ac57-843bee51860c-kube-api-access-psklv\") pod \"1b43815a-969e-432e-ac57-843bee51860c\" (UID: \"1b43815a-969e-432e-ac57-843bee51860c\") " Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.646227 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827-utilities\") pod \"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827\" (UID: \"ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827\") " Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.646260 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s7ml\" (UniqueName: \"kubernetes.io/projected/9a2865e3-5706-4a03-8529-571895dde1ea-kube-api-access-7s7ml\") pod \"9a2865e3-5706-4a03-8529-571895dde1ea\" (UID: \"9a2865e3-5706-4a03-8529-571895dde1ea\") " Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.646293 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a2865e3-5706-4a03-8529-571895dde1ea-utilities\") pod \"9a2865e3-5706-4a03-8529-571895dde1ea\" (UID: \"9a2865e3-5706-4a03-8529-571895dde1ea\") " Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.646358 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr6sk\" (UniqueName: \"kubernetes.io/projected/6136a605-ff46-4462-808b-cc8d2c28faea-kube-api-access-kr6sk\") pod \"6136a605-ff46-4462-808b-cc8d2c28faea\" (UID: \"6136a605-ff46-4462-808b-cc8d2c28faea\") " Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.646390 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/882787b1-4df4-446b-972f-8a07c4eb5782-utilities\") pod \"882787b1-4df4-446b-972f-8a07c4eb5782\" (UID: \"882787b1-4df4-446b-972f-8a07c4eb5782\") " Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.646423 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6136a605-ff46-4462-808b-cc8d2c28faea-catalog-content\") pod \"6136a605-ff46-4462-808b-cc8d2c28faea\" (UID: \"6136a605-ff46-4462-808b-cc8d2c28faea\") " Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.646460 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b43815a-969e-432e-ac57-843bee51860c-catalog-content\") pod \"1b43815a-969e-432e-ac57-843bee51860c\" 
(UID: \"1b43815a-969e-432e-ac57-843bee51860c\") " Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.646823 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13ef553c-f6bd-4af2-9c0e-643cd14f9290-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.646882 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13ef553c-f6bd-4af2-9c0e-643cd14f9290-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.646913 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvfxg\" (UniqueName: \"kubernetes.io/projected/13ef553c-f6bd-4af2-9c0e-643cd14f9290-kube-api-access-gvfxg\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.647022 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6136a605-ff46-4462-808b-cc8d2c28faea-utilities" (OuterVolumeSpecName: "utilities") pod "6136a605-ff46-4462-808b-cc8d2c28faea" (UID: "6136a605-ff46-4462-808b-cc8d2c28faea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.647732 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b43815a-969e-432e-ac57-843bee51860c-utilities" (OuterVolumeSpecName: "utilities") pod "1b43815a-969e-432e-ac57-843bee51860c" (UID: "1b43815a-969e-432e-ac57-843bee51860c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.648486 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e0ba187-0ec6-40e7-bd83-771510a29a5b-utilities" (OuterVolumeSpecName: "utilities") pod "6e0ba187-0ec6-40e7-bd83-771510a29a5b" (UID: "6e0ba187-0ec6-40e7-bd83-771510a29a5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.648808 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pnsjx_6136a605-ff46-4462-808b-cc8d2c28faea/registry-server/0.log" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.650378 4972 scope.go:117] "RemoveContainer" containerID="149f468f66ca305b117eb4dc829ecbfd08e9841d5f6a3297ea586cc71b9fb585" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.651048 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827-kube-api-access-zvczx" (OuterVolumeSpecName: "kube-api-access-zvczx") pod "ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827" (UID: "ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827"). InnerVolumeSpecName "kube-api-access-zvczx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.651236 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/882787b1-4df4-446b-972f-8a07c4eb5782-kube-api-access-42ww7" (OuterVolumeSpecName: "kube-api-access-42ww7") pod "882787b1-4df4-446b-972f-8a07c4eb5782" (UID: "882787b1-4df4-446b-972f-8a07c4eb5782"). InnerVolumeSpecName "kube-api-access-42ww7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.652374 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/882787b1-4df4-446b-972f-8a07c4eb5782-utilities" (OuterVolumeSpecName: "utilities") pod "882787b1-4df4-446b-972f-8a07c4eb5782" (UID: "882787b1-4df4-446b-972f-8a07c4eb5782"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.653844 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a2865e3-5706-4a03-8529-571895dde1ea-utilities" (OuterVolumeSpecName: "utilities") pod "9a2865e3-5706-4a03-8529-571895dde1ea" (UID: "9a2865e3-5706-4a03-8529-571895dde1ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.653976 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6136a605-ff46-4462-808b-cc8d2c28faea-kube-api-access-kr6sk" (OuterVolumeSpecName: "kube-api-access-kr6sk") pod "6136a605-ff46-4462-808b-cc8d2c28faea" (UID: "6136a605-ff46-4462-808b-cc8d2c28faea"). InnerVolumeSpecName "kube-api-access-kr6sk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.658209 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827-utilities" (OuterVolumeSpecName: "utilities") pod "ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827" (UID: "ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.658949 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a2865e3-5706-4a03-8529-571895dde1ea-kube-api-access-7s7ml" (OuterVolumeSpecName: "kube-api-access-7s7ml") pod "9a2865e3-5706-4a03-8529-571895dde1ea" (UID: "9a2865e3-5706-4a03-8529-571895dde1ea"). InnerVolumeSpecName "kube-api-access-7s7ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.658994 4972 generic.go:334] "Generic (PLEG): container finished" podID="6136a605-ff46-4462-808b-cc8d2c28faea" containerID="61f505a38b20b103d6bd9886add10f601fd1849473fae15927043c048c5562fd" exitCode=1 Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.659144 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pnsjx" event={"ID":"6136a605-ff46-4462-808b-cc8d2c28faea","Type":"ContainerDied","Data":"61f505a38b20b103d6bd9886add10f601fd1849473fae15927043c048c5562fd"} Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.659205 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pnsjx" event={"ID":"6136a605-ff46-4462-808b-cc8d2c28faea","Type":"ContainerDied","Data":"bfe80c59b5e6d7be3d314228eee18e046c1dae87b08de0707b6fd74753b2dbdc"} Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.659308 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pnsjx" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.666763 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b4t9l_13ef553c-f6bd-4af2-9c0e-643cd14f9290/registry-server/0.log" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.669591 4972 generic.go:334] "Generic (PLEG): container finished" podID="13ef553c-f6bd-4af2-9c0e-643cd14f9290" containerID="336b35d75264864b407f4dfed054df53c10d5d6ea44d677a933a934e958a3fc4" exitCode=1 Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.670080 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b4t9l" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.670355 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4t9l" event={"ID":"13ef553c-f6bd-4af2-9c0e-643cd14f9290","Type":"ContainerDied","Data":"336b35d75264864b407f4dfed054df53c10d5d6ea44d677a933a934e958a3fc4"} Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.670413 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4t9l" event={"ID":"13ef553c-f6bd-4af2-9c0e-643cd14f9290","Type":"ContainerDied","Data":"312e2f5f576f6348b8d342fbd9cd617a12125dd520678f5db0be33b2c5bf8bb4"} Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.671042 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b43815a-969e-432e-ac57-843bee51860c-kube-api-access-psklv" (OuterVolumeSpecName: "kube-api-access-psklv") pod "1b43815a-969e-432e-ac57-843bee51860c" (UID: "1b43815a-969e-432e-ac57-843bee51860c"). InnerVolumeSpecName "kube-api-access-psklv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.671653 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e0ba187-0ec6-40e7-bd83-771510a29a5b-kube-api-access-z9s4p" (OuterVolumeSpecName: "kube-api-access-z9s4p") pod "6e0ba187-0ec6-40e7-bd83-771510a29a5b" (UID: "6e0ba187-0ec6-40e7-bd83-771510a29a5b"). InnerVolumeSpecName "kube-api-access-z9s4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.674278 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827" (UID: "ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.676414 4972 scope.go:117] "RemoveContainer" containerID="c65c64ce17dad3c7a3bc611725befe031f878b67bff913288074d28cb2ca45df" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.707635 4972 scope.go:117] "RemoveContainer" containerID="ae74f40aef5a28372e3ea75e39de7a7973701c1e4c9b4e59c1aea07cb7a7e92d" Nov 21 09:47:16 crc kubenswrapper[4972]: E1121 09:47:16.710357 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae74f40aef5a28372e3ea75e39de7a7973701c1e4c9b4e59c1aea07cb7a7e92d\": container with ID starting with ae74f40aef5a28372e3ea75e39de7a7973701c1e4c9b4e59c1aea07cb7a7e92d not found: ID does not exist" containerID="ae74f40aef5a28372e3ea75e39de7a7973701c1e4c9b4e59c1aea07cb7a7e92d" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.710596 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae74f40aef5a28372e3ea75e39de7a7973701c1e4c9b4e59c1aea07cb7a7e92d"} err="failed to get container status \"ae74f40aef5a28372e3ea75e39de7a7973701c1e4c9b4e59c1aea07cb7a7e92d\": rpc error: code = NotFound desc = could not find container \"ae74f40aef5a28372e3ea75e39de7a7973701c1e4c9b4e59c1aea07cb7a7e92d\": container with ID starting with ae74f40aef5a28372e3ea75e39de7a7973701c1e4c9b4e59c1aea07cb7a7e92d not found: ID does not exist" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.710872 4972 scope.go:117] "RemoveContainer" containerID="149f468f66ca305b117eb4dc829ecbfd08e9841d5f6a3297ea586cc71b9fb585" Nov 21 09:47:16 crc kubenswrapper[4972]: E1121 09:47:16.711435 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"149f468f66ca305b117eb4dc829ecbfd08e9841d5f6a3297ea586cc71b9fb585\": container with ID starting with 149f468f66ca305b117eb4dc829ecbfd08e9841d5f6a3297ea586cc71b9fb585 not found: ID does not exist" containerID="149f468f66ca305b117eb4dc829ecbfd08e9841d5f6a3297ea586cc71b9fb585" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.712307 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"149f468f66ca305b117eb4dc829ecbfd08e9841d5f6a3297ea586cc71b9fb585"} err="failed to get container status \"149f468f66ca305b117eb4dc829ecbfd08e9841d5f6a3297ea586cc71b9fb585\": rpc error: code = NotFound desc = could not find container \"149f468f66ca305b117eb4dc829ecbfd08e9841d5f6a3297ea586cc71b9fb585\": container with ID starting with 149f468f66ca305b117eb4dc829ecbfd08e9841d5f6a3297ea586cc71b9fb585 not found: ID does not exist" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.712325 4972 scope.go:117] "RemoveContainer" containerID="c65c64ce17dad3c7a3bc611725befe031f878b67bff913288074d28cb2ca45df" Nov 21 09:47:16 crc kubenswrapper[4972]: E1121 09:47:16.714096 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c65c64ce17dad3c7a3bc611725befe031f878b67bff913288074d28cb2ca45df\": container with ID starting with c65c64ce17dad3c7a3bc611725befe031f878b67bff913288074d28cb2ca45df not found: ID does not exist" containerID="c65c64ce17dad3c7a3bc611725befe031f878b67bff913288074d28cb2ca45df" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.714168 4972 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c65c64ce17dad3c7a3bc611725befe031f878b67bff913288074d28cb2ca45df"} err="failed to get container status \"c65c64ce17dad3c7a3bc611725befe031f878b67bff913288074d28cb2ca45df\": rpc error: code = NotFound desc = could not find container \"c65c64ce17dad3c7a3bc611725befe031f878b67bff913288074d28cb2ca45df\": container with ID starting with c65c64ce17dad3c7a3bc611725befe031f878b67bff913288074d28cb2ca45df not found: ID does not exist" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.714205 4972 scope.go:117] "RemoveContainer" containerID="3ba9cab0e0d49d8d58436c60ec9780352df540eabf1e86071b0f68565277a2f4" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.728503 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b4t9l"] Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.731452 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b4t9l"] Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.732475 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e0ba187-0ec6-40e7-bd83-771510a29a5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e0ba187-0ec6-40e7-bd83-771510a29a5b" (UID: "6e0ba187-0ec6-40e7-bd83-771510a29a5b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.734224 4972 scope.go:117] "RemoveContainer" containerID="446681fd97cb1d29fadfa51b3195e8d5d9fd353ada4a8d24982aaa75f673adc2" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.751683 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.751704 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b43815a-969e-432e-ac57-843bee51860c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b43815a-969e-432e-ac57-843bee51860c" (UID: "1b43815a-969e-432e-ac57-843bee51860c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.751719 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s7ml\" (UniqueName: \"kubernetes.io/projected/9a2865e3-5706-4a03-8529-571895dde1ea-kube-api-access-7s7ml\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.751730 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a2865e3-5706-4a03-8529-571895dde1ea-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.751738 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr6sk\" (UniqueName: \"kubernetes.io/projected/6136a605-ff46-4462-808b-cc8d2c28faea-kube-api-access-kr6sk\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.751747 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/882787b1-4df4-446b-972f-8a07c4eb5782-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.751755 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e0ba187-0ec6-40e7-bd83-771510a29a5b-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.751763 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9s4p\" (UniqueName: \"kubernetes.io/projected/6e0ba187-0ec6-40e7-bd83-771510a29a5b-kube-api-access-z9s4p\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.751771 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42ww7\" (UniqueName: \"kubernetes.io/projected/882787b1-4df4-446b-972f-8a07c4eb5782-kube-api-access-42ww7\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.751781 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b43815a-969e-432e-ac57-843bee51860c-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.751791 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvczx\" (UniqueName: \"kubernetes.io/projected/ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827-kube-api-access-zvczx\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.751801 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e0ba187-0ec6-40e7-bd83-771510a29a5b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.751810 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6136a605-ff46-4462-808b-cc8d2c28faea-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.751817 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.751839 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psklv\" (UniqueName: \"kubernetes.io/projected/1b43815a-969e-432e-ac57-843bee51860c-kube-api-access-psklv\") on node 
\"crc\" DevicePath \"\"" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.752246 4972 scope.go:117] "RemoveContainer" containerID="156961e40bf26f82a583f0e2dcaee19f8282962279c498b17d1ce0543b0c5ae2" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.755573 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6136a605-ff46-4462-808b-cc8d2c28faea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6136a605-ff46-4462-808b-cc8d2c28faea" (UID: "6136a605-ff46-4462-808b-cc8d2c28faea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.765467 4972 scope.go:117] "RemoveContainer" containerID="3ba9cab0e0d49d8d58436c60ec9780352df540eabf1e86071b0f68565277a2f4" Nov 21 09:47:16 crc kubenswrapper[4972]: E1121 09:47:16.766200 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ba9cab0e0d49d8d58436c60ec9780352df540eabf1e86071b0f68565277a2f4\": container with ID starting with 3ba9cab0e0d49d8d58436c60ec9780352df540eabf1e86071b0f68565277a2f4 not found: ID does not exist" containerID="3ba9cab0e0d49d8d58436c60ec9780352df540eabf1e86071b0f68565277a2f4" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.766250 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ba9cab0e0d49d8d58436c60ec9780352df540eabf1e86071b0f68565277a2f4"} err="failed to get container status \"3ba9cab0e0d49d8d58436c60ec9780352df540eabf1e86071b0f68565277a2f4\": rpc error: code = NotFound desc = could not find container \"3ba9cab0e0d49d8d58436c60ec9780352df540eabf1e86071b0f68565277a2f4\": container with ID starting with 3ba9cab0e0d49d8d58436c60ec9780352df540eabf1e86071b0f68565277a2f4 not found: ID does not exist" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.766274 4972 scope.go:117] "RemoveContainer" containerID="446681fd97cb1d29fadfa51b3195e8d5d9fd353ada4a8d24982aaa75f673adc2" Nov 21 09:47:16 crc kubenswrapper[4972]: E1121 09:47:16.766669 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"446681fd97cb1d29fadfa51b3195e8d5d9fd353ada4a8d24982aaa75f673adc2\": container with ID starting with 446681fd97cb1d29fadfa51b3195e8d5d9fd353ada4a8d24982aaa75f673adc2 not found: ID does not exist" containerID="446681fd97cb1d29fadfa51b3195e8d5d9fd353ada4a8d24982aaa75f673adc2" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.766697 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"446681fd97cb1d29fadfa51b3195e8d5d9fd353ada4a8d24982aaa75f673adc2"} err="failed to get container status \"446681fd97cb1d29fadfa51b3195e8d5d9fd353ada4a8d24982aaa75f673adc2\": rpc error: code = NotFound desc = could not find container \"446681fd97cb1d29fadfa51b3195e8d5d9fd353ada4a8d24982aaa75f673adc2\": container with ID starting with 446681fd97cb1d29fadfa51b3195e8d5d9fd353ada4a8d24982aaa75f673adc2 not found: ID does not exist" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.766718 4972 scope.go:117] "RemoveContainer" containerID="156961e40bf26f82a583f0e2dcaee19f8282962279c498b17d1ce0543b0c5ae2" Nov 21 09:47:16 crc kubenswrapper[4972]: E1121 09:47:16.767015 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"156961e40bf26f82a583f0e2dcaee19f8282962279c498b17d1ce0543b0c5ae2\": container with ID starting with 156961e40bf26f82a583f0e2dcaee19f8282962279c498b17d1ce0543b0c5ae2 not found: ID does not exist" containerID="156961e40bf26f82a583f0e2dcaee19f8282962279c498b17d1ce0543b0c5ae2" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.767048 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"156961e40bf26f82a583f0e2dcaee19f8282962279c498b17d1ce0543b0c5ae2"} err="failed to get container status \"156961e40bf26f82a583f0e2dcaee19f8282962279c498b17d1ce0543b0c5ae2\": rpc error: code = NotFound desc = could not find container \"156961e40bf26f82a583f0e2dcaee19f8282962279c498b17d1ce0543b0c5ae2\": container with ID starting with 156961e40bf26f82a583f0e2dcaee19f8282962279c498b17d1ce0543b0c5ae2 not found: ID does not exist" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.767068 4972 scope.go:117] "RemoveContainer" containerID="f3dfeb82b874afa5fcfd17b14dace21c9100d50a33d489a28705a76813e5c4a9" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.773902 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/882787b1-4df4-446b-972f-8a07c4eb5782-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "882787b1-4df4-446b-972f-8a07c4eb5782" (UID: "882787b1-4df4-446b-972f-8a07c4eb5782"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.779332 4972 scope.go:117] "RemoveContainer" containerID="4f9f79ece83905f2497dd6f333a2e3e52e9b1474af918d6db528c5f9458fd61c" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.792080 4972 scope.go:117] "RemoveContainer" containerID="f10a3889b250afdd80c638147321ba259cd479521fa0c8021b35eebd1311edc7" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.802246 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a2865e3-5706-4a03-8529-571895dde1ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a2865e3-5706-4a03-8529-571895dde1ea" (UID: "9a2865e3-5706-4a03-8529-571895dde1ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.806696 4972 scope.go:117] "RemoveContainer" containerID="f3dfeb82b874afa5fcfd17b14dace21c9100d50a33d489a28705a76813e5c4a9" Nov 21 09:47:16 crc kubenswrapper[4972]: E1121 09:47:16.807271 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3dfeb82b874afa5fcfd17b14dace21c9100d50a33d489a28705a76813e5c4a9\": container with ID starting with f3dfeb82b874afa5fcfd17b14dace21c9100d50a33d489a28705a76813e5c4a9 not found: ID does not exist" containerID="f3dfeb82b874afa5fcfd17b14dace21c9100d50a33d489a28705a76813e5c4a9" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.807331 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3dfeb82b874afa5fcfd17b14dace21c9100d50a33d489a28705a76813e5c4a9"} err="failed to get container status \"f3dfeb82b874afa5fcfd17b14dace21c9100d50a33d489a28705a76813e5c4a9\": rpc error: code = NotFound desc = could not find container \"f3dfeb82b874afa5fcfd17b14dace21c9100d50a33d489a28705a76813e5c4a9\": container with ID starting with f3dfeb82b874afa5fcfd17b14dace21c9100d50a33d489a28705a76813e5c4a9 not found: ID does not exist" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.807374 4972 scope.go:117] "RemoveContainer" containerID="4f9f79ece83905f2497dd6f333a2e3e52e9b1474af918d6db528c5f9458fd61c" Nov 21 09:47:16 crc kubenswrapper[4972]: E1121 09:47:16.807784 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f9f79ece83905f2497dd6f333a2e3e52e9b1474af918d6db528c5f9458fd61c\": container with ID starting with 4f9f79ece83905f2497dd6f333a2e3e52e9b1474af918d6db528c5f9458fd61c not found: ID does not exist" containerID="4f9f79ece83905f2497dd6f333a2e3e52e9b1474af918d6db528c5f9458fd61c" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.807856 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f9f79ece83905f2497dd6f333a2e3e52e9b1474af918d6db528c5f9458fd61c"} err="failed to get container status \"4f9f79ece83905f2497dd6f333a2e3e52e9b1474af918d6db528c5f9458fd61c\": rpc error: code = NotFound desc = could not find container \"4f9f79ece83905f2497dd6f333a2e3e52e9b1474af918d6db528c5f9458fd61c\": container with ID starting with 4f9f79ece83905f2497dd6f333a2e3e52e9b1474af918d6db528c5f9458fd61c not found: ID does not exist" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.807890 4972 scope.go:117] "RemoveContainer" containerID="f10a3889b250afdd80c638147321ba259cd479521fa0c8021b35eebd1311edc7" Nov 21 09:47:16 crc kubenswrapper[4972]: E1121 09:47:16.808186 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f10a3889b250afdd80c638147321ba259cd479521fa0c8021b35eebd1311edc7\": container with ID starting with f10a3889b250afdd80c638147321ba259cd479521fa0c8021b35eebd1311edc7 not found: ID does not exist" containerID="f10a3889b250afdd80c638147321ba259cd479521fa0c8021b35eebd1311edc7" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.808228 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f10a3889b250afdd80c638147321ba259cd479521fa0c8021b35eebd1311edc7"} err="failed to get container status \"f10a3889b250afdd80c638147321ba259cd479521fa0c8021b35eebd1311edc7\": rpc error: code = NotFound desc = could not 
find container \"f10a3889b250afdd80c638147321ba259cd479521fa0c8021b35eebd1311edc7\": container with ID starting with f10a3889b250afdd80c638147321ba259cd479521fa0c8021b35eebd1311edc7 not found: ID does not exist" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.808247 4972 scope.go:117] "RemoveContainer" containerID="2d24e87558f213839190230bf4712239f7892f4f751dcf9e23f602f2f8801694" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.818524 4972 scope.go:117] "RemoveContainer" containerID="76085ae4fc2113abe48f13834dd3fd4070d5afc04394a60ae0d424426f729c36" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.828220 4972 scope.go:117] "RemoveContainer" containerID="f1d840d59ae61eabc8be3b62b0f1fe3ff491c3d0fc8fb30da87081a30c8748d9" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.838356 4972 scope.go:117] "RemoveContainer" containerID="2d24e87558f213839190230bf4712239f7892f4f751dcf9e23f602f2f8801694" Nov 21 09:47:16 crc kubenswrapper[4972]: E1121 09:47:16.838639 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d24e87558f213839190230bf4712239f7892f4f751dcf9e23f602f2f8801694\": container with ID starting with 2d24e87558f213839190230bf4712239f7892f4f751dcf9e23f602f2f8801694 not found: ID does not exist" containerID="2d24e87558f213839190230bf4712239f7892f4f751dcf9e23f602f2f8801694" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.838689 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d24e87558f213839190230bf4712239f7892f4f751dcf9e23f602f2f8801694"} err="failed to get container status \"2d24e87558f213839190230bf4712239f7892f4f751dcf9e23f602f2f8801694\": rpc error: code = NotFound desc = could not find container \"2d24e87558f213839190230bf4712239f7892f4f751dcf9e23f602f2f8801694\": container with ID starting with 2d24e87558f213839190230bf4712239f7892f4f751dcf9e23f602f2f8801694 not found: ID does not exist" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.838712 4972 scope.go:117] "RemoveContainer" containerID="76085ae4fc2113abe48f13834dd3fd4070d5afc04394a60ae0d424426f729c36" Nov 21 09:47:16 crc kubenswrapper[4972]: E1121 09:47:16.839043 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76085ae4fc2113abe48f13834dd3fd4070d5afc04394a60ae0d424426f729c36\": container with ID starting with 76085ae4fc2113abe48f13834dd3fd4070d5afc04394a60ae0d424426f729c36 not found: ID does not exist" containerID="76085ae4fc2113abe48f13834dd3fd4070d5afc04394a60ae0d424426f729c36" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.839093 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76085ae4fc2113abe48f13834dd3fd4070d5afc04394a60ae0d424426f729c36"} err="failed to get container status \"76085ae4fc2113abe48f13834dd3fd4070d5afc04394a60ae0d424426f729c36\": rpc error: code = NotFound desc = could not find container \"76085ae4fc2113abe48f13834dd3fd4070d5afc04394a60ae0d424426f729c36\": container with ID starting with 76085ae4fc2113abe48f13834dd3fd4070d5afc04394a60ae0d424426f729c36 not found: ID does not exist" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.839124 4972 scope.go:117] "RemoveContainer" containerID="f1d840d59ae61eabc8be3b62b0f1fe3ff491c3d0fc8fb30da87081a30c8748d9" Nov 21 09:47:16 crc kubenswrapper[4972]: E1121 09:47:16.839454 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"f1d840d59ae61eabc8be3b62b0f1fe3ff491c3d0fc8fb30da87081a30c8748d9\": container with ID starting with f1d840d59ae61eabc8be3b62b0f1fe3ff491c3d0fc8fb30da87081a30c8748d9 not found: ID does not exist" containerID="f1d840d59ae61eabc8be3b62b0f1fe3ff491c3d0fc8fb30da87081a30c8748d9" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.839504 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d840d59ae61eabc8be3b62b0f1fe3ff491c3d0fc8fb30da87081a30c8748d9"} err="failed to get container status \"f1d840d59ae61eabc8be3b62b0f1fe3ff491c3d0fc8fb30da87081a30c8748d9\": rpc error: code = NotFound desc = could not find container \"f1d840d59ae61eabc8be3b62b0f1fe3ff491c3d0fc8fb30da87081a30c8748d9\": container with ID starting with f1d840d59ae61eabc8be3b62b0f1fe3ff491c3d0fc8fb30da87081a30c8748d9 not found: ID does not exist" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.839539 4972 scope.go:117] "RemoveContainer" containerID="727654dca27e94324c5e2a1baa12f4173176df23a68e0e51726eed8b2474d5a3" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.849437 4972 scope.go:117] "RemoveContainer" containerID="fc0bfeb89144b4d6afeeccff9278f47f3531b230319a6c8361078dd65e24f163" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.853226 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6136a605-ff46-4462-808b-cc8d2c28faea-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.853246 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b43815a-969e-432e-ac57-843bee51860c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.853255 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/882787b1-4df4-446b-972f-8a07c4eb5782-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.853264 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a2865e3-5706-4a03-8529-571895dde1ea-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.862964 4972 scope.go:117] "RemoveContainer" containerID="b4d0280b35c91aebc5337859ed711f8af1c27f1505945c5773fad8e46ccf98fb" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.873812 4972 scope.go:117] "RemoveContainer" containerID="727654dca27e94324c5e2a1baa12f4173176df23a68e0e51726eed8b2474d5a3" Nov 21 09:47:16 crc kubenswrapper[4972]: E1121 09:47:16.874126 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"727654dca27e94324c5e2a1baa12f4173176df23a68e0e51726eed8b2474d5a3\": container with ID starting with 727654dca27e94324c5e2a1baa12f4173176df23a68e0e51726eed8b2474d5a3 not found: ID does not exist" containerID="727654dca27e94324c5e2a1baa12f4173176df23a68e0e51726eed8b2474d5a3" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.874157 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"727654dca27e94324c5e2a1baa12f4173176df23a68e0e51726eed8b2474d5a3"} err="failed to get container status \"727654dca27e94324c5e2a1baa12f4173176df23a68e0e51726eed8b2474d5a3\": rpc error: code = NotFound desc = could not 
find container \"727654dca27e94324c5e2a1baa12f4173176df23a68e0e51726eed8b2474d5a3\": container with ID starting with 727654dca27e94324c5e2a1baa12f4173176df23a68e0e51726eed8b2474d5a3 not found: ID does not exist" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.874182 4972 scope.go:117] "RemoveContainer" containerID="fc0bfeb89144b4d6afeeccff9278f47f3531b230319a6c8361078dd65e24f163" Nov 21 09:47:16 crc kubenswrapper[4972]: E1121 09:47:16.874415 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc0bfeb89144b4d6afeeccff9278f47f3531b230319a6c8361078dd65e24f163\": container with ID starting with fc0bfeb89144b4d6afeeccff9278f47f3531b230319a6c8361078dd65e24f163 not found: ID does not exist" containerID="fc0bfeb89144b4d6afeeccff9278f47f3531b230319a6c8361078dd65e24f163" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.874432 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc0bfeb89144b4d6afeeccff9278f47f3531b230319a6c8361078dd65e24f163"} err="failed to get container status \"fc0bfeb89144b4d6afeeccff9278f47f3531b230319a6c8361078dd65e24f163\": rpc error: code = NotFound desc = could not find container \"fc0bfeb89144b4d6afeeccff9278f47f3531b230319a6c8361078dd65e24f163\": container with ID starting with fc0bfeb89144b4d6afeeccff9278f47f3531b230319a6c8361078dd65e24f163 not found: ID does not exist" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.874446 4972 scope.go:117] "RemoveContainer" containerID="b4d0280b35c91aebc5337859ed711f8af1c27f1505945c5773fad8e46ccf98fb" Nov 21 09:47:16 crc kubenswrapper[4972]: E1121 09:47:16.874627 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4d0280b35c91aebc5337859ed711f8af1c27f1505945c5773fad8e46ccf98fb\": container with ID starting with b4d0280b35c91aebc5337859ed711f8af1c27f1505945c5773fad8e46ccf98fb not found: ID does not exist" containerID="b4d0280b35c91aebc5337859ed711f8af1c27f1505945c5773fad8e46ccf98fb" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.874642 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4d0280b35c91aebc5337859ed711f8af1c27f1505945c5773fad8e46ccf98fb"} err="failed to get container status \"b4d0280b35c91aebc5337859ed711f8af1c27f1505945c5773fad8e46ccf98fb\": rpc error: code = NotFound desc = could not find container \"b4d0280b35c91aebc5337859ed711f8af1c27f1505945c5773fad8e46ccf98fb\": container with ID starting with b4d0280b35c91aebc5337859ed711f8af1c27f1505945c5773fad8e46ccf98fb not found: ID does not exist" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.874657 4972 scope.go:117] "RemoveContainer" containerID="61f505a38b20b103d6bd9886add10f601fd1849473fae15927043c048c5562fd" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.886727 4972 scope.go:117] "RemoveContainer" containerID="b25837c6f832307eae17e525b0876de7db2f37014a64397f6b2f211e33c846e8" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.899457 4972 scope.go:117] "RemoveContainer" containerID="ef8104784c6b32be85ee29952b39fb16f9ed8f2029c963adf7df2aa8818736bf" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.911052 4972 scope.go:117] "RemoveContainer" containerID="61f505a38b20b103d6bd9886add10f601fd1849473fae15927043c048c5562fd" Nov 21 09:47:16 crc kubenswrapper[4972]: E1121 09:47:16.911607 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"61f505a38b20b103d6bd9886add10f601fd1849473fae15927043c048c5562fd\": container with ID starting with 61f505a38b20b103d6bd9886add10f601fd1849473fae15927043c048c5562fd not found: ID does not exist" containerID="61f505a38b20b103d6bd9886add10f601fd1849473fae15927043c048c5562fd" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.911662 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f505a38b20b103d6bd9886add10f601fd1849473fae15927043c048c5562fd"} err="failed to get container status \"61f505a38b20b103d6bd9886add10f601fd1849473fae15927043c048c5562fd\": rpc error: code = NotFound desc = could not find container \"61f505a38b20b103d6bd9886add10f601fd1849473fae15927043c048c5562fd\": container with ID starting with 61f505a38b20b103d6bd9886add10f601fd1849473fae15927043c048c5562fd not found: ID does not exist" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.911705 4972 scope.go:117] "RemoveContainer" containerID="b25837c6f832307eae17e525b0876de7db2f37014a64397f6b2f211e33c846e8" Nov 21 09:47:16 crc kubenswrapper[4972]: E1121 09:47:16.912102 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b25837c6f832307eae17e525b0876de7db2f37014a64397f6b2f211e33c846e8\": container with ID starting with b25837c6f832307eae17e525b0876de7db2f37014a64397f6b2f211e33c846e8 not found: ID does not exist" containerID="b25837c6f832307eae17e525b0876de7db2f37014a64397f6b2f211e33c846e8" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.912141 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b25837c6f832307eae17e525b0876de7db2f37014a64397f6b2f211e33c846e8"} err="failed to get container status \"b25837c6f832307eae17e525b0876de7db2f37014a64397f6b2f211e33c846e8\": rpc error: code = NotFound desc = could not find container \"b25837c6f832307eae17e525b0876de7db2f37014a64397f6b2f211e33c846e8\": container with ID starting with b25837c6f832307eae17e525b0876de7db2f37014a64397f6b2f211e33c846e8 not found: ID does not exist" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.912164 4972 scope.go:117] "RemoveContainer" containerID="ef8104784c6b32be85ee29952b39fb16f9ed8f2029c963adf7df2aa8818736bf" Nov 21 09:47:16 crc kubenswrapper[4972]: E1121 09:47:16.912441 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef8104784c6b32be85ee29952b39fb16f9ed8f2029c963adf7df2aa8818736bf\": container with ID starting with ef8104784c6b32be85ee29952b39fb16f9ed8f2029c963adf7df2aa8818736bf not found: ID does not exist" containerID="ef8104784c6b32be85ee29952b39fb16f9ed8f2029c963adf7df2aa8818736bf" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.912472 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef8104784c6b32be85ee29952b39fb16f9ed8f2029c963adf7df2aa8818736bf"} err="failed to get container status \"ef8104784c6b32be85ee29952b39fb16f9ed8f2029c963adf7df2aa8818736bf\": rpc error: code = NotFound desc = could not find container \"ef8104784c6b32be85ee29952b39fb16f9ed8f2029c963adf7df2aa8818736bf\": container with ID starting with ef8104784c6b32be85ee29952b39fb16f9ed8f2029c963adf7df2aa8818736bf not found: ID does not exist" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.912498 4972 scope.go:117] "RemoveContainer" 
containerID="336b35d75264864b407f4dfed054df53c10d5d6ea44d677a933a934e958a3fc4" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.923763 4972 scope.go:117] "RemoveContainer" containerID="3395a5058580693508b56211e0518080e51302f98c0a98f51e073d0c60f46f53" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.935674 4972 scope.go:117] "RemoveContainer" containerID="d352d776ec765c02e42de25bbda5bb47ea680ea3a972d677d2599215a29d098f" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.951048 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lqggx"] Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.955402 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lqggx"] Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.961079 4972 scope.go:117] "RemoveContainer" containerID="336b35d75264864b407f4dfed054df53c10d5d6ea44d677a933a934e958a3fc4" Nov 21 09:47:16 crc kubenswrapper[4972]: E1121 09:47:16.966584 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"336b35d75264864b407f4dfed054df53c10d5d6ea44d677a933a934e958a3fc4\": container with ID starting with 336b35d75264864b407f4dfed054df53c10d5d6ea44d677a933a934e958a3fc4 not found: ID does not exist" containerID="336b35d75264864b407f4dfed054df53c10d5d6ea44d677a933a934e958a3fc4" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.966673 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"336b35d75264864b407f4dfed054df53c10d5d6ea44d677a933a934e958a3fc4"} err="failed to get container status \"336b35d75264864b407f4dfed054df53c10d5d6ea44d677a933a934e958a3fc4\": rpc error: code = NotFound desc = could not find container \"336b35d75264864b407f4dfed054df53c10d5d6ea44d677a933a934e958a3fc4\": container with ID starting with 336b35d75264864b407f4dfed054df53c10d5d6ea44d677a933a934e958a3fc4 not found: ID does not exist" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.966733 4972 scope.go:117] "RemoveContainer" containerID="3395a5058580693508b56211e0518080e51302f98c0a98f51e073d0c60f46f53" Nov 21 09:47:16 crc kubenswrapper[4972]: E1121 09:47:16.967314 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3395a5058580693508b56211e0518080e51302f98c0a98f51e073d0c60f46f53\": container with ID starting with 3395a5058580693508b56211e0518080e51302f98c0a98f51e073d0c60f46f53 not found: ID does not exist" containerID="3395a5058580693508b56211e0518080e51302f98c0a98f51e073d0c60f46f53" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.967349 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3395a5058580693508b56211e0518080e51302f98c0a98f51e073d0c60f46f53"} err="failed to get container status \"3395a5058580693508b56211e0518080e51302f98c0a98f51e073d0c60f46f53\": rpc error: code = NotFound desc = could not find container \"3395a5058580693508b56211e0518080e51302f98c0a98f51e073d0c60f46f53\": container with ID starting with 3395a5058580693508b56211e0518080e51302f98c0a98f51e073d0c60f46f53 not found: ID does not exist" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.967397 4972 scope.go:117] "RemoveContainer" containerID="d352d776ec765c02e42de25bbda5bb47ea680ea3a972d677d2599215a29d098f" Nov 21 09:47:16 crc kubenswrapper[4972]: E1121 09:47:16.967817 4972 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d352d776ec765c02e42de25bbda5bb47ea680ea3a972d677d2599215a29d098f\": container with ID starting with d352d776ec765c02e42de25bbda5bb47ea680ea3a972d677d2599215a29d098f not found: ID does not exist" containerID="d352d776ec765c02e42de25bbda5bb47ea680ea3a972d677d2599215a29d098f" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.967863 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d352d776ec765c02e42de25bbda5bb47ea680ea3a972d677d2599215a29d098f"} err="failed to get container status \"d352d776ec765c02e42de25bbda5bb47ea680ea3a972d677d2599215a29d098f\": rpc error: code = NotFound desc = could not find container \"d352d776ec765c02e42de25bbda5bb47ea680ea3a972d677d2599215a29d098f\": container with ID starting with d352d776ec765c02e42de25bbda5bb47ea680ea3a972d677d2599215a29d098f not found: ID does not exist" Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.974007 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rc758"] Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.976936 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rc758"] Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.988243 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pnsjx"] Nov 21 09:47:16 crc kubenswrapper[4972]: I1121 09:47:16.991216 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pnsjx"] Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.684216 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sqvm8" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.687683 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-58kms" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.688868 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hh4hc" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.739420 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-58kms"] Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.742575 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-58kms"] Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.750774 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sqvm8"] Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.757113 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sqvm8"] Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.773566 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13ef553c-f6bd-4af2-9c0e-643cd14f9290" path="/var/lib/kubelet/pods/13ef553c-f6bd-4af2-9c0e-643cd14f9290/volumes" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.774672 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b43815a-969e-432e-ac57-843bee51860c" path="/var/lib/kubelet/pods/1b43815a-969e-432e-ac57-843bee51860c/volumes" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.775587 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6136a605-ff46-4462-808b-cc8d2c28faea" path="/var/lib/kubelet/pods/6136a605-ff46-4462-808b-cc8d2c28faea/volumes" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.777361 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e0ba187-0ec6-40e7-bd83-771510a29a5b" path="/var/lib/kubelet/pods/6e0ba187-0ec6-40e7-bd83-771510a29a5b/volumes" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.778566 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="882787b1-4df4-446b-972f-8a07c4eb5782" path="/var/lib/kubelet/pods/882787b1-4df4-446b-972f-8a07c4eb5782/volumes" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.780457 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a2865e3-5706-4a03-8529-571895dde1ea" path="/var/lib/kubelet/pods/9a2865e3-5706-4a03-8529-571895dde1ea/volumes" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.781659 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hh4hc"] Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.781797 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hh4hc"] Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.819999 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vj4wg"] Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820311 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b43815a-969e-432e-ac57-843bee51860c" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820329 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b43815a-969e-432e-ac57-843bee51860c" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820344 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13ef553c-f6bd-4af2-9c0e-643cd14f9290" containerName="extract-content" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820352 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="13ef553c-f6bd-4af2-9c0e-643cd14f9290" 
containerName="extract-content" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820363 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a2865e3-5706-4a03-8529-571895dde1ea" containerName="extract-content" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820372 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a2865e3-5706-4a03-8529-571895dde1ea" containerName="extract-content" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820383 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="882787b1-4df4-446b-972f-8a07c4eb5782" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820389 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="882787b1-4df4-446b-972f-8a07c4eb5782" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820400 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="882787b1-4df4-446b-972f-8a07c4eb5782" containerName="extract-content" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820407 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="882787b1-4df4-446b-972f-8a07c4eb5782" containerName="extract-content" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820416 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e0ba187-0ec6-40e7-bd83-771510a29a5b" containerName="extract-utilities" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820424 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e0ba187-0ec6-40e7-bd83-771510a29a5b" containerName="extract-utilities" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820432 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6136a605-ff46-4462-808b-cc8d2c28faea" containerName="extract-utilities" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820439 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="6136a605-ff46-4462-808b-cc8d2c28faea" containerName="extract-utilities" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820446 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a2865e3-5706-4a03-8529-571895dde1ea" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820453 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a2865e3-5706-4a03-8529-571895dde1ea" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820463 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="882787b1-4df4-446b-972f-8a07c4eb5782" containerName="extract-utilities" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820496 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="882787b1-4df4-446b-972f-8a07c4eb5782" containerName="extract-utilities" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820505 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b866e2-c40e-4b45-acfc-965161cabf5c" containerName="extract-utilities" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820511 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b866e2-c40e-4b45-acfc-965161cabf5c" containerName="extract-utilities" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820522 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e0ba187-0ec6-40e7-bd83-771510a29a5b" containerName="extract-content" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820529 4972 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6e0ba187-0ec6-40e7-bd83-771510a29a5b" containerName="extract-content" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820539 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b43815a-969e-432e-ac57-843bee51860c" containerName="extract-utilities" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820546 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b43815a-969e-432e-ac57-843bee51860c" containerName="extract-utilities" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820554 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827" containerName="extract-content" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820560 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827" containerName="extract-content" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820568 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6136a605-ff46-4462-808b-cc8d2c28faea" containerName="extract-content" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820574 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="6136a605-ff46-4462-808b-cc8d2c28faea" containerName="extract-content" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820581 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4f03066-ed74-40ad-ac94-c9c2d83f648e" containerName="marketplace-operator" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820587 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4f03066-ed74-40ad-ac94-c9c2d83f648e" containerName="marketplace-operator" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820597 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13ef553c-f6bd-4af2-9c0e-643cd14f9290" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820604 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="13ef553c-f6bd-4af2-9c0e-643cd14f9290" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820613 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e0ba187-0ec6-40e7-bd83-771510a29a5b" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820619 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e0ba187-0ec6-40e7-bd83-771510a29a5b" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820629 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b43815a-969e-432e-ac57-843bee51860c" containerName="extract-content" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820636 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b43815a-969e-432e-ac57-843bee51860c" containerName="extract-content" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820642 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b866e2-c40e-4b45-acfc-965161cabf5c" containerName="extract-content" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820648 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b866e2-c40e-4b45-acfc-965161cabf5c" containerName="extract-content" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820656 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a2865e3-5706-4a03-8529-571895dde1ea" containerName="extract-utilities" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820662 4972 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9a2865e3-5706-4a03-8529-571895dde1ea" containerName="extract-utilities" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820670 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820676 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820682 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b866e2-c40e-4b45-acfc-965161cabf5c" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820688 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b866e2-c40e-4b45-acfc-965161cabf5c" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820697 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13ef553c-f6bd-4af2-9c0e-643cd14f9290" containerName="extract-utilities" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820703 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="13ef553c-f6bd-4af2-9c0e-643cd14f9290" containerName="extract-utilities" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820711 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6136a605-ff46-4462-808b-cc8d2c28faea" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820718 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="6136a605-ff46-4462-808b-cc8d2c28faea" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: E1121 09:47:17.820727 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827" containerName="extract-utilities" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820733 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827" containerName="extract-utilities" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820852 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820870 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4f03066-ed74-40ad-ac94-c9c2d83f648e" containerName="marketplace-operator" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820883 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a2865e3-5706-4a03-8529-571895dde1ea" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820892 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e0ba187-0ec6-40e7-bd83-771510a29a5b" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820901 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="6136a605-ff46-4462-808b-cc8d2c28faea" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820908 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b43815a-969e-432e-ac57-843bee51860c" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820917 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="882787b1-4df4-446b-972f-8a07c4eb5782" containerName="registry-server" Nov 21 
09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820924 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="49b866e2-c40e-4b45-acfc-965161cabf5c" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.820933 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="13ef553c-f6bd-4af2-9c0e-643cd14f9290" containerName="registry-server" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.821768 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vj4wg" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.825132 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.830201 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vj4wg"] Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.879913 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5w8r\" (UniqueName: \"kubernetes.io/projected/efbc5e5e-d261-4f3b-b90b-febd39de0327-kube-api-access-q5w8r\") pod \"community-operators-vj4wg\" (UID: \"efbc5e5e-d261-4f3b-b90b-febd39de0327\") " pod="openshift-marketplace/community-operators-vj4wg" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.879964 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efbc5e5e-d261-4f3b-b90b-febd39de0327-utilities\") pod \"community-operators-vj4wg\" (UID: \"efbc5e5e-d261-4f3b-b90b-febd39de0327\") " pod="openshift-marketplace/community-operators-vj4wg" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.880014 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efbc5e5e-d261-4f3b-b90b-febd39de0327-catalog-content\") pod \"community-operators-vj4wg\" (UID: \"efbc5e5e-d261-4f3b-b90b-febd39de0327\") " pod="openshift-marketplace/community-operators-vj4wg" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.980949 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efbc5e5e-d261-4f3b-b90b-febd39de0327-catalog-content\") pod \"community-operators-vj4wg\" (UID: \"efbc5e5e-d261-4f3b-b90b-febd39de0327\") " pod="openshift-marketplace/community-operators-vj4wg" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.981043 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5w8r\" (UniqueName: \"kubernetes.io/projected/efbc5e5e-d261-4f3b-b90b-febd39de0327-kube-api-access-q5w8r\") pod \"community-operators-vj4wg\" (UID: \"efbc5e5e-d261-4f3b-b90b-febd39de0327\") " pod="openshift-marketplace/community-operators-vj4wg" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.981067 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efbc5e5e-d261-4f3b-b90b-febd39de0327-utilities\") pod \"community-operators-vj4wg\" (UID: \"efbc5e5e-d261-4f3b-b90b-febd39de0327\") " pod="openshift-marketplace/community-operators-vj4wg" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.981584 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/efbc5e5e-d261-4f3b-b90b-febd39de0327-utilities\") pod \"community-operators-vj4wg\" (UID: \"efbc5e5e-d261-4f3b-b90b-febd39de0327\") " pod="openshift-marketplace/community-operators-vj4wg" Nov 21 09:47:17 crc kubenswrapper[4972]: I1121 09:47:17.982434 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efbc5e5e-d261-4f3b-b90b-febd39de0327-catalog-content\") pod \"community-operators-vj4wg\" (UID: \"efbc5e5e-d261-4f3b-b90b-febd39de0327\") " pod="openshift-marketplace/community-operators-vj4wg" Nov 21 09:47:18 crc kubenswrapper[4972]: I1121 09:47:18.003922 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5w8r\" (UniqueName: \"kubernetes.io/projected/efbc5e5e-d261-4f3b-b90b-febd39de0327-kube-api-access-q5w8r\") pod \"community-operators-vj4wg\" (UID: \"efbc5e5e-d261-4f3b-b90b-febd39de0327\") " pod="openshift-marketplace/community-operators-vj4wg" Nov 21 09:47:18 crc kubenswrapper[4972]: I1121 09:47:18.146693 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vj4wg" Nov 21 09:47:18 crc kubenswrapper[4972]: I1121 09:47:18.544040 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vj4wg"] Nov 21 09:47:18 crc kubenswrapper[4972]: W1121 09:47:18.552983 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefbc5e5e_d261_4f3b_b90b_febd39de0327.slice/crio-b27a28329a62f4c06e4c1f4e58f62a8971ae0460c3528a9a1bf5add021266771 WatchSource:0}: Error finding container b27a28329a62f4c06e4c1f4e58f62a8971ae0460c3528a9a1bf5add021266771: Status 404 returned error can't find the container with id b27a28329a62f4c06e4c1f4e58f62a8971ae0460c3528a9a1bf5add021266771 Nov 21 09:47:18 crc kubenswrapper[4972]: I1121 09:47:18.699706 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vj4wg" event={"ID":"efbc5e5e-d261-4f3b-b90b-febd39de0327","Type":"ContainerStarted","Data":"16f24b4aaafd96da6cc8c7d41e8bfe31f25673b06e6ae30d236d6341b6acb3ba"} Nov 21 09:47:18 crc kubenswrapper[4972]: I1121 09:47:18.699755 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vj4wg" event={"ID":"efbc5e5e-d261-4f3b-b90b-febd39de0327","Type":"ContainerStarted","Data":"b27a28329a62f4c06e4c1f4e58f62a8971ae0460c3528a9a1bf5add021266771"} Nov 21 09:47:19 crc kubenswrapper[4972]: I1121 09:47:19.706810 4972 generic.go:334] "Generic (PLEG): container finished" podID="efbc5e5e-d261-4f3b-b90b-febd39de0327" containerID="16f24b4aaafd96da6cc8c7d41e8bfe31f25673b06e6ae30d236d6341b6acb3ba" exitCode=0 Nov 21 09:47:19 crc kubenswrapper[4972]: I1121 09:47:19.706867 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vj4wg" event={"ID":"efbc5e5e-d261-4f3b-b90b-febd39de0327","Type":"ContainerDied","Data":"16f24b4aaafd96da6cc8c7d41e8bfe31f25673b06e6ae30d236d6341b6acb3ba"} Nov 21 09:47:19 crc kubenswrapper[4972]: I1121 09:47:19.789590 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827" path="/var/lib/kubelet/pods/ba1bcbe7-d01d-4c81-b3f6-12bcf6d73827/volumes" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.014261 4972 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-85d8m"] Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.015972 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-85d8m" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.021811 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.026386 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-85d8m"] Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.133990 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6f0334e-e5ea-429a-9b63-7a178a3d7c64-utilities\") pod \"certified-operators-85d8m\" (UID: \"f6f0334e-e5ea-429a-9b63-7a178a3d7c64\") " pod="openshift-marketplace/certified-operators-85d8m" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.134553 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t7x9\" (UniqueName: \"kubernetes.io/projected/f6f0334e-e5ea-429a-9b63-7a178a3d7c64-kube-api-access-9t7x9\") pod \"certified-operators-85d8m\" (UID: \"f6f0334e-e5ea-429a-9b63-7a178a3d7c64\") " pod="openshift-marketplace/certified-operators-85d8m" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.134602 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6f0334e-e5ea-429a-9b63-7a178a3d7c64-catalog-content\") pod \"certified-operators-85d8m\" (UID: \"f6f0334e-e5ea-429a-9b63-7a178a3d7c64\") " pod="openshift-marketplace/certified-operators-85d8m" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.208097 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n9p8z"] Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.209257 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n9p8z" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.213717 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.220898 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n9p8z"] Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.236230 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t7x9\" (UniqueName: \"kubernetes.io/projected/f6f0334e-e5ea-429a-9b63-7a178a3d7c64-kube-api-access-9t7x9\") pod \"certified-operators-85d8m\" (UID: \"f6f0334e-e5ea-429a-9b63-7a178a3d7c64\") " pod="openshift-marketplace/certified-operators-85d8m" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.236307 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6f0334e-e5ea-429a-9b63-7a178a3d7c64-catalog-content\") pod \"certified-operators-85d8m\" (UID: \"f6f0334e-e5ea-429a-9b63-7a178a3d7c64\") " pod="openshift-marketplace/certified-operators-85d8m" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.236369 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6f0334e-e5ea-429a-9b63-7a178a3d7c64-utilities\") pod \"certified-operators-85d8m\" (UID: \"f6f0334e-e5ea-429a-9b63-7a178a3d7c64\") " pod="openshift-marketplace/certified-operators-85d8m" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.237089 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6f0334e-e5ea-429a-9b63-7a178a3d7c64-utilities\") pod \"certified-operators-85d8m\" (UID: \"f6f0334e-e5ea-429a-9b63-7a178a3d7c64\") " pod="openshift-marketplace/certified-operators-85d8m" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.237516 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6f0334e-e5ea-429a-9b63-7a178a3d7c64-catalog-content\") pod \"certified-operators-85d8m\" (UID: \"f6f0334e-e5ea-429a-9b63-7a178a3d7c64\") " pod="openshift-marketplace/certified-operators-85d8m" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.257135 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t7x9\" (UniqueName: \"kubernetes.io/projected/f6f0334e-e5ea-429a-9b63-7a178a3d7c64-kube-api-access-9t7x9\") pod \"certified-operators-85d8m\" (UID: \"f6f0334e-e5ea-429a-9b63-7a178a3d7c64\") " pod="openshift-marketplace/certified-operators-85d8m" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.337637 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4btg\" (UniqueName: \"kubernetes.io/projected/30ee0fd4-14ae-4119-8c87-0ac7e529630a-kube-api-access-t4btg\") pod \"redhat-operators-n9p8z\" (UID: \"30ee0fd4-14ae-4119-8c87-0ac7e529630a\") " pod="openshift-marketplace/redhat-operators-n9p8z" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.337759 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30ee0fd4-14ae-4119-8c87-0ac7e529630a-catalog-content\") pod \"redhat-operators-n9p8z\" (UID: 
\"30ee0fd4-14ae-4119-8c87-0ac7e529630a\") " pod="openshift-marketplace/redhat-operators-n9p8z" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.337900 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30ee0fd4-14ae-4119-8c87-0ac7e529630a-utilities\") pod \"redhat-operators-n9p8z\" (UID: \"30ee0fd4-14ae-4119-8c87-0ac7e529630a\") " pod="openshift-marketplace/redhat-operators-n9p8z" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.340741 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-85d8m" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.439709 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30ee0fd4-14ae-4119-8c87-0ac7e529630a-catalog-content\") pod \"redhat-operators-n9p8z\" (UID: \"30ee0fd4-14ae-4119-8c87-0ac7e529630a\") " pod="openshift-marketplace/redhat-operators-n9p8z" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.439810 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30ee0fd4-14ae-4119-8c87-0ac7e529630a-utilities\") pod \"redhat-operators-n9p8z\" (UID: \"30ee0fd4-14ae-4119-8c87-0ac7e529630a\") " pod="openshift-marketplace/redhat-operators-n9p8z" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.439904 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4btg\" (UniqueName: \"kubernetes.io/projected/30ee0fd4-14ae-4119-8c87-0ac7e529630a-kube-api-access-t4btg\") pod \"redhat-operators-n9p8z\" (UID: \"30ee0fd4-14ae-4119-8c87-0ac7e529630a\") " pod="openshift-marketplace/redhat-operators-n9p8z" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.440224 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30ee0fd4-14ae-4119-8c87-0ac7e529630a-catalog-content\") pod \"redhat-operators-n9p8z\" (UID: \"30ee0fd4-14ae-4119-8c87-0ac7e529630a\") " pod="openshift-marketplace/redhat-operators-n9p8z" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.440469 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30ee0fd4-14ae-4119-8c87-0ac7e529630a-utilities\") pod \"redhat-operators-n9p8z\" (UID: \"30ee0fd4-14ae-4119-8c87-0ac7e529630a\") " pod="openshift-marketplace/redhat-operators-n9p8z" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.475034 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4btg\" (UniqueName: \"kubernetes.io/projected/30ee0fd4-14ae-4119-8c87-0ac7e529630a-kube-api-access-t4btg\") pod \"redhat-operators-n9p8z\" (UID: \"30ee0fd4-14ae-4119-8c87-0ac7e529630a\") " pod="openshift-marketplace/redhat-operators-n9p8z" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.527929 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n9p8z" Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.596922 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-85d8m"] Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.715672 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85d8m" event={"ID":"f6f0334e-e5ea-429a-9b63-7a178a3d7c64","Type":"ContainerStarted","Data":"21bd5c2d73507b7c7f89ad95c3b7b99bdf53a668d3b5cf2adc85a888f958a093"} Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.720538 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vj4wg" event={"ID":"efbc5e5e-d261-4f3b-b90b-febd39de0327","Type":"ContainerStarted","Data":"5fc381b92d5ff8f7e518e54ab65458e963f37662fd0657d3b81fd72896c18023"} Nov 21 09:47:20 crc kubenswrapper[4972]: I1121 09:47:20.721543 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n9p8z"] Nov 21 09:47:21 crc kubenswrapper[4972]: I1121 09:47:21.727515 4972 generic.go:334] "Generic (PLEG): container finished" podID="30ee0fd4-14ae-4119-8c87-0ac7e529630a" containerID="aaaba8d2f8c9a49e68fa20e547c582ed289f0a44f78922a49ccb6c0332cb22f3" exitCode=0 Nov 21 09:47:21 crc kubenswrapper[4972]: I1121 09:47:21.727623 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9p8z" event={"ID":"30ee0fd4-14ae-4119-8c87-0ac7e529630a","Type":"ContainerDied","Data":"aaaba8d2f8c9a49e68fa20e547c582ed289f0a44f78922a49ccb6c0332cb22f3"} Nov 21 09:47:21 crc kubenswrapper[4972]: I1121 09:47:21.728087 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9p8z" event={"ID":"30ee0fd4-14ae-4119-8c87-0ac7e529630a","Type":"ContainerStarted","Data":"756b300567d5572e643b1d7afbe89322d486ac0104863032fd93446caa5ab0c0"} Nov 21 09:47:21 crc kubenswrapper[4972]: I1121 09:47:21.735158 4972 generic.go:334] "Generic (PLEG): container finished" podID="f6f0334e-e5ea-429a-9b63-7a178a3d7c64" containerID="fea3def9036bef3c9517ae6b38511b1ea9cd3074521717918a4ac7d6695b3ad0" exitCode=0 Nov 21 09:47:21 crc kubenswrapper[4972]: I1121 09:47:21.735419 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85d8m" event={"ID":"f6f0334e-e5ea-429a-9b63-7a178a3d7c64","Type":"ContainerDied","Data":"fea3def9036bef3c9517ae6b38511b1ea9cd3074521717918a4ac7d6695b3ad0"} Nov 21 09:47:21 crc kubenswrapper[4972]: I1121 09:47:21.740104 4972 generic.go:334] "Generic (PLEG): container finished" podID="efbc5e5e-d261-4f3b-b90b-febd39de0327" containerID="5fc381b92d5ff8f7e518e54ab65458e963f37662fd0657d3b81fd72896c18023" exitCode=0 Nov 21 09:47:21 crc kubenswrapper[4972]: I1121 09:47:21.740504 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vj4wg" event={"ID":"efbc5e5e-d261-4f3b-b90b-febd39de0327","Type":"ContainerDied","Data":"5fc381b92d5ff8f7e518e54ab65458e963f37662fd0657d3b81fd72896c18023"} Nov 21 09:47:22 crc kubenswrapper[4972]: I1121 09:47:22.415519 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-675hp"] Nov 21 09:47:22 crc kubenswrapper[4972]: I1121 09:47:22.417371 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-675hp" Nov 21 09:47:22 crc kubenswrapper[4972]: I1121 09:47:22.424006 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 21 09:47:22 crc kubenswrapper[4972]: I1121 09:47:22.428022 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-675hp"] Nov 21 09:47:22 crc kubenswrapper[4972]: I1121 09:47:22.567877 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/608f224f-2fba-44cb-a254-54e0bf1b64ee-catalog-content\") pod \"redhat-marketplace-675hp\" (UID: \"608f224f-2fba-44cb-a254-54e0bf1b64ee\") " pod="openshift-marketplace/redhat-marketplace-675hp" Nov 21 09:47:22 crc kubenswrapper[4972]: I1121 09:47:22.568243 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/608f224f-2fba-44cb-a254-54e0bf1b64ee-utilities\") pod \"redhat-marketplace-675hp\" (UID: \"608f224f-2fba-44cb-a254-54e0bf1b64ee\") " pod="openshift-marketplace/redhat-marketplace-675hp" Nov 21 09:47:22 crc kubenswrapper[4972]: I1121 09:47:22.568265 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcdmp\" (UniqueName: \"kubernetes.io/projected/608f224f-2fba-44cb-a254-54e0bf1b64ee-kube-api-access-fcdmp\") pod \"redhat-marketplace-675hp\" (UID: \"608f224f-2fba-44cb-a254-54e0bf1b64ee\") " pod="openshift-marketplace/redhat-marketplace-675hp" Nov 21 09:47:22 crc kubenswrapper[4972]: I1121 09:47:22.669715 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/608f224f-2fba-44cb-a254-54e0bf1b64ee-catalog-content\") pod \"redhat-marketplace-675hp\" (UID: \"608f224f-2fba-44cb-a254-54e0bf1b64ee\") " pod="openshift-marketplace/redhat-marketplace-675hp" Nov 21 09:47:22 crc kubenswrapper[4972]: I1121 09:47:22.669762 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/608f224f-2fba-44cb-a254-54e0bf1b64ee-utilities\") pod \"redhat-marketplace-675hp\" (UID: \"608f224f-2fba-44cb-a254-54e0bf1b64ee\") " pod="openshift-marketplace/redhat-marketplace-675hp" Nov 21 09:47:22 crc kubenswrapper[4972]: I1121 09:47:22.669786 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcdmp\" (UniqueName: \"kubernetes.io/projected/608f224f-2fba-44cb-a254-54e0bf1b64ee-kube-api-access-fcdmp\") pod \"redhat-marketplace-675hp\" (UID: \"608f224f-2fba-44cb-a254-54e0bf1b64ee\") " pod="openshift-marketplace/redhat-marketplace-675hp" Nov 21 09:47:22 crc kubenswrapper[4972]: I1121 09:47:22.670291 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/608f224f-2fba-44cb-a254-54e0bf1b64ee-catalog-content\") pod \"redhat-marketplace-675hp\" (UID: \"608f224f-2fba-44cb-a254-54e0bf1b64ee\") " pod="openshift-marketplace/redhat-marketplace-675hp" Nov 21 09:47:22 crc kubenswrapper[4972]: I1121 09:47:22.670565 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/608f224f-2fba-44cb-a254-54e0bf1b64ee-utilities\") pod \"redhat-marketplace-675hp\" (UID: 
\"608f224f-2fba-44cb-a254-54e0bf1b64ee\") " pod="openshift-marketplace/redhat-marketplace-675hp" Nov 21 09:47:22 crc kubenswrapper[4972]: I1121 09:47:22.689160 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcdmp\" (UniqueName: \"kubernetes.io/projected/608f224f-2fba-44cb-a254-54e0bf1b64ee-kube-api-access-fcdmp\") pod \"redhat-marketplace-675hp\" (UID: \"608f224f-2fba-44cb-a254-54e0bf1b64ee\") " pod="openshift-marketplace/redhat-marketplace-675hp" Nov 21 09:47:22 crc kubenswrapper[4972]: I1121 09:47:22.751025 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vj4wg" event={"ID":"efbc5e5e-d261-4f3b-b90b-febd39de0327","Type":"ContainerStarted","Data":"d7427fc0cb0a7be625f24adebdc2cdb70039b44b47b9e999acf4ef3e9445e021"} Nov 21 09:47:22 crc kubenswrapper[4972]: I1121 09:47:22.756317 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9p8z" event={"ID":"30ee0fd4-14ae-4119-8c87-0ac7e529630a","Type":"ContainerStarted","Data":"8dbe9e7a2bbd27c912cf804f8d8a11bf3282f37eb448e30e34374680542563f0"} Nov 21 09:47:22 crc kubenswrapper[4972]: I1121 09:47:22.759407 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85d8m" event={"ID":"f6f0334e-e5ea-429a-9b63-7a178a3d7c64","Type":"ContainerStarted","Data":"14a2a328902c330a11e4e467d2f5d8479dbd2fe605b22b12bb9bd9a662ee979d"} Nov 21 09:47:22 crc kubenswrapper[4972]: I1121 09:47:22.780417 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vj4wg" podStartSLOduration=3.189028447 podStartE2EDuration="5.780385173s" podCreationTimestamp="2025-11-21 09:47:17 +0000 UTC" firstStartedPulling="2025-11-21 09:47:19.711257006 +0000 UTC m=+384.820399504" lastFinishedPulling="2025-11-21 09:47:22.302613732 +0000 UTC m=+387.411756230" observedRunningTime="2025-11-21 09:47:22.775327446 +0000 UTC m=+387.884469964" watchObservedRunningTime="2025-11-21 09:47:22.780385173 +0000 UTC m=+387.889527671" Nov 21 09:47:22 crc kubenswrapper[4972]: I1121 09:47:22.803191 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-675hp" Nov 21 09:47:23 crc kubenswrapper[4972]: I1121 09:47:23.081930 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-675hp"] Nov 21 09:47:23 crc kubenswrapper[4972]: I1121 09:47:23.769938 4972 generic.go:334] "Generic (PLEG): container finished" podID="30ee0fd4-14ae-4119-8c87-0ac7e529630a" containerID="8dbe9e7a2bbd27c912cf804f8d8a11bf3282f37eb448e30e34374680542563f0" exitCode=0 Nov 21 09:47:23 crc kubenswrapper[4972]: I1121 09:47:23.769993 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9p8z" event={"ID":"30ee0fd4-14ae-4119-8c87-0ac7e529630a","Type":"ContainerDied","Data":"8dbe9e7a2bbd27c912cf804f8d8a11bf3282f37eb448e30e34374680542563f0"} Nov 21 09:47:23 crc kubenswrapper[4972]: I1121 09:47:23.773905 4972 generic.go:334] "Generic (PLEG): container finished" podID="608f224f-2fba-44cb-a254-54e0bf1b64ee" containerID="654633c4c1d491bcd03ae118fca3940aa4d22444b34fe3f2c41672f5beaf07b3" exitCode=0 Nov 21 09:47:23 crc kubenswrapper[4972]: I1121 09:47:23.773983 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-675hp" event={"ID":"608f224f-2fba-44cb-a254-54e0bf1b64ee","Type":"ContainerDied","Data":"654633c4c1d491bcd03ae118fca3940aa4d22444b34fe3f2c41672f5beaf07b3"} Nov 21 09:47:23 crc kubenswrapper[4972]: I1121 09:47:23.774019 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-675hp" event={"ID":"608f224f-2fba-44cb-a254-54e0bf1b64ee","Type":"ContainerStarted","Data":"df32493a87fcd64512ad98ba58bf470a00a4499bddacd2cc10af85457ecaaf1e"} Nov 21 09:47:23 crc kubenswrapper[4972]: I1121 09:47:23.778977 4972 generic.go:334] "Generic (PLEG): container finished" podID="f6f0334e-e5ea-429a-9b63-7a178a3d7c64" containerID="14a2a328902c330a11e4e467d2f5d8479dbd2fe605b22b12bb9bd9a662ee979d" exitCode=0 Nov 21 09:47:23 crc kubenswrapper[4972]: I1121 09:47:23.779915 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85d8m" event={"ID":"f6f0334e-e5ea-429a-9b63-7a178a3d7c64","Type":"ContainerDied","Data":"14a2a328902c330a11e4e467d2f5d8479dbd2fe605b22b12bb9bd9a662ee979d"} Nov 21 09:47:24 crc kubenswrapper[4972]: I1121 09:47:24.787577 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85d8m" event={"ID":"f6f0334e-e5ea-429a-9b63-7a178a3d7c64","Type":"ContainerStarted","Data":"d2a2b2b39ddbfed79adfee3dd01e5ff9bd7723375a5aabf76dbffcecd95ee69a"} Nov 21 09:47:24 crc kubenswrapper[4972]: I1121 09:47:24.791220 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9p8z" event={"ID":"30ee0fd4-14ae-4119-8c87-0ac7e529630a","Type":"ContainerStarted","Data":"6ef5309fc1625c7e7db65fa613e4e7874aeb4c2c0df70f5961c0ca3f968ae722"} Nov 21 09:47:24 crc kubenswrapper[4972]: I1121 09:47:24.793437 4972 generic.go:334] "Generic (PLEG): container finished" podID="608f224f-2fba-44cb-a254-54e0bf1b64ee" containerID="94b1d97c84c9ee803ad51c051ebda5421b2005cf15277b6212a5b5412d55777f" exitCode=0 Nov 21 09:47:24 crc kubenswrapper[4972]: I1121 09:47:24.793494 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-675hp" event={"ID":"608f224f-2fba-44cb-a254-54e0bf1b64ee","Type":"ContainerDied","Data":"94b1d97c84c9ee803ad51c051ebda5421b2005cf15277b6212a5b5412d55777f"} Nov 21 09:47:24 crc 
kubenswrapper[4972]: I1121 09:47:24.804726 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-85d8m" podStartSLOduration=3.273785802 podStartE2EDuration="5.804701569s" podCreationTimestamp="2025-11-21 09:47:19 +0000 UTC" firstStartedPulling="2025-11-21 09:47:21.737432651 +0000 UTC m=+386.846575149" lastFinishedPulling="2025-11-21 09:47:24.268348418 +0000 UTC m=+389.377490916" observedRunningTime="2025-11-21 09:47:24.803355842 +0000 UTC m=+389.912498360" watchObservedRunningTime="2025-11-21 09:47:24.804701569 +0000 UTC m=+389.913844067" Nov 21 09:47:24 crc kubenswrapper[4972]: I1121 09:47:24.839900 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n9p8z" podStartSLOduration=2.371600525 podStartE2EDuration="4.839881373s" podCreationTimestamp="2025-11-21 09:47:20 +0000 UTC" firstStartedPulling="2025-11-21 09:47:21.73074211 +0000 UTC m=+386.839884608" lastFinishedPulling="2025-11-21 09:47:24.199022958 +0000 UTC m=+389.308165456" observedRunningTime="2025-11-21 09:47:24.836288275 +0000 UTC m=+389.945430773" watchObservedRunningTime="2025-11-21 09:47:24.839881373 +0000 UTC m=+389.949023881" Nov 21 09:47:26 crc kubenswrapper[4972]: I1121 09:47:26.805345 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-675hp" event={"ID":"608f224f-2fba-44cb-a254-54e0bf1b64ee","Type":"ContainerStarted","Data":"5acdee8f981fac5dc192acdfb2c97c99a59bd050734e91a71bdd2134df96052c"} Nov 21 09:47:28 crc kubenswrapper[4972]: I1121 09:47:28.147453 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vj4wg" Nov 21 09:47:28 crc kubenswrapper[4972]: I1121 09:47:28.147535 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vj4wg" Nov 21 09:47:28 crc kubenswrapper[4972]: I1121 09:47:28.501361 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vj4wg" Nov 21 09:47:28 crc kubenswrapper[4972]: I1121 09:47:28.527121 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-675hp" podStartSLOduration=5.001086175 podStartE2EDuration="6.527086731s" podCreationTimestamp="2025-11-21 09:47:22 +0000 UTC" firstStartedPulling="2025-11-21 09:47:23.777586265 +0000 UTC m=+388.886728763" lastFinishedPulling="2025-11-21 09:47:25.303586821 +0000 UTC m=+390.412729319" observedRunningTime="2025-11-21 09:47:26.825461152 +0000 UTC m=+391.934603650" watchObservedRunningTime="2025-11-21 09:47:28.527086731 +0000 UTC m=+393.636229239" Nov 21 09:47:28 crc kubenswrapper[4972]: I1121 09:47:28.862492 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vj4wg" Nov 21 09:47:30 crc kubenswrapper[4972]: I1121 09:47:30.341686 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-85d8m" Nov 21 09:47:30 crc kubenswrapper[4972]: I1121 09:47:30.342054 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-85d8m" Nov 21 09:47:30 crc kubenswrapper[4972]: I1121 09:47:30.390228 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-85d8m" Nov 21 09:47:30 crc 
kubenswrapper[4972]: I1121 09:47:30.528882 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n9p8z" Nov 21 09:47:30 crc kubenswrapper[4972]: I1121 09:47:30.528950 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n9p8z" Nov 21 09:47:30 crc kubenswrapper[4972]: I1121 09:47:30.568608 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n9p8z" Nov 21 09:47:30 crc kubenswrapper[4972]: I1121 09:47:30.876318 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n9p8z" Nov 21 09:47:30 crc kubenswrapper[4972]: I1121 09:47:30.888145 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-85d8m" Nov 21 09:47:32 crc kubenswrapper[4972]: I1121 09:47:32.804221 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-675hp" Nov 21 09:47:32 crc kubenswrapper[4972]: I1121 09:47:32.804292 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-675hp" Nov 21 09:47:32 crc kubenswrapper[4972]: I1121 09:47:32.853644 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-675hp" Nov 21 09:47:32 crc kubenswrapper[4972]: I1121 09:47:32.894328 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-675hp" Nov 21 09:47:56 crc kubenswrapper[4972]: I1121 09:47:56.179355 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 09:47:56 crc kubenswrapper[4972]: I1121 09:47:56.179942 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 09:48:26 crc kubenswrapper[4972]: I1121 09:48:26.179107 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 09:48:26 crc kubenswrapper[4972]: I1121 09:48:26.179799 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 09:48:56 crc kubenswrapper[4972]: I1121 09:48:56.179337 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 09:48:56 crc 
kubenswrapper[4972]: I1121 09:48:56.180788 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 09:48:56 crc kubenswrapper[4972]: I1121 09:48:56.180932 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 09:48:56 crc kubenswrapper[4972]: I1121 09:48:56.181531 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4d6ef47244cd721a2f376762ea2eeca1f7022ab7431ea40b087c23a5af7850eb"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 09:48:56 crc kubenswrapper[4972]: I1121 09:48:56.181742 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://4d6ef47244cd721a2f376762ea2eeca1f7022ab7431ea40b087c23a5af7850eb" gracePeriod=600 Nov 21 09:48:56 crc kubenswrapper[4972]: I1121 09:48:56.394867 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="4d6ef47244cd721a2f376762ea2eeca1f7022ab7431ea40b087c23a5af7850eb" exitCode=0 Nov 21 09:48:56 crc kubenswrapper[4972]: I1121 09:48:56.394927 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"4d6ef47244cd721a2f376762ea2eeca1f7022ab7431ea40b087c23a5af7850eb"} Nov 21 09:48:56 crc kubenswrapper[4972]: I1121 09:48:56.395011 4972 scope.go:117] "RemoveContainer" containerID="ae01d11bf108fd06905f0f9b12de600c1c509caab8d40929d5b6981236ec0d0b" Nov 21 09:48:57 crc kubenswrapper[4972]: I1121 09:48:57.405164 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"7154c6aa5daea51259af1d22c16b90973a0b20ad287437956fb86e8298c8b683"} Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.630154 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-krkh4"] Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.632051 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.656990 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-krkh4"] Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.756544 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e91701ae-52d7-463a-a891-c2c82f662a5f-registry-tls\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.756659 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.756738 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e91701ae-52d7-463a-a891-c2c82f662a5f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.756787 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d57n5\" (UniqueName: \"kubernetes.io/projected/e91701ae-52d7-463a-a891-c2c82f662a5f-kube-api-access-d57n5\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.756862 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e91701ae-52d7-463a-a891-c2c82f662a5f-registry-certificates\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.756904 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e91701ae-52d7-463a-a891-c2c82f662a5f-trusted-ca\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.756941 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e91701ae-52d7-463a-a891-c2c82f662a5f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.757009 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/e91701ae-52d7-463a-a891-c2c82f662a5f-bound-sa-token\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.784178 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.858115 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e91701ae-52d7-463a-a891-c2c82f662a5f-registry-tls\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.859291 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e91701ae-52d7-463a-a891-c2c82f662a5f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.859657 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d57n5\" (UniqueName: \"kubernetes.io/projected/e91701ae-52d7-463a-a891-c2c82f662a5f-kube-api-access-d57n5\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.859998 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e91701ae-52d7-463a-a891-c2c82f662a5f-registry-certificates\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.860120 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e91701ae-52d7-463a-a891-c2c82f662a5f-trusted-ca\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.860639 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e91701ae-52d7-463a-a891-c2c82f662a5f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.860895 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e91701ae-52d7-463a-a891-c2c82f662a5f-registry-certificates\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.861215 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e91701ae-52d7-463a-a891-c2c82f662a5f-trusted-ca\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.861257 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e91701ae-52d7-463a-a891-c2c82f662a5f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.861323 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e91701ae-52d7-463a-a891-c2c82f662a5f-bound-sa-token\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.864881 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e91701ae-52d7-463a-a891-c2c82f662a5f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.864982 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e91701ae-52d7-463a-a891-c2c82f662a5f-registry-tls\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.876654 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d57n5\" (UniqueName: \"kubernetes.io/projected/e91701ae-52d7-463a-a891-c2c82f662a5f-kube-api-access-d57n5\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.882290 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e91701ae-52d7-463a-a891-c2c82f662a5f-bound-sa-token\") pod \"image-registry-66df7c8f76-krkh4\" (UID: \"e91701ae-52d7-463a-a891-c2c82f662a5f\") " pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:22 crc kubenswrapper[4972]: I1121 09:49:22.954687 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:23 crc kubenswrapper[4972]: I1121 09:49:23.191492 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-krkh4"] Nov 21 09:49:23 crc kubenswrapper[4972]: W1121 09:49:23.198898 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode91701ae_52d7_463a_a891_c2c82f662a5f.slice/crio-faa011300820efe37871c502c4bc9f6d0e6cc4719f5a6c0236c40d51d038f60b WatchSource:0}: Error finding container faa011300820efe37871c502c4bc9f6d0e6cc4719f5a6c0236c40d51d038f60b: Status 404 returned error can't find the container with id faa011300820efe37871c502c4bc9f6d0e6cc4719f5a6c0236c40d51d038f60b Nov 21 09:49:23 crc kubenswrapper[4972]: I1121 09:49:23.790354 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" event={"ID":"e91701ae-52d7-463a-a891-c2c82f662a5f","Type":"ContainerStarted","Data":"a1684336e665c6d80137df1af54ec57136f14911115cc6608a7fc674e81de833"} Nov 21 09:49:23 crc kubenswrapper[4972]: I1121 09:49:23.790678 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" event={"ID":"e91701ae-52d7-463a-a891-c2c82f662a5f","Type":"ContainerStarted","Data":"faa011300820efe37871c502c4bc9f6d0e6cc4719f5a6c0236c40d51d038f60b"} Nov 21 09:49:23 crc kubenswrapper[4972]: I1121 09:49:23.790701 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:23 crc kubenswrapper[4972]: I1121 09:49:23.818908 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" podStartSLOduration=1.818880176 podStartE2EDuration="1.818880176s" podCreationTimestamp="2025-11-21 09:49:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:49:23.81386743 +0000 UTC m=+508.923009968" watchObservedRunningTime="2025-11-21 09:49:23.818880176 +0000 UTC m=+508.928022724" Nov 21 09:49:42 crc kubenswrapper[4972]: I1121 09:49:42.961633 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-krkh4" Nov 21 09:49:43 crc kubenswrapper[4972]: I1121 09:49:43.046173 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5s9h7"] Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.090043 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" podUID="6e05e924-7aac-419c-82a7-0d9b9592b39f" containerName="registry" containerID="cri-o://77a2ddf75739b1046570462a786af260121fb70fc6510f4c5e9c7f0b7358aac0" gracePeriod=30 Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.427573 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.553732 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6e05e924-7aac-419c-82a7-0d9b9592b39f-ca-trust-extracted\") pod \"6e05e924-7aac-419c-82a7-0d9b9592b39f\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.553777 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e05e924-7aac-419c-82a7-0d9b9592b39f-trusted-ca\") pod \"6e05e924-7aac-419c-82a7-0d9b9592b39f\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.553817 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6e05e924-7aac-419c-82a7-0d9b9592b39f-installation-pull-secrets\") pod \"6e05e924-7aac-419c-82a7-0d9b9592b39f\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.553892 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6e05e924-7aac-419c-82a7-0d9b9592b39f-registry-tls\") pod \"6e05e924-7aac-419c-82a7-0d9b9592b39f\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.553943 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6e05e924-7aac-419c-82a7-0d9b9592b39f-bound-sa-token\") pod \"6e05e924-7aac-419c-82a7-0d9b9592b39f\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.553983 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-485b8\" (UniqueName: \"kubernetes.io/projected/6e05e924-7aac-419c-82a7-0d9b9592b39f-kube-api-access-485b8\") pod \"6e05e924-7aac-419c-82a7-0d9b9592b39f\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.554028 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6e05e924-7aac-419c-82a7-0d9b9592b39f-registry-certificates\") pod \"6e05e924-7aac-419c-82a7-0d9b9592b39f\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.554155 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"6e05e924-7aac-419c-82a7-0d9b9592b39f\" (UID: \"6e05e924-7aac-419c-82a7-0d9b9592b39f\") " Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.554721 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e05e924-7aac-419c-82a7-0d9b9592b39f-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "6e05e924-7aac-419c-82a7-0d9b9592b39f" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.555152 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e05e924-7aac-419c-82a7-0d9b9592b39f-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "6e05e924-7aac-419c-82a7-0d9b9592b39f" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.564089 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e05e924-7aac-419c-82a7-0d9b9592b39f-kube-api-access-485b8" (OuterVolumeSpecName: "kube-api-access-485b8") pod "6e05e924-7aac-419c-82a7-0d9b9592b39f" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f"). InnerVolumeSpecName "kube-api-access-485b8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.564250 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e05e924-7aac-419c-82a7-0d9b9592b39f-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "6e05e924-7aac-419c-82a7-0d9b9592b39f" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.565067 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e05e924-7aac-419c-82a7-0d9b9592b39f-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "6e05e924-7aac-419c-82a7-0d9b9592b39f" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.565256 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e05e924-7aac-419c-82a7-0d9b9592b39f-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "6e05e924-7aac-419c-82a7-0d9b9592b39f" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.567093 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "6e05e924-7aac-419c-82a7-0d9b9592b39f" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.576617 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e05e924-7aac-419c-82a7-0d9b9592b39f-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "6e05e924-7aac-419c-82a7-0d9b9592b39f" (UID: "6e05e924-7aac-419c-82a7-0d9b9592b39f"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.655412 4972 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6e05e924-7aac-419c-82a7-0d9b9592b39f-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.655454 4972 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6e05e924-7aac-419c-82a7-0d9b9592b39f-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.655465 4972 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e05e924-7aac-419c-82a7-0d9b9592b39f-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.655475 4972 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6e05e924-7aac-419c-82a7-0d9b9592b39f-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.655484 4972 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6e05e924-7aac-419c-82a7-0d9b9592b39f-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.655491 4972 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6e05e924-7aac-419c-82a7-0d9b9592b39f-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 21 09:50:08 crc kubenswrapper[4972]: I1121 09:50:08.655499 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-485b8\" (UniqueName: \"kubernetes.io/projected/6e05e924-7aac-419c-82a7-0d9b9592b39f-kube-api-access-485b8\") on node \"crc\" DevicePath \"\"" Nov 21 09:50:09 crc kubenswrapper[4972]: I1121 09:50:09.084378 4972 generic.go:334] "Generic (PLEG): container finished" podID="6e05e924-7aac-419c-82a7-0d9b9592b39f" containerID="77a2ddf75739b1046570462a786af260121fb70fc6510f4c5e9c7f0b7358aac0" exitCode=0 Nov 21 09:50:09 crc kubenswrapper[4972]: I1121 09:50:09.084430 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" event={"ID":"6e05e924-7aac-419c-82a7-0d9b9592b39f","Type":"ContainerDied","Data":"77a2ddf75739b1046570462a786af260121fb70fc6510f4c5e9c7f0b7358aac0"} Nov 21 09:50:09 crc kubenswrapper[4972]: I1121 09:50:09.084470 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" event={"ID":"6e05e924-7aac-419c-82a7-0d9b9592b39f","Type":"ContainerDied","Data":"0ab29f002debf56eff0738f7cd665ea0f6e68c3cd4068c01015efb813b970404"} Nov 21 09:50:09 crc kubenswrapper[4972]: I1121 09:50:09.084495 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5s9h7" Nov 21 09:50:09 crc kubenswrapper[4972]: I1121 09:50:09.084502 4972 scope.go:117] "RemoveContainer" containerID="77a2ddf75739b1046570462a786af260121fb70fc6510f4c5e9c7f0b7358aac0" Nov 21 09:50:09 crc kubenswrapper[4972]: I1121 09:50:09.110538 4972 scope.go:117] "RemoveContainer" containerID="77a2ddf75739b1046570462a786af260121fb70fc6510f4c5e9c7f0b7358aac0" Nov 21 09:50:09 crc kubenswrapper[4972]: E1121 09:50:09.112103 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77a2ddf75739b1046570462a786af260121fb70fc6510f4c5e9c7f0b7358aac0\": container with ID starting with 77a2ddf75739b1046570462a786af260121fb70fc6510f4c5e9c7f0b7358aac0 not found: ID does not exist" containerID="77a2ddf75739b1046570462a786af260121fb70fc6510f4c5e9c7f0b7358aac0" Nov 21 09:50:09 crc kubenswrapper[4972]: I1121 09:50:09.112152 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77a2ddf75739b1046570462a786af260121fb70fc6510f4c5e9c7f0b7358aac0"} err="failed to get container status \"77a2ddf75739b1046570462a786af260121fb70fc6510f4c5e9c7f0b7358aac0\": rpc error: code = NotFound desc = could not find container \"77a2ddf75739b1046570462a786af260121fb70fc6510f4c5e9c7f0b7358aac0\": container with ID starting with 77a2ddf75739b1046570462a786af260121fb70fc6510f4c5e9c7f0b7358aac0 not found: ID does not exist" Nov 21 09:50:09 crc kubenswrapper[4972]: I1121 09:50:09.122445 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5s9h7"] Nov 21 09:50:09 crc kubenswrapper[4972]: I1121 09:50:09.123750 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5s9h7"] Nov 21 09:50:09 crc kubenswrapper[4972]: I1121 09:50:09.768567 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e05e924-7aac-419c-82a7-0d9b9592b39f" path="/var/lib/kubelet/pods/6e05e924-7aac-419c-82a7-0d9b9592b39f/volumes" Nov 21 09:50:56 crc kubenswrapper[4972]: I1121 09:50:56.178969 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 09:50:56 crc kubenswrapper[4972]: I1121 09:50:56.179710 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 09:51:26 crc kubenswrapper[4972]: I1121 09:51:26.179567 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 09:51:26 crc kubenswrapper[4972]: I1121 09:51:26.180325 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 09:51:56 crc kubenswrapper[4972]: I1121 09:51:56.179027 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 09:51:56 crc kubenswrapper[4972]: I1121 09:51:56.179890 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 09:51:56 crc kubenswrapper[4972]: I1121 09:51:56.179963 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 09:51:56 crc kubenswrapper[4972]: I1121 09:51:56.180946 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7154c6aa5daea51259af1d22c16b90973a0b20ad287437956fb86e8298c8b683"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 09:51:56 crc kubenswrapper[4972]: I1121 09:51:56.181068 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://7154c6aa5daea51259af1d22c16b90973a0b20ad287437956fb86e8298c8b683" gracePeriod=600 Nov 21 09:51:56 crc kubenswrapper[4972]: I1121 09:51:56.879657 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="7154c6aa5daea51259af1d22c16b90973a0b20ad287437956fb86e8298c8b683" exitCode=0 Nov 21 09:51:56 crc kubenswrapper[4972]: I1121 09:51:56.879737 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"7154c6aa5daea51259af1d22c16b90973a0b20ad287437956fb86e8298c8b683"} Nov 21 09:51:56 crc kubenswrapper[4972]: I1121 09:51:56.879980 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"291ebe608526f7ac9a64156ae1087bae54f85cc7ee3e395ff3ac3ef42a7a5a21"} Nov 21 09:51:56 crc kubenswrapper[4972]: I1121 09:51:56.880004 4972 scope.go:117] "RemoveContainer" containerID="4d6ef47244cd721a2f376762ea2eeca1f7022ab7431ea40b087c23a5af7850eb" Nov 21 09:53:39 crc kubenswrapper[4972]: I1121 09:53:39.424080 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kwmvg"] Nov 21 09:53:39 crc kubenswrapper[4972]: I1121 09:53:39.424940 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" podUID="c6ff0ba3-a662-4497-a3f1-70ea785beb6e" containerName="controller-manager" containerID="cri-o://f11e43c32d1978308afe6f68349193122edb70d34cb59223cb484d732d9bbc42" 
gracePeriod=30 Nov 21 09:53:39 crc kubenswrapper[4972]: I1121 09:53:39.516874 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw"] Nov 21 09:53:39 crc kubenswrapper[4972]: I1121 09:53:39.517084 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" podUID="a33da252-8a42-4fb1-8663-b4046881cae0" containerName="route-controller-manager" containerID="cri-o://796a430639b333770dd8d8076dfec81c7d24d42f73a29c8c890d938c7596f4de" gracePeriod=30 Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.318608 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.379121 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.466944 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-serving-cert\") pod \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\" (UID: \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\") " Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.467041 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-proxy-ca-bundles\") pod \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\" (UID: \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\") " Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.467187 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-client-ca\") pod \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\" (UID: \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\") " Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.467235 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zww24\" (UniqueName: \"kubernetes.io/projected/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-kube-api-access-zww24\") pod \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\" (UID: \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\") " Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.467287 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-config\") pod \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\" (UID: \"c6ff0ba3-a662-4497-a3f1-70ea785beb6e\") " Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.468079 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c6ff0ba3-a662-4497-a3f1-70ea785beb6e" (UID: "c6ff0ba3-a662-4497-a3f1-70ea785beb6e"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.468091 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-client-ca" (OuterVolumeSpecName: "client-ca") pod "c6ff0ba3-a662-4497-a3f1-70ea785beb6e" (UID: "c6ff0ba3-a662-4497-a3f1-70ea785beb6e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.468363 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-config" (OuterVolumeSpecName: "config") pod "c6ff0ba3-a662-4497-a3f1-70ea785beb6e" (UID: "c6ff0ba3-a662-4497-a3f1-70ea785beb6e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.471876 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c6ff0ba3-a662-4497-a3f1-70ea785beb6e" (UID: "c6ff0ba3-a662-4497-a3f1-70ea785beb6e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.472073 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-kube-api-access-zww24" (OuterVolumeSpecName: "kube-api-access-zww24") pod "c6ff0ba3-a662-4497-a3f1-70ea785beb6e" (UID: "c6ff0ba3-a662-4497-a3f1-70ea785beb6e"). InnerVolumeSpecName "kube-api-access-zww24". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.568331 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a33da252-8a42-4fb1-8663-b4046881cae0-serving-cert\") pod \"a33da252-8a42-4fb1-8663-b4046881cae0\" (UID: \"a33da252-8a42-4fb1-8663-b4046881cae0\") " Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.568441 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a33da252-8a42-4fb1-8663-b4046881cae0-config\") pod \"a33da252-8a42-4fb1-8663-b4046881cae0\" (UID: \"a33da252-8a42-4fb1-8663-b4046881cae0\") " Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.568499 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwz9w\" (UniqueName: \"kubernetes.io/projected/a33da252-8a42-4fb1-8663-b4046881cae0-kube-api-access-hwz9w\") pod \"a33da252-8a42-4fb1-8663-b4046881cae0\" (UID: \"a33da252-8a42-4fb1-8663-b4046881cae0\") " Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.568540 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a33da252-8a42-4fb1-8663-b4046881cae0-client-ca\") pod \"a33da252-8a42-4fb1-8663-b4046881cae0\" (UID: \"a33da252-8a42-4fb1-8663-b4046881cae0\") " Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.568864 4972 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.568887 4972 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-client-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.568899 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zww24\" (UniqueName: \"kubernetes.io/projected/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-kube-api-access-zww24\") on node \"crc\" DevicePath \"\"" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.568913 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.568924 4972 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6ff0ba3-a662-4497-a3f1-70ea785beb6e-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.569663 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a33da252-8a42-4fb1-8663-b4046881cae0-client-ca" (OuterVolumeSpecName: "client-ca") pod "a33da252-8a42-4fb1-8663-b4046881cae0" (UID: "a33da252-8a42-4fb1-8663-b4046881cae0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.571299 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a33da252-8a42-4fb1-8663-b4046881cae0-config" (OuterVolumeSpecName: "config") pod "a33da252-8a42-4fb1-8663-b4046881cae0" (UID: "a33da252-8a42-4fb1-8663-b4046881cae0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.573681 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a33da252-8a42-4fb1-8663-b4046881cae0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a33da252-8a42-4fb1-8663-b4046881cae0" (UID: "a33da252-8a42-4fb1-8663-b4046881cae0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.574565 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a33da252-8a42-4fb1-8663-b4046881cae0-kube-api-access-hwz9w" (OuterVolumeSpecName: "kube-api-access-hwz9w") pod "a33da252-8a42-4fb1-8663-b4046881cae0" (UID: "a33da252-8a42-4fb1-8663-b4046881cae0"). InnerVolumeSpecName "kube-api-access-hwz9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.575175 4972 generic.go:334] "Generic (PLEG): container finished" podID="a33da252-8a42-4fb1-8663-b4046881cae0" containerID="796a430639b333770dd8d8076dfec81c7d24d42f73a29c8c890d938c7596f4de" exitCode=0 Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.575256 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" event={"ID":"a33da252-8a42-4fb1-8663-b4046881cae0","Type":"ContainerDied","Data":"796a430639b333770dd8d8076dfec81c7d24d42f73a29c8c890d938c7596f4de"} Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.575284 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.575312 4972 scope.go:117] "RemoveContainer" containerID="796a430639b333770dd8d8076dfec81c7d24d42f73a29c8c890d938c7596f4de" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.575297 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw" event={"ID":"a33da252-8a42-4fb1-8663-b4046881cae0","Type":"ContainerDied","Data":"d1f8aea372e176dd30d28ef28de414ddf1e3a26c82b72b42bcd5f2ff94a6b008"} Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.577645 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.579165 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" event={"ID":"c6ff0ba3-a662-4497-a3f1-70ea785beb6e","Type":"ContainerDied","Data":"f11e43c32d1978308afe6f68349193122edb70d34cb59223cb484d732d9bbc42"} Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.576899 4972 generic.go:334] "Generic (PLEG): container finished" podID="c6ff0ba3-a662-4497-a3f1-70ea785beb6e" containerID="f11e43c32d1978308afe6f68349193122edb70d34cb59223cb484d732d9bbc42" exitCode=0 Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.580957 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-kwmvg" event={"ID":"c6ff0ba3-a662-4497-a3f1-70ea785beb6e","Type":"ContainerDied","Data":"9e14cda0b3f6af96548d29be5ced0c27311ed48f51f249dda8da4cfdd47b2a28"} Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.625251 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw"] Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.626621 4972 scope.go:117] "RemoveContainer" containerID="796a430639b333770dd8d8076dfec81c7d24d42f73a29c8c890d938c7596f4de" Nov 21 09:53:40 crc kubenswrapper[4972]: E1121 09:53:40.627011 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"796a430639b333770dd8d8076dfec81c7d24d42f73a29c8c890d938c7596f4de\": container with ID starting with 796a430639b333770dd8d8076dfec81c7d24d42f73a29c8c890d938c7596f4de not found: ID does not exist" containerID="796a430639b333770dd8d8076dfec81c7d24d42f73a29c8c890d938c7596f4de" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.627048 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"796a430639b333770dd8d8076dfec81c7d24d42f73a29c8c890d938c7596f4de"} err="failed to get container status \"796a430639b333770dd8d8076dfec81c7d24d42f73a29c8c890d938c7596f4de\": rpc error: code = NotFound desc = could not find container \"796a430639b333770dd8d8076dfec81c7d24d42f73a29c8c890d938c7596f4de\": container with ID starting with 796a430639b333770dd8d8076dfec81c7d24d42f73a29c8c890d938c7596f4de not found: ID does not exist" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.627075 4972 scope.go:117] "RemoveContainer" containerID="f11e43c32d1978308afe6f68349193122edb70d34cb59223cb484d732d9bbc42" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.635756 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hrxkw"] Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.641130 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kwmvg"] Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.643215 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kwmvg"] Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.645752 4972 scope.go:117] "RemoveContainer" containerID="f11e43c32d1978308afe6f68349193122edb70d34cb59223cb484d732d9bbc42" Nov 21 09:53:40 crc kubenswrapper[4972]: E1121 09:53:40.646145 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f11e43c32d1978308afe6f68349193122edb70d34cb59223cb484d732d9bbc42\": container with ID starting with f11e43c32d1978308afe6f68349193122edb70d34cb59223cb484d732d9bbc42 not found: ID does not exist" containerID="f11e43c32d1978308afe6f68349193122edb70d34cb59223cb484d732d9bbc42" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.646172 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f11e43c32d1978308afe6f68349193122edb70d34cb59223cb484d732d9bbc42"} err="failed to get container status \"f11e43c32d1978308afe6f68349193122edb70d34cb59223cb484d732d9bbc42\": rpc error: code = NotFound desc = could not find container \"f11e43c32d1978308afe6f68349193122edb70d34cb59223cb484d732d9bbc42\": container with ID starting with f11e43c32d1978308afe6f68349193122edb70d34cb59223cb484d732d9bbc42 not found: ID does not exist" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.669957 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a33da252-8a42-4fb1-8663-b4046881cae0-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.669989 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwz9w\" (UniqueName: \"kubernetes.io/projected/a33da252-8a42-4fb1-8663-b4046881cae0-kube-api-access-hwz9w\") on node \"crc\" DevicePath \"\"" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.670001 4972 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a33da252-8a42-4fb1-8663-b4046881cae0-client-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.670014 4972 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a33da252-8a42-4fb1-8663-b4046881cae0-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.717324 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-655cfb5696-jx6t4"] Nov 21 09:53:40 crc kubenswrapper[4972]: E1121 09:53:40.717783 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ff0ba3-a662-4497-a3f1-70ea785beb6e" containerName="controller-manager" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.717812 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ff0ba3-a662-4497-a3f1-70ea785beb6e" containerName="controller-manager" Nov 21 09:53:40 crc kubenswrapper[4972]: E1121 09:53:40.717845 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e05e924-7aac-419c-82a7-0d9b9592b39f" containerName="registry" Nov 21 
09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.717879 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e05e924-7aac-419c-82a7-0d9b9592b39f" containerName="registry" Nov 21 09:53:40 crc kubenswrapper[4972]: E1121 09:53:40.717905 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a33da252-8a42-4fb1-8663-b4046881cae0" containerName="route-controller-manager" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.717915 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="a33da252-8a42-4fb1-8663-b4046881cae0" containerName="route-controller-manager" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.718042 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e05e924-7aac-419c-82a7-0d9b9592b39f" containerName="registry" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.718073 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6ff0ba3-a662-4497-a3f1-70ea785beb6e" containerName="controller-manager" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.718086 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="a33da252-8a42-4fb1-8663-b4046881cae0" containerName="route-controller-manager" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.721699 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.726214 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.726452 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.726601 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.726709 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.726806 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.729214 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n"] Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.730063 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.732501 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.734482 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.736184 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.736514 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.736662 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.736899 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.737063 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.739096 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-655cfb5696-jx6t4"] Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.744607 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.746182 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n"] Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.872360 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ad47ef-23c3-4bc1-86dd-172fd5d50a5a-config\") pod \"controller-manager-655cfb5696-jx6t4\" (UID: \"43ad47ef-23c3-4bc1-86dd-172fd5d50a5a\") " pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.872444 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ad47ef-23c3-4bc1-86dd-172fd5d50a5a-serving-cert\") pod \"controller-manager-655cfb5696-jx6t4\" (UID: \"43ad47ef-23c3-4bc1-86dd-172fd5d50a5a\") " pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.872560 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4mjf\" (UniqueName: \"kubernetes.io/projected/031e1601-2864-4840-90d5-4b26a54663f6-kube-api-access-c4mjf\") pod \"route-controller-manager-649b577d55-jsf7n\" (UID: \"031e1601-2864-4840-90d5-4b26a54663f6\") " pod="openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.872655 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/031e1601-2864-4840-90d5-4b26a54663f6-client-ca\") pod \"route-controller-manager-649b577d55-jsf7n\" (UID: \"031e1601-2864-4840-90d5-4b26a54663f6\") " pod="openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.872692 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43ad47ef-23c3-4bc1-86dd-172fd5d50a5a-proxy-ca-bundles\") pod \"controller-manager-655cfb5696-jx6t4\" (UID: \"43ad47ef-23c3-4bc1-86dd-172fd5d50a5a\") " pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.872731 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/031e1601-2864-4840-90d5-4b26a54663f6-serving-cert\") pod \"route-controller-manager-649b577d55-jsf7n\" (UID: \"031e1601-2864-4840-90d5-4b26a54663f6\") " pod="openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.872807 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031e1601-2864-4840-90d5-4b26a54663f6-config\") pod \"route-controller-manager-649b577d55-jsf7n\" (UID: \"031e1601-2864-4840-90d5-4b26a54663f6\") " pod="openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.872855 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43ad47ef-23c3-4bc1-86dd-172fd5d50a5a-client-ca\") pod \"controller-manager-655cfb5696-jx6t4\" (UID: \"43ad47ef-23c3-4bc1-86dd-172fd5d50a5a\") " pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.872891 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68fs5\" (UniqueName: \"kubernetes.io/projected/43ad47ef-23c3-4bc1-86dd-172fd5d50a5a-kube-api-access-68fs5\") pod \"controller-manager-655cfb5696-jx6t4\" (UID: \"43ad47ef-23c3-4bc1-86dd-172fd5d50a5a\") " pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.973725 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ad47ef-23c3-4bc1-86dd-172fd5d50a5a-serving-cert\") pod \"controller-manager-655cfb5696-jx6t4\" (UID: \"43ad47ef-23c3-4bc1-86dd-172fd5d50a5a\") " pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.974555 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4mjf\" (UniqueName: \"kubernetes.io/projected/031e1601-2864-4840-90d5-4b26a54663f6-kube-api-access-c4mjf\") pod \"route-controller-manager-649b577d55-jsf7n\" (UID: \"031e1601-2864-4840-90d5-4b26a54663f6\") " pod="openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.974774 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/031e1601-2864-4840-90d5-4b26a54663f6-client-ca\") pod \"route-controller-manager-649b577d55-jsf7n\" (UID: \"031e1601-2864-4840-90d5-4b26a54663f6\") " pod="openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.974956 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43ad47ef-23c3-4bc1-86dd-172fd5d50a5a-proxy-ca-bundles\") pod \"controller-manager-655cfb5696-jx6t4\" (UID: \"43ad47ef-23c3-4bc1-86dd-172fd5d50a5a\") " pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.975102 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/031e1601-2864-4840-90d5-4b26a54663f6-serving-cert\") pod \"route-controller-manager-649b577d55-jsf7n\" (UID: \"031e1601-2864-4840-90d5-4b26a54663f6\") " pod="openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.975255 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43ad47ef-23c3-4bc1-86dd-172fd5d50a5a-client-ca\") pod \"controller-manager-655cfb5696-jx6t4\" (UID: \"43ad47ef-23c3-4bc1-86dd-172fd5d50a5a\") " pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.975375 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031e1601-2864-4840-90d5-4b26a54663f6-config\") pod \"route-controller-manager-649b577d55-jsf7n\" (UID: \"031e1601-2864-4840-90d5-4b26a54663f6\") " pod="openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.975499 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68fs5\" (UniqueName: \"kubernetes.io/projected/43ad47ef-23c3-4bc1-86dd-172fd5d50a5a-kube-api-access-68fs5\") pod \"controller-manager-655cfb5696-jx6t4\" (UID: \"43ad47ef-23c3-4bc1-86dd-172fd5d50a5a\") " pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.975641 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ad47ef-23c3-4bc1-86dd-172fd5d50a5a-config\") pod \"controller-manager-655cfb5696-jx6t4\" (UID: \"43ad47ef-23c3-4bc1-86dd-172fd5d50a5a\") " pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.978070 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ad47ef-23c3-4bc1-86dd-172fd5d50a5a-config\") pod \"controller-manager-655cfb5696-jx6t4\" (UID: \"43ad47ef-23c3-4bc1-86dd-172fd5d50a5a\") " pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.980072 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43ad47ef-23c3-4bc1-86dd-172fd5d50a5a-client-ca\") pod \"controller-manager-655cfb5696-jx6t4\" (UID: \"43ad47ef-23c3-4bc1-86dd-172fd5d50a5a\") " 
pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.980144 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/031e1601-2864-4840-90d5-4b26a54663f6-client-ca\") pod \"route-controller-manager-649b577d55-jsf7n\" (UID: \"031e1601-2864-4840-90d5-4b26a54663f6\") " pod="openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.980544 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43ad47ef-23c3-4bc1-86dd-172fd5d50a5a-proxy-ca-bundles\") pod \"controller-manager-655cfb5696-jx6t4\" (UID: \"43ad47ef-23c3-4bc1-86dd-172fd5d50a5a\") " pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.981922 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031e1601-2864-4840-90d5-4b26a54663f6-config\") pod \"route-controller-manager-649b577d55-jsf7n\" (UID: \"031e1601-2864-4840-90d5-4b26a54663f6\") " pod="openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.991335 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/031e1601-2864-4840-90d5-4b26a54663f6-serving-cert\") pod \"route-controller-manager-649b577d55-jsf7n\" (UID: \"031e1601-2864-4840-90d5-4b26a54663f6\") " pod="openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.992107 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ad47ef-23c3-4bc1-86dd-172fd5d50a5a-serving-cert\") pod \"controller-manager-655cfb5696-jx6t4\" (UID: \"43ad47ef-23c3-4bc1-86dd-172fd5d50a5a\") " pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.995718 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68fs5\" (UniqueName: \"kubernetes.io/projected/43ad47ef-23c3-4bc1-86dd-172fd5d50a5a-kube-api-access-68fs5\") pod \"controller-manager-655cfb5696-jx6t4\" (UID: \"43ad47ef-23c3-4bc1-86dd-172fd5d50a5a\") " pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" Nov 21 09:53:40 crc kubenswrapper[4972]: I1121 09:53:40.996433 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4mjf\" (UniqueName: \"kubernetes.io/projected/031e1601-2864-4840-90d5-4b26a54663f6-kube-api-access-c4mjf\") pod \"route-controller-manager-649b577d55-jsf7n\" (UID: \"031e1601-2864-4840-90d5-4b26a54663f6\") " pod="openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n" Nov 21 09:53:41 crc kubenswrapper[4972]: I1121 09:53:41.045798 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" Nov 21 09:53:41 crc kubenswrapper[4972]: I1121 09:53:41.053686 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n" Nov 21 09:53:41 crc kubenswrapper[4972]: I1121 09:53:41.463434 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n"] Nov 21 09:53:41 crc kubenswrapper[4972]: I1121 09:53:41.500113 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-655cfb5696-jx6t4"] Nov 21 09:53:41 crc kubenswrapper[4972]: W1121 09:53:41.505256 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43ad47ef_23c3_4bc1_86dd_172fd5d50a5a.slice/crio-ed030ca19adf2b83be39b98bd4a77d134150be08ab4157979c5767d7b70c328c WatchSource:0}: Error finding container ed030ca19adf2b83be39b98bd4a77d134150be08ab4157979c5767d7b70c328c: Status 404 returned error can't find the container with id ed030ca19adf2b83be39b98bd4a77d134150be08ab4157979c5767d7b70c328c Nov 21 09:53:41 crc kubenswrapper[4972]: I1121 09:53:41.589973 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" event={"ID":"43ad47ef-23c3-4bc1-86dd-172fd5d50a5a","Type":"ContainerStarted","Data":"ed030ca19adf2b83be39b98bd4a77d134150be08ab4157979c5767d7b70c328c"} Nov 21 09:53:41 crc kubenswrapper[4972]: I1121 09:53:41.591391 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n" event={"ID":"031e1601-2864-4840-90d5-4b26a54663f6","Type":"ContainerStarted","Data":"40cda6f6854df12f227a568c7be575b722c2face6f250527e15a3e2eeeefa87b"} Nov 21 09:53:41 crc kubenswrapper[4972]: I1121 09:53:41.779853 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a33da252-8a42-4fb1-8663-b4046881cae0" path="/var/lib/kubelet/pods/a33da252-8a42-4fb1-8663-b4046881cae0/volumes" Nov 21 09:53:41 crc kubenswrapper[4972]: I1121 09:53:41.781070 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6ff0ba3-a662-4497-a3f1-70ea785beb6e" path="/var/lib/kubelet/pods/c6ff0ba3-a662-4497-a3f1-70ea785beb6e/volumes" Nov 21 09:53:42 crc kubenswrapper[4972]: I1121 09:53:42.604726 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n" event={"ID":"031e1601-2864-4840-90d5-4b26a54663f6","Type":"ContainerStarted","Data":"31a7d206d1951fd85b79c9774afb9876076925784dbcf845f593680d39861418"} Nov 21 09:53:42 crc kubenswrapper[4972]: I1121 09:53:42.605150 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n" Nov 21 09:53:42 crc kubenswrapper[4972]: I1121 09:53:42.608505 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" event={"ID":"43ad47ef-23c3-4bc1-86dd-172fd5d50a5a","Type":"ContainerStarted","Data":"f786e0ceeabf0126a5da08d882a6cb2b1c7e91c0a483ab668bd35a79542eeeb7"} Nov 21 09:53:42 crc kubenswrapper[4972]: I1121 09:53:42.609283 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" Nov 21 09:53:42 crc kubenswrapper[4972]: I1121 09:53:42.612332 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n" Nov 21 09:53:42 crc kubenswrapper[4972]: I1121 09:53:42.625163 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" Nov 21 09:53:42 crc kubenswrapper[4972]: I1121 09:53:42.627081 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-649b577d55-jsf7n" podStartSLOduration=3.627053204 podStartE2EDuration="3.627053204s" podCreationTimestamp="2025-11-21 09:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:53:42.620401114 +0000 UTC m=+767.729543652" watchObservedRunningTime="2025-11-21 09:53:42.627053204 +0000 UTC m=+767.736195732" Nov 21 09:53:42 crc kubenswrapper[4972]: I1121 09:53:42.673296 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-655cfb5696-jx6t4" podStartSLOduration=3.673273984 podStartE2EDuration="3.673273984s" podCreationTimestamp="2025-11-21 09:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:53:42.671493246 +0000 UTC m=+767.780635754" watchObservedRunningTime="2025-11-21 09:53:42.673273984 +0000 UTC m=+767.782416502" Nov 21 09:53:45 crc kubenswrapper[4972]: I1121 09:53:45.528180 4972 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 21 09:53:56 crc kubenswrapper[4972]: I1121 09:53:56.178395 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 09:53:56 crc kubenswrapper[4972]: I1121 09:53:56.178878 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 09:54:26 crc kubenswrapper[4972]: I1121 09:54:26.179473 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 09:54:26 crc kubenswrapper[4972]: I1121 09:54:26.180191 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 09:54:56 crc kubenswrapper[4972]: I1121 09:54:56.179257 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 09:54:56 crc 
kubenswrapper[4972]: I1121 09:54:56.180045 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 09:54:56 crc kubenswrapper[4972]: I1121 09:54:56.180143 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 09:54:56 crc kubenswrapper[4972]: I1121 09:54:56.180914 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"291ebe608526f7ac9a64156ae1087bae54f85cc7ee3e395ff3ac3ef42a7a5a21"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 09:54:56 crc kubenswrapper[4972]: I1121 09:54:56.181010 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://291ebe608526f7ac9a64156ae1087bae54f85cc7ee3e395ff3ac3ef42a7a5a21" gracePeriod=600 Nov 21 09:54:57 crc kubenswrapper[4972]: I1121 09:54:57.066679 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="291ebe608526f7ac9a64156ae1087bae54f85cc7ee3e395ff3ac3ef42a7a5a21" exitCode=0 Nov 21 09:54:57 crc kubenswrapper[4972]: I1121 09:54:57.066743 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"291ebe608526f7ac9a64156ae1087bae54f85cc7ee3e395ff3ac3ef42a7a5a21"} Nov 21 09:54:57 crc kubenswrapper[4972]: I1121 09:54:57.067114 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"918ebedd08e1b9dafe5e4f67da03ac43cd2232ecc2da24b0d75e1131226f344f"} Nov 21 09:54:57 crc kubenswrapper[4972]: I1121 09:54:57.067138 4972 scope.go:117] "RemoveContainer" containerID="7154c6aa5daea51259af1d22c16b90973a0b20ad287437956fb86e8298c8b683" Nov 21 09:55:40 crc kubenswrapper[4972]: I1121 09:55:40.557336 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-7j7h4"] Nov 21 09:55:40 crc kubenswrapper[4972]: I1121 09:55:40.560174 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-7j7h4" Nov 21 09:55:40 crc kubenswrapper[4972]: I1121 09:55:40.565136 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 21 09:55:40 crc kubenswrapper[4972]: I1121 09:55:40.565419 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 21 09:55:40 crc kubenswrapper[4972]: I1121 09:55:40.565470 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 21 09:55:40 crc kubenswrapper[4972]: I1121 09:55:40.565676 4972 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-5sjhw" Nov 21 09:55:40 crc kubenswrapper[4972]: I1121 09:55:40.571950 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-7j7h4"] Nov 21 09:55:40 crc kubenswrapper[4972]: I1121 09:55:40.737586 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f981c5ab-a86c-463f-9e6c-9e463aa4defd-node-mnt\") pod \"crc-storage-crc-7j7h4\" (UID: \"f981c5ab-a86c-463f-9e6c-9e463aa4defd\") " pod="crc-storage/crc-storage-crc-7j7h4" Nov 21 09:55:40 crc kubenswrapper[4972]: I1121 09:55:40.737708 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mksk\" (UniqueName: \"kubernetes.io/projected/f981c5ab-a86c-463f-9e6c-9e463aa4defd-kube-api-access-7mksk\") pod \"crc-storage-crc-7j7h4\" (UID: \"f981c5ab-a86c-463f-9e6c-9e463aa4defd\") " pod="crc-storage/crc-storage-crc-7j7h4" Nov 21 09:55:40 crc kubenswrapper[4972]: I1121 09:55:40.737735 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f981c5ab-a86c-463f-9e6c-9e463aa4defd-crc-storage\") pod \"crc-storage-crc-7j7h4\" (UID: \"f981c5ab-a86c-463f-9e6c-9e463aa4defd\") " pod="crc-storage/crc-storage-crc-7j7h4" Nov 21 09:55:40 crc kubenswrapper[4972]: I1121 09:55:40.838682 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f981c5ab-a86c-463f-9e6c-9e463aa4defd-node-mnt\") pod \"crc-storage-crc-7j7h4\" (UID: \"f981c5ab-a86c-463f-9e6c-9e463aa4defd\") " pod="crc-storage/crc-storage-crc-7j7h4" Nov 21 09:55:40 crc kubenswrapper[4972]: I1121 09:55:40.838742 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mksk\" (UniqueName: \"kubernetes.io/projected/f981c5ab-a86c-463f-9e6c-9e463aa4defd-kube-api-access-7mksk\") pod \"crc-storage-crc-7j7h4\" (UID: \"f981c5ab-a86c-463f-9e6c-9e463aa4defd\") " pod="crc-storage/crc-storage-crc-7j7h4" Nov 21 09:55:40 crc kubenswrapper[4972]: I1121 09:55:40.838759 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f981c5ab-a86c-463f-9e6c-9e463aa4defd-crc-storage\") pod \"crc-storage-crc-7j7h4\" (UID: \"f981c5ab-a86c-463f-9e6c-9e463aa4defd\") " pod="crc-storage/crc-storage-crc-7j7h4" Nov 21 09:55:40 crc kubenswrapper[4972]: I1121 09:55:40.839107 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f981c5ab-a86c-463f-9e6c-9e463aa4defd-node-mnt\") pod \"crc-storage-crc-7j7h4\" (UID: \"f981c5ab-a86c-463f-9e6c-9e463aa4defd\") " 
pod="crc-storage/crc-storage-crc-7j7h4" Nov 21 09:55:40 crc kubenswrapper[4972]: I1121 09:55:40.839452 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f981c5ab-a86c-463f-9e6c-9e463aa4defd-crc-storage\") pod \"crc-storage-crc-7j7h4\" (UID: \"f981c5ab-a86c-463f-9e6c-9e463aa4defd\") " pod="crc-storage/crc-storage-crc-7j7h4" Nov 21 09:55:40 crc kubenswrapper[4972]: I1121 09:55:40.861662 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mksk\" (UniqueName: \"kubernetes.io/projected/f981c5ab-a86c-463f-9e6c-9e463aa4defd-kube-api-access-7mksk\") pod \"crc-storage-crc-7j7h4\" (UID: \"f981c5ab-a86c-463f-9e6c-9e463aa4defd\") " pod="crc-storage/crc-storage-crc-7j7h4" Nov 21 09:55:40 crc kubenswrapper[4972]: I1121 09:55:40.895223 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-7j7h4" Nov 21 09:55:41 crc kubenswrapper[4972]: I1121 09:55:41.338911 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-7j7h4"] Nov 21 09:55:41 crc kubenswrapper[4972]: I1121 09:55:41.353654 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 09:55:41 crc kubenswrapper[4972]: I1121 09:55:41.367073 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-7j7h4" event={"ID":"f981c5ab-a86c-463f-9e6c-9e463aa4defd","Type":"ContainerStarted","Data":"637b22c48af81a4990412dd9d3cda89d9932f45783cfd3718db2369e65a63223"} Nov 21 09:55:45 crc kubenswrapper[4972]: I1121 09:55:45.390225 4972 generic.go:334] "Generic (PLEG): container finished" podID="f981c5ab-a86c-463f-9e6c-9e463aa4defd" containerID="0915aa76e6131d641353a0050c95e39dea2149fec20aa1d558232ea365ab6cc2" exitCode=0 Nov 21 09:55:45 crc kubenswrapper[4972]: I1121 09:55:45.390299 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-7j7h4" event={"ID":"f981c5ab-a86c-463f-9e6c-9e463aa4defd","Type":"ContainerDied","Data":"0915aa76e6131d641353a0050c95e39dea2149fec20aa1d558232ea365ab6cc2"} Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.220531 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bxwhb"] Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.221531 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovn-controller" containerID="cri-o://bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0" gracePeriod=30 Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.221609 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="nbdb" containerID="cri-o://d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c" gracePeriod=30 Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.221538 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="sbdb" containerID="cri-o://58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd" gracePeriod=30 Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.221759 4972 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovn-acl-logging" containerID="cri-o://7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343" gracePeriod=30 Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.221660 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="kube-rbac-proxy-node" containerID="cri-o://338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c" gracePeriod=30 Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.221866 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3" gracePeriod=30 Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.221867 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="northd" containerID="cri-o://5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3" gracePeriod=30 Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.258043 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovnkube-controller" containerID="cri-o://500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346" gracePeriod=30 Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.404094 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxwhb_c159725e-4c82-4474-96d9-211f7d8db47f/ovnkube-controller/3.log" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.406420 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxwhb_c159725e-4c82-4474-96d9-211f7d8db47f/ovn-acl-logging/0.log" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.407112 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxwhb_c159725e-4c82-4474-96d9-211f7d8db47f/ovn-controller/0.log" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.407556 4972 generic.go:334] "Generic (PLEG): container finished" podID="c159725e-4c82-4474-96d9-211f7d8db47f" containerID="500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346" exitCode=0 Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.407585 4972 generic.go:334] "Generic (PLEG): container finished" podID="c159725e-4c82-4474-96d9-211f7d8db47f" containerID="1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3" exitCode=0 Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.407593 4972 generic.go:334] "Generic (PLEG): container finished" podID="c159725e-4c82-4474-96d9-211f7d8db47f" containerID="338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c" exitCode=0 Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.407600 4972 generic.go:334] "Generic (PLEG): container finished" podID="c159725e-4c82-4474-96d9-211f7d8db47f" containerID="7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343" exitCode=143 Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.407607 4972 generic.go:334] "Generic (PLEG): container finished" 
podID="c159725e-4c82-4474-96d9-211f7d8db47f" containerID="bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0" exitCode=143 Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.407638 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerDied","Data":"500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346"} Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.407690 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerDied","Data":"1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3"} Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.407706 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerDied","Data":"338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c"} Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.407720 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerDied","Data":"7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343"} Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.407734 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerDied","Data":"bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0"} Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.407754 4972 scope.go:117] "RemoveContainer" containerID="6e23d6219850069f682ce4b9af445532fdaaeb189b232f8e72a0d92b53c755ff" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.411385 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bgtmb_ff4929f7-ed2f-4332-af3c-31b2333bda3d/kube-multus/2.log" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.412106 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bgtmb_ff4929f7-ed2f-4332-af3c-31b2333bda3d/kube-multus/1.log" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.412165 4972 generic.go:334] "Generic (PLEG): container finished" podID="ff4929f7-ed2f-4332-af3c-31b2333bda3d" containerID="3969f599ad79be6d19471b6566e5f9148e3b59684d5ab5f5dd36490f3ad850ce" exitCode=2 Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.412249 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bgtmb" event={"ID":"ff4929f7-ed2f-4332-af3c-31b2333bda3d","Type":"ContainerDied","Data":"3969f599ad79be6d19471b6566e5f9148e3b59684d5ab5f5dd36490f3ad850ce"} Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.412876 4972 scope.go:117] "RemoveContainer" containerID="3969f599ad79be6d19471b6566e5f9148e3b59684d5ab5f5dd36490f3ad850ce" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.472017 4972 scope.go:117] "RemoveContainer" containerID="cb23c96662a648e35c4f92c6c695ad3b57dc5fb40f72efdad7a6a2910907a9ce" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.522720 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-7j7h4" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.560496 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxwhb_c159725e-4c82-4474-96d9-211f7d8db47f/ovn-acl-logging/0.log" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.561250 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxwhb_c159725e-4c82-4474-96d9-211f7d8db47f/ovn-controller/0.log" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.561588 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.617220 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gkmfx"] Nov 21 09:55:46 crc kubenswrapper[4972]: E1121 09:55:46.617545 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovnkube-controller" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.617568 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovnkube-controller" Nov 21 09:55:46 crc kubenswrapper[4972]: E1121 09:55:46.617582 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="kube-rbac-proxy-ovn-metrics" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.617590 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="kube-rbac-proxy-ovn-metrics" Nov 21 09:55:46 crc kubenswrapper[4972]: E1121 09:55:46.617603 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="kubecfg-setup" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.617611 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="kubecfg-setup" Nov 21 09:55:46 crc kubenswrapper[4972]: E1121 09:55:46.617660 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovn-controller" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.617669 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovn-controller" Nov 21 09:55:46 crc kubenswrapper[4972]: E1121 09:55:46.617680 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovnkube-controller" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.617687 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovnkube-controller" Nov 21 09:55:46 crc kubenswrapper[4972]: E1121 09:55:46.617696 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovnkube-controller" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.617703 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovnkube-controller" Nov 21 09:55:46 crc kubenswrapper[4972]: E1121 09:55:46.617715 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="northd" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 
09:55:46.617722 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="northd" Nov 21 09:55:46 crc kubenswrapper[4972]: E1121 09:55:46.617734 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="sbdb" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.617741 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="sbdb" Nov 21 09:55:46 crc kubenswrapper[4972]: E1121 09:55:46.617752 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f981c5ab-a86c-463f-9e6c-9e463aa4defd" containerName="storage" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.617760 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f981c5ab-a86c-463f-9e6c-9e463aa4defd" containerName="storage" Nov 21 09:55:46 crc kubenswrapper[4972]: E1121 09:55:46.617770 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="nbdb" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.617777 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="nbdb" Nov 21 09:55:46 crc kubenswrapper[4972]: E1121 09:55:46.617789 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="kube-rbac-proxy-node" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.617797 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="kube-rbac-proxy-node" Nov 21 09:55:46 crc kubenswrapper[4972]: E1121 09:55:46.617807 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovn-acl-logging" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.617813 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovn-acl-logging" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.617950 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="northd" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.617964 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f981c5ab-a86c-463f-9e6c-9e463aa4defd" containerName="storage" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.617973 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovn-acl-logging" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.617985 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="sbdb" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.617995 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovnkube-controller" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.618003 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovn-controller" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.618011 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="nbdb" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.618020 4972 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovnkube-controller" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.618029 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovnkube-controller" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.618038 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="kube-rbac-proxy-node" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.618048 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="kube-rbac-proxy-ovn-metrics" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.618058 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovnkube-controller" Nov 21 09:55:46 crc kubenswrapper[4972]: E1121 09:55:46.618166 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovnkube-controller" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.618177 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovnkube-controller" Nov 21 09:55:46 crc kubenswrapper[4972]: E1121 09:55:46.618186 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovnkube-controller" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.618193 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovnkube-controller" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.618312 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" containerName="ovnkube-controller" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.620199 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715041 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-run-openvswitch\") pod \"c159725e-4c82-4474-96d9-211f7d8db47f\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715112 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-kubelet\") pod \"c159725e-4c82-4474-96d9-211f7d8db47f\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715136 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-run-netns\") pod \"c159725e-4c82-4474-96d9-211f7d8db47f\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715160 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f981c5ab-a86c-463f-9e6c-9e463aa4defd-node-mnt\") pod \"f981c5ab-a86c-463f-9e6c-9e463aa4defd\" (UID: \"f981c5ab-a86c-463f-9e6c-9e463aa4defd\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715179 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-slash\") pod \"c159725e-4c82-4474-96d9-211f7d8db47f\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715227 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-run-ovn\") pod \"c159725e-4c82-4474-96d9-211f7d8db47f\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715269 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-log-socket\") pod \"c159725e-4c82-4474-96d9-211f7d8db47f\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715297 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c159725e-4c82-4474-96d9-211f7d8db47f\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715319 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-systemd-units\") pod \"c159725e-4c82-4474-96d9-211f7d8db47f\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715347 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-run-ovn-kubernetes\") pod 
\"c159725e-4c82-4474-96d9-211f7d8db47f\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715367 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-cni-bin\") pod \"c159725e-4c82-4474-96d9-211f7d8db47f\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715392 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-var-lib-openvswitch\") pod \"c159725e-4c82-4474-96d9-211f7d8db47f\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715428 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mksk\" (UniqueName: \"kubernetes.io/projected/f981c5ab-a86c-463f-9e6c-9e463aa4defd-kube-api-access-7mksk\") pod \"f981c5ab-a86c-463f-9e6c-9e463aa4defd\" (UID: \"f981c5ab-a86c-463f-9e6c-9e463aa4defd\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715454 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f981c5ab-a86c-463f-9e6c-9e463aa4defd-crc-storage\") pod \"f981c5ab-a86c-463f-9e6c-9e463aa4defd\" (UID: \"f981c5ab-a86c-463f-9e6c-9e463aa4defd\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715484 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c159725e-4c82-4474-96d9-211f7d8db47f-ovnkube-config\") pod \"c159725e-4c82-4474-96d9-211f7d8db47f\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715510 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c159725e-4c82-4474-96d9-211f7d8db47f-env-overrides\") pod \"c159725e-4c82-4474-96d9-211f7d8db47f\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715533 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-run-systemd\") pod \"c159725e-4c82-4474-96d9-211f7d8db47f\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715560 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-etc-openvswitch\") pod \"c159725e-4c82-4474-96d9-211f7d8db47f\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715588 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c159725e-4c82-4474-96d9-211f7d8db47f-ovn-node-metrics-cert\") pod \"c159725e-4c82-4474-96d9-211f7d8db47f\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715616 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8k7\" (UniqueName: 
\"kubernetes.io/projected/c159725e-4c82-4474-96d9-211f7d8db47f-kube-api-access-zg8k7\") pod \"c159725e-4c82-4474-96d9-211f7d8db47f\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715639 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c159725e-4c82-4474-96d9-211f7d8db47f-ovnkube-script-lib\") pod \"c159725e-4c82-4474-96d9-211f7d8db47f\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715675 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-cni-netd\") pod \"c159725e-4c82-4474-96d9-211f7d8db47f\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715778 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c159725e-4c82-4474-96d9-211f7d8db47f" (UID: "c159725e-4c82-4474-96d9-211f7d8db47f"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715821 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c159725e-4c82-4474-96d9-211f7d8db47f" (UID: "c159725e-4c82-4474-96d9-211f7d8db47f"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715861 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c159725e-4c82-4474-96d9-211f7d8db47f" (UID: "c159725e-4c82-4474-96d9-211f7d8db47f"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715881 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c159725e-4c82-4474-96d9-211f7d8db47f" (UID: "c159725e-4c82-4474-96d9-211f7d8db47f"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715906 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f981c5ab-a86c-463f-9e6c-9e463aa4defd-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "f981c5ab-a86c-463f-9e6c-9e463aa4defd" (UID: "f981c5ab-a86c-463f-9e6c-9e463aa4defd"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715929 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-slash" (OuterVolumeSpecName: "host-slash") pod "c159725e-4c82-4474-96d9-211f7d8db47f" (UID: "c159725e-4c82-4474-96d9-211f7d8db47f"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715950 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c159725e-4c82-4474-96d9-211f7d8db47f" (UID: "c159725e-4c82-4474-96d9-211f7d8db47f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715971 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-log-socket" (OuterVolumeSpecName: "log-socket") pod "c159725e-4c82-4474-96d9-211f7d8db47f" (UID: "c159725e-4c82-4474-96d9-211f7d8db47f"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.715995 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c159725e-4c82-4474-96d9-211f7d8db47f" (UID: "c159725e-4c82-4474-96d9-211f7d8db47f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.716021 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c159725e-4c82-4474-96d9-211f7d8db47f" (UID: "c159725e-4c82-4474-96d9-211f7d8db47f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.716042 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c159725e-4c82-4474-96d9-211f7d8db47f" (UID: "c159725e-4c82-4474-96d9-211f7d8db47f"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.716067 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c159725e-4c82-4474-96d9-211f7d8db47f" (UID: "c159725e-4c82-4474-96d9-211f7d8db47f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.716091 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c159725e-4c82-4474-96d9-211f7d8db47f" (UID: "c159725e-4c82-4474-96d9-211f7d8db47f"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.717336 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-node-log\") pod \"c159725e-4c82-4474-96d9-211f7d8db47f\" (UID: \"c159725e-4c82-4474-96d9-211f7d8db47f\") " Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.717791 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c159725e-4c82-4474-96d9-211f7d8db47f" (UID: "c159725e-4c82-4474-96d9-211f7d8db47f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718027 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-node-log" (OuterVolumeSpecName: "node-log") pod "c159725e-4c82-4474-96d9-211f7d8db47f" (UID: "c159725e-4c82-4474-96d9-211f7d8db47f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718287 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c159725e-4c82-4474-96d9-211f7d8db47f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c159725e-4c82-4474-96d9-211f7d8db47f" (UID: "c159725e-4c82-4474-96d9-211f7d8db47f"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718314 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c159725e-4c82-4474-96d9-211f7d8db47f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c159725e-4c82-4474-96d9-211f7d8db47f" (UID: "c159725e-4c82-4474-96d9-211f7d8db47f"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718344 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c159725e-4c82-4474-96d9-211f7d8db47f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c159725e-4c82-4474-96d9-211f7d8db47f" (UID: "c159725e-4c82-4474-96d9-211f7d8db47f"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718807 4972 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718825 4972 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718848 4972 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718856 4972 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718865 4972 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c159725e-4c82-4474-96d9-211f7d8db47f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718873 4972 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c159725e-4c82-4474-96d9-211f7d8db47f-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718880 4972 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718888 4972 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c159725e-4c82-4474-96d9-211f7d8db47f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718895 4972 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718904 4972 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-node-log\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718912 4972 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718919 4972 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718928 4972 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-run-netns\") on node \"crc\" DevicePath \"\"" 
Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718936 4972 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f981c5ab-a86c-463f-9e6c-9e463aa4defd-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718943 4972 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-slash\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718952 4972 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718961 4972 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-log-socket\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.718970 4972 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.722488 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c159725e-4c82-4474-96d9-211f7d8db47f-kube-api-access-zg8k7" (OuterVolumeSpecName: "kube-api-access-zg8k7") pod "c159725e-4c82-4474-96d9-211f7d8db47f" (UID: "c159725e-4c82-4474-96d9-211f7d8db47f"). InnerVolumeSpecName "kube-api-access-zg8k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.722679 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f981c5ab-a86c-463f-9e6c-9e463aa4defd-kube-api-access-7mksk" (OuterVolumeSpecName: "kube-api-access-7mksk") pod "f981c5ab-a86c-463f-9e6c-9e463aa4defd" (UID: "f981c5ab-a86c-463f-9e6c-9e463aa4defd"). InnerVolumeSpecName "kube-api-access-7mksk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.723305 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c159725e-4c82-4474-96d9-211f7d8db47f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c159725e-4c82-4474-96d9-211f7d8db47f" (UID: "c159725e-4c82-4474-96d9-211f7d8db47f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.729914 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f981c5ab-a86c-463f-9e6c-9e463aa4defd-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "f981c5ab-a86c-463f-9e6c-9e463aa4defd" (UID: "f981c5ab-a86c-463f-9e6c-9e463aa4defd"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.731771 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c159725e-4c82-4474-96d9-211f7d8db47f" (UID: "c159725e-4c82-4474-96d9-211f7d8db47f"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.819894 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-host-slash\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.819992 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-host-cni-netd\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820023 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-systemd-units\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820064 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820089 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-log-socket\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820114 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdrtr\" (UniqueName: \"kubernetes.io/projected/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-kube-api-access-hdrtr\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820152 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-var-lib-openvswitch\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820197 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-ovnkube-script-lib\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820295 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-env-overrides\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820330 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-run-systemd\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820354 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-host-cni-bin\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820380 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-host-run-ovn-kubernetes\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820402 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-host-kubelet\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820424 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-etc-openvswitch\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820450 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-node-log\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820511 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-host-run-netns\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820558 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-ovn-node-metrics-cert\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820593 4972 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-run-openvswitch\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820620 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-ovnkube-config\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820651 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-run-ovn\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820763 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mksk\" (UniqueName: \"kubernetes.io/projected/f981c5ab-a86c-463f-9e6c-9e463aa4defd-kube-api-access-7mksk\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820782 4972 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f981c5ab-a86c-463f-9e6c-9e463aa4defd-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820791 4972 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c159725e-4c82-4474-96d9-211f7d8db47f-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820802 4972 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c159725e-4c82-4474-96d9-211f7d8db47f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.820812 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg8k7\" (UniqueName: \"kubernetes.io/projected/c159725e-4c82-4474-96d9-211f7d8db47f-kube-api-access-zg8k7\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.921167 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-var-lib-openvswitch\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.921213 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-ovnkube-script-lib\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.921233 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-env-overrides\") pod \"ovnkube-node-gkmfx\" (UID: 
\"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.921253 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-run-systemd\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.921268 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-host-cni-bin\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.921283 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-host-run-ovn-kubernetes\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.921288 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-var-lib-openvswitch\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.921320 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-host-kubelet\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.921297 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-host-kubelet\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.921453 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-run-systemd\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.921482 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-host-cni-bin\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.921528 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-host-run-ovn-kubernetes\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc 
kubenswrapper[4972]: I1121 09:55:46.921642 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-node-log\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.921684 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-node-log\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.921735 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-etc-openvswitch\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.922323 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-ovn-node-metrics-cert\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.922348 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-host-run-netns\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.922369 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-run-openvswitch\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.922390 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-ovnkube-config\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.922412 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-run-ovn\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.922447 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-host-slash\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.922468 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-host-cni-netd\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.922491 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-systemd-units\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.922527 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.922548 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-log-socket\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.922569 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdrtr\" (UniqueName: \"kubernetes.io/projected/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-kube-api-access-hdrtr\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.921852 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-etc-openvswitch\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.922950 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-run-ovn\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.922981 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-host-slash\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.923012 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-host-cni-netd\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.923038 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-systemd-units\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.923063 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.923123 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-log-socket\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.923155 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-host-run-netns\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.922288 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-ovnkube-script-lib\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.923610 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-ovnkube-config\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.922017 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-env-overrides\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.923660 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-run-openvswitch\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.926994 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-ovn-node-metrics-cert\") pod \"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:46 crc kubenswrapper[4972]: I1121 09:55:46.944062 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdrtr\" (UniqueName: \"kubernetes.io/projected/bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4-kube-api-access-hdrtr\") pod 
\"ovnkube-node-gkmfx\" (UID: \"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4\") " pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.235579 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:47 crc kubenswrapper[4972]: W1121 09:55:47.259158 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbcb552c_2818_4dd6_a4ff_4a7a9c5339f4.slice/crio-e1d3b245711eee092de1865fd29aeefdd04d88202bd51b94327f32be18b1c932 WatchSource:0}: Error finding container e1d3b245711eee092de1865fd29aeefdd04d88202bd51b94327f32be18b1c932: Status 404 returned error can't find the container with id e1d3b245711eee092de1865fd29aeefdd04d88202bd51b94327f32be18b1c932 Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.423208 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bgtmb_ff4929f7-ed2f-4332-af3c-31b2333bda3d/kube-multus/2.log" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.423625 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bgtmb" event={"ID":"ff4929f7-ed2f-4332-af3c-31b2333bda3d","Type":"ContainerStarted","Data":"7f2ae200ae115937f3f256affab1b71559f6dc0785c1bb5be9feee86e7430db4"} Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.426487 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" event={"ID":"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4","Type":"ContainerStarted","Data":"e1d3b245711eee092de1865fd29aeefdd04d88202bd51b94327f32be18b1c932"} Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.431039 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-7j7h4" event={"ID":"f981c5ab-a86c-463f-9e6c-9e463aa4defd","Type":"ContainerDied","Data":"637b22c48af81a4990412dd9d3cda89d9932f45783cfd3718db2369e65a63223"} Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.431083 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="637b22c48af81a4990412dd9d3cda89d9932f45783cfd3718db2369e65a63223" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.431109 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-7j7h4" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.438950 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxwhb_c159725e-4c82-4474-96d9-211f7d8db47f/ovn-acl-logging/0.log" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.440407 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxwhb_c159725e-4c82-4474-96d9-211f7d8db47f/ovn-controller/0.log" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.441261 4972 generic.go:334] "Generic (PLEG): container finished" podID="c159725e-4c82-4474-96d9-211f7d8db47f" containerID="58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd" exitCode=0 Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.441318 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.441352 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerDied","Data":"58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd"} Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.441320 4972 generic.go:334] "Generic (PLEG): container finished" podID="c159725e-4c82-4474-96d9-211f7d8db47f" containerID="d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c" exitCode=0 Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.441450 4972 scope.go:117] "RemoveContainer" containerID="500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.441427 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerDied","Data":"d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c"} Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.441592 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerDied","Data":"5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3"} Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.450691 4972 generic.go:334] "Generic (PLEG): container finished" podID="c159725e-4c82-4474-96d9-211f7d8db47f" containerID="5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3" exitCode=0 Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.450764 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxwhb" event={"ID":"c159725e-4c82-4474-96d9-211f7d8db47f","Type":"ContainerDied","Data":"71839c1a071543e2eebc4b541e695c9c12c5c08ecd82d0aa48cbe9ac34b02581"} Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.495559 4972 scope.go:117] "RemoveContainer" containerID="58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.508763 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bxwhb"] Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.513415 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bxwhb"] Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.522820 4972 scope.go:117] "RemoveContainer" containerID="d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.537207 4972 scope.go:117] "RemoveContainer" containerID="5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.555412 4972 scope.go:117] "RemoveContainer" containerID="1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.571530 4972 scope.go:117] "RemoveContainer" containerID="338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.583919 4972 scope.go:117] "RemoveContainer" containerID="7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.609294 4972 scope.go:117] "RemoveContainer" 
containerID="bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.630441 4972 scope.go:117] "RemoveContainer" containerID="93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.656147 4972 scope.go:117] "RemoveContainer" containerID="500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346" Nov 21 09:55:47 crc kubenswrapper[4972]: E1121 09:55:47.656735 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346\": container with ID starting with 500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346 not found: ID does not exist" containerID="500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.656782 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346"} err="failed to get container status \"500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346\": rpc error: code = NotFound desc = could not find container \"500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346\": container with ID starting with 500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346 not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.656816 4972 scope.go:117] "RemoveContainer" containerID="58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd" Nov 21 09:55:47 crc kubenswrapper[4972]: E1121 09:55:47.657229 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\": container with ID starting with 58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd not found: ID does not exist" containerID="58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.657271 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd"} err="failed to get container status \"58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\": rpc error: code = NotFound desc = could not find container \"58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\": container with ID starting with 58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.657303 4972 scope.go:117] "RemoveContainer" containerID="d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c" Nov 21 09:55:47 crc kubenswrapper[4972]: E1121 09:55:47.657590 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\": container with ID starting with d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c not found: ID does not exist" containerID="d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.657631 4972 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c"} err="failed to get container status \"d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\": rpc error: code = NotFound desc = could not find container \"d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\": container with ID starting with d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.657655 4972 scope.go:117] "RemoveContainer" containerID="5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3" Nov 21 09:55:47 crc kubenswrapper[4972]: E1121 09:55:47.658302 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\": container with ID starting with 5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3 not found: ID does not exist" containerID="5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.658385 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3"} err="failed to get container status \"5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\": rpc error: code = NotFound desc = could not find container \"5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\": container with ID starting with 5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3 not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.658420 4972 scope.go:117] "RemoveContainer" containerID="1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3" Nov 21 09:55:47 crc kubenswrapper[4972]: E1121 09:55:47.658992 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\": container with ID starting with 1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3 not found: ID does not exist" containerID="1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.659047 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3"} err="failed to get container status \"1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\": rpc error: code = NotFound desc = could not find container \"1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\": container with ID starting with 1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3 not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.659096 4972 scope.go:117] "RemoveContainer" containerID="338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c" Nov 21 09:55:47 crc kubenswrapper[4972]: E1121 09:55:47.659605 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\": container with ID starting with 338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c not found: ID does not exist" 
containerID="338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.659668 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c"} err="failed to get container status \"338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\": rpc error: code = NotFound desc = could not find container \"338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\": container with ID starting with 338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.659705 4972 scope.go:117] "RemoveContainer" containerID="7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343" Nov 21 09:55:47 crc kubenswrapper[4972]: E1121 09:55:47.660300 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\": container with ID starting with 7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343 not found: ID does not exist" containerID="7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.660357 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343"} err="failed to get container status \"7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\": rpc error: code = NotFound desc = could not find container \"7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\": container with ID starting with 7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343 not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.660394 4972 scope.go:117] "RemoveContainer" containerID="bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0" Nov 21 09:55:47 crc kubenswrapper[4972]: E1121 09:55:47.660935 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\": container with ID starting with bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0 not found: ID does not exist" containerID="bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.660987 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0"} err="failed to get container status \"bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\": rpc error: code = NotFound desc = could not find container \"bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\": container with ID starting with bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0 not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.661022 4972 scope.go:117] "RemoveContainer" containerID="93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903" Nov 21 09:55:47 crc kubenswrapper[4972]: E1121 09:55:47.661529 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\": container with ID starting with 93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903 not found: ID does not exist" containerID="93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.661571 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903"} err="failed to get container status \"93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\": rpc error: code = NotFound desc = could not find container \"93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\": container with ID starting with 93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903 not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.661604 4972 scope.go:117] "RemoveContainer" containerID="500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.662118 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346"} err="failed to get container status \"500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346\": rpc error: code = NotFound desc = could not find container \"500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346\": container with ID starting with 500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346 not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.662181 4972 scope.go:117] "RemoveContainer" containerID="58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.662771 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd"} err="failed to get container status \"58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\": rpc error: code = NotFound desc = could not find container \"58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\": container with ID starting with 58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.662821 4972 scope.go:117] "RemoveContainer" containerID="d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.663269 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c"} err="failed to get container status \"d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\": rpc error: code = NotFound desc = could not find container \"d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\": container with ID starting with d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.663305 4972 scope.go:117] "RemoveContainer" containerID="5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.663802 4972 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3"} err="failed to get container status \"5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\": rpc error: code = NotFound desc = could not find container \"5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\": container with ID starting with 5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3 not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.663863 4972 scope.go:117] "RemoveContainer" containerID="1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.664248 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3"} err="failed to get container status \"1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\": rpc error: code = NotFound desc = could not find container \"1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\": container with ID starting with 1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3 not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.664289 4972 scope.go:117] "RemoveContainer" containerID="338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.664618 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c"} err="failed to get container status \"338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\": rpc error: code = NotFound desc = could not find container \"338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\": container with ID starting with 338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.664658 4972 scope.go:117] "RemoveContainer" containerID="7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.665130 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343"} err="failed to get container status \"7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\": rpc error: code = NotFound desc = could not find container \"7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\": container with ID starting with 7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343 not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.665164 4972 scope.go:117] "RemoveContainer" containerID="bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.665447 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0"} err="failed to get container status \"bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\": rpc error: code = NotFound desc = could not find container \"bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\": container with ID starting with bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0 not found: ID does not exist" Nov 
21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.665480 4972 scope.go:117] "RemoveContainer" containerID="93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.665966 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903"} err="failed to get container status \"93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\": rpc error: code = NotFound desc = could not find container \"93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\": container with ID starting with 93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903 not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.666017 4972 scope.go:117] "RemoveContainer" containerID="500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.666321 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346"} err="failed to get container status \"500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346\": rpc error: code = NotFound desc = could not find container \"500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346\": container with ID starting with 500dbdac7adb5ebcf15fdeab4d5d8d8a811c43e0c442fc20606534f7a1151346 not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.666357 4972 scope.go:117] "RemoveContainer" containerID="58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.666680 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd"} err="failed to get container status \"58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\": rpc error: code = NotFound desc = could not find container \"58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd\": container with ID starting with 58d36866ccda51d1e158488051b23d0b87a9a551763f2e321ae25ebaa64975dd not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.666750 4972 scope.go:117] "RemoveContainer" containerID="d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.667126 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c"} err="failed to get container status \"d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\": rpc error: code = NotFound desc = could not find container \"d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c\": container with ID starting with d757bdf5005192d3e2a1eccf4ef4467058b65a210ecac17dd46945f5a408275c not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.667160 4972 scope.go:117] "RemoveContainer" containerID="5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.667573 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3"} err="failed to get container status 
\"5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\": rpc error: code = NotFound desc = could not find container \"5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3\": container with ID starting with 5536e3730b059f16c61833761992978257e2677e240d1282c3e2e73e2166a7f3 not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.667605 4972 scope.go:117] "RemoveContainer" containerID="1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.667986 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3"} err="failed to get container status \"1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\": rpc error: code = NotFound desc = could not find container \"1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3\": container with ID starting with 1fe1c828cefbe5268c3bdfa85e1dc901f04fe1df73e30262f94325e67540aeb3 not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.668025 4972 scope.go:117] "RemoveContainer" containerID="338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.668404 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c"} err="failed to get container status \"338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\": rpc error: code = NotFound desc = could not find container \"338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c\": container with ID starting with 338acd6e113f6dc8a50742d161a3b6a2986ce06de1648b7d5311563b0f8c4e7c not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.668526 4972 scope.go:117] "RemoveContainer" containerID="7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.668851 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343"} err="failed to get container status \"7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\": rpc error: code = NotFound desc = could not find container \"7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343\": container with ID starting with 7202ef58a10d88c1c060a8f6b0f7df122a9160807bbf9b83b21fd677e87ae343 not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.668886 4972 scope.go:117] "RemoveContainer" containerID="bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.669184 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0"} err="failed to get container status \"bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\": rpc error: code = NotFound desc = could not find container \"bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0\": container with ID starting with bf46d4a04bc635d1429212316c48049ee14d0cb50afde6dcb2014774a048d0f0 not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.669218 4972 scope.go:117] "RemoveContainer" 
containerID="93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.669507 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903"} err="failed to get container status \"93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\": rpc error: code = NotFound desc = could not find container \"93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903\": container with ID starting with 93d8f3d3d08a58d2db61d328e01afb7d9d79d7a67c877b1a2b117adcb3d64903 not found: ID does not exist" Nov 21 09:55:47 crc kubenswrapper[4972]: I1121 09:55:47.771485 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c159725e-4c82-4474-96d9-211f7d8db47f" path="/var/lib/kubelet/pods/c159725e-4c82-4474-96d9-211f7d8db47f/volumes" Nov 21 09:55:48 crc kubenswrapper[4972]: I1121 09:55:48.458352 4972 generic.go:334] "Generic (PLEG): container finished" podID="bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4" containerID="37f3586abf57ca2c269dcfc895f831272113c4ecf8baede0de0d00597dc76fa3" exitCode=0 Nov 21 09:55:48 crc kubenswrapper[4972]: I1121 09:55:48.458399 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" event={"ID":"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4","Type":"ContainerDied","Data":"37f3586abf57ca2c269dcfc895f831272113c4ecf8baede0de0d00597dc76fa3"} Nov 21 09:55:49 crc kubenswrapper[4972]: I1121 09:55:49.471531 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" event={"ID":"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4","Type":"ContainerStarted","Data":"5781c6665274ea1664815df5bb297d24538c0c6616016c8ad88c05da64ca6acc"} Nov 21 09:55:49 crc kubenswrapper[4972]: I1121 09:55:49.472178 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" event={"ID":"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4","Type":"ContainerStarted","Data":"8518ad28dd7e742b1dd701bbb9ff9cd82a81fab97a319b161114c35dc3e3c729"} Nov 21 09:55:49 crc kubenswrapper[4972]: I1121 09:55:49.472199 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" event={"ID":"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4","Type":"ContainerStarted","Data":"a1eb36eee364775726257405e925928b55377c820392ad1bbdf03e121822db1a"} Nov 21 09:55:49 crc kubenswrapper[4972]: I1121 09:55:49.472218 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" event={"ID":"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4","Type":"ContainerStarted","Data":"9d7d995a69bc548bee5589f8ac6b6f34729b50575bdd9df37a00a6a1927c9bf1"} Nov 21 09:55:49 crc kubenswrapper[4972]: I1121 09:55:49.472235 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" event={"ID":"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4","Type":"ContainerStarted","Data":"fd6a6e4c152cb920ab438892e10188f044dfcf58cd19bc4937748616ef661887"} Nov 21 09:55:49 crc kubenswrapper[4972]: I1121 09:55:49.472251 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" event={"ID":"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4","Type":"ContainerStarted","Data":"ca09a0fcfc3a111d2c2481eef59ccd52630dd13f4c131e9212af785f49f36c98"} Nov 21 09:55:49 crc kubenswrapper[4972]: I1121 09:55:49.877207 4972 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-gctmq"] Nov 21 09:55:49 crc kubenswrapper[4972]: I1121 09:55:49.878428 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gctmq" Nov 21 09:55:49 crc kubenswrapper[4972]: I1121 09:55:49.957212 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d61f4897-9d60-4d41-aa1a-8b16d272309b-catalog-content\") pod \"certified-operators-gctmq\" (UID: \"d61f4897-9d60-4d41-aa1a-8b16d272309b\") " pod="openshift-marketplace/certified-operators-gctmq" Nov 21 09:55:49 crc kubenswrapper[4972]: I1121 09:55:49.957321 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdwx7\" (UniqueName: \"kubernetes.io/projected/d61f4897-9d60-4d41-aa1a-8b16d272309b-kube-api-access-vdwx7\") pod \"certified-operators-gctmq\" (UID: \"d61f4897-9d60-4d41-aa1a-8b16d272309b\") " pod="openshift-marketplace/certified-operators-gctmq" Nov 21 09:55:49 crc kubenswrapper[4972]: I1121 09:55:49.957365 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d61f4897-9d60-4d41-aa1a-8b16d272309b-utilities\") pod \"certified-operators-gctmq\" (UID: \"d61f4897-9d60-4d41-aa1a-8b16d272309b\") " pod="openshift-marketplace/certified-operators-gctmq" Nov 21 09:55:50 crc kubenswrapper[4972]: I1121 09:55:50.058194 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdwx7\" (UniqueName: \"kubernetes.io/projected/d61f4897-9d60-4d41-aa1a-8b16d272309b-kube-api-access-vdwx7\") pod \"certified-operators-gctmq\" (UID: \"d61f4897-9d60-4d41-aa1a-8b16d272309b\") " pod="openshift-marketplace/certified-operators-gctmq" Nov 21 09:55:50 crc kubenswrapper[4972]: I1121 09:55:50.058247 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d61f4897-9d60-4d41-aa1a-8b16d272309b-utilities\") pod \"certified-operators-gctmq\" (UID: \"d61f4897-9d60-4d41-aa1a-8b16d272309b\") " pod="openshift-marketplace/certified-operators-gctmq" Nov 21 09:55:50 crc kubenswrapper[4972]: I1121 09:55:50.058304 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d61f4897-9d60-4d41-aa1a-8b16d272309b-catalog-content\") pod \"certified-operators-gctmq\" (UID: \"d61f4897-9d60-4d41-aa1a-8b16d272309b\") " pod="openshift-marketplace/certified-operators-gctmq" Nov 21 09:55:50 crc kubenswrapper[4972]: I1121 09:55:50.058775 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d61f4897-9d60-4d41-aa1a-8b16d272309b-utilities\") pod \"certified-operators-gctmq\" (UID: \"d61f4897-9d60-4d41-aa1a-8b16d272309b\") " pod="openshift-marketplace/certified-operators-gctmq" Nov 21 09:55:50 crc kubenswrapper[4972]: I1121 09:55:50.058795 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d61f4897-9d60-4d41-aa1a-8b16d272309b-catalog-content\") pod \"certified-operators-gctmq\" (UID: \"d61f4897-9d60-4d41-aa1a-8b16d272309b\") " pod="openshift-marketplace/certified-operators-gctmq" Nov 21 09:55:50 crc kubenswrapper[4972]: I1121 09:55:50.075326 4972 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/community-operators-6fm7r"] Nov 21 09:55:50 crc kubenswrapper[4972]: I1121 09:55:50.076471 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6fm7r" Nov 21 09:55:50 crc kubenswrapper[4972]: I1121 09:55:50.092445 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdwx7\" (UniqueName: \"kubernetes.io/projected/d61f4897-9d60-4d41-aa1a-8b16d272309b-kube-api-access-vdwx7\") pod \"certified-operators-gctmq\" (UID: \"d61f4897-9d60-4d41-aa1a-8b16d272309b\") " pod="openshift-marketplace/certified-operators-gctmq" Nov 21 09:55:50 crc kubenswrapper[4972]: I1121 09:55:50.159421 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/936b8a22-cc08-40e6-9b8e-78414812c493-utilities\") pod \"community-operators-6fm7r\" (UID: \"936b8a22-cc08-40e6-9b8e-78414812c493\") " pod="openshift-marketplace/community-operators-6fm7r" Nov 21 09:55:50 crc kubenswrapper[4972]: I1121 09:55:50.159515 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/936b8a22-cc08-40e6-9b8e-78414812c493-catalog-content\") pod \"community-operators-6fm7r\" (UID: \"936b8a22-cc08-40e6-9b8e-78414812c493\") " pod="openshift-marketplace/community-operators-6fm7r" Nov 21 09:55:50 crc kubenswrapper[4972]: I1121 09:55:50.159614 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpcpp\" (UniqueName: \"kubernetes.io/projected/936b8a22-cc08-40e6-9b8e-78414812c493-kube-api-access-gpcpp\") pod \"community-operators-6fm7r\" (UID: \"936b8a22-cc08-40e6-9b8e-78414812c493\") " pod="openshift-marketplace/community-operators-6fm7r" Nov 21 09:55:50 crc kubenswrapper[4972]: I1121 09:55:50.201757 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gctmq" Nov 21 09:55:50 crc kubenswrapper[4972]: E1121 09:55:50.230983 4972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-gctmq_openshift-marketplace_d61f4897-9d60-4d41-aa1a-8b16d272309b_0(ee1df18465012f84aa0c7c33feda4de2dc07031b213880818105a24741eb7564): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 09:55:50 crc kubenswrapper[4972]: E1121 09:55:50.231062 4972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-gctmq_openshift-marketplace_d61f4897-9d60-4d41-aa1a-8b16d272309b_0(ee1df18465012f84aa0c7c33feda4de2dc07031b213880818105a24741eb7564): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-gctmq" Nov 21 09:55:50 crc kubenswrapper[4972]: E1121 09:55:50.231086 4972 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-gctmq_openshift-marketplace_d61f4897-9d60-4d41-aa1a-8b16d272309b_0(ee1df18465012f84aa0c7c33feda4de2dc07031b213880818105a24741eb7564): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-gctmq" Nov 21 09:55:50 crc kubenswrapper[4972]: E1121 09:55:50.231143 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"certified-operators-gctmq_openshift-marketplace(d61f4897-9d60-4d41-aa1a-8b16d272309b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"certified-operators-gctmq_openshift-marketplace(d61f4897-9d60-4d41-aa1a-8b16d272309b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-gctmq_openshift-marketplace_d61f4897-9d60-4d41-aa1a-8b16d272309b_0(ee1df18465012f84aa0c7c33feda4de2dc07031b213880818105a24741eb7564): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/certified-operators-gctmq" podUID="d61f4897-9d60-4d41-aa1a-8b16d272309b" Nov 21 09:55:50 crc kubenswrapper[4972]: I1121 09:55:50.260334 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpcpp\" (UniqueName: \"kubernetes.io/projected/936b8a22-cc08-40e6-9b8e-78414812c493-kube-api-access-gpcpp\") pod \"community-operators-6fm7r\" (UID: \"936b8a22-cc08-40e6-9b8e-78414812c493\") " pod="openshift-marketplace/community-operators-6fm7r" Nov 21 09:55:50 crc kubenswrapper[4972]: I1121 09:55:50.260407 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/936b8a22-cc08-40e6-9b8e-78414812c493-utilities\") pod \"community-operators-6fm7r\" (UID: \"936b8a22-cc08-40e6-9b8e-78414812c493\") " pod="openshift-marketplace/community-operators-6fm7r" Nov 21 09:55:50 crc kubenswrapper[4972]: I1121 09:55:50.260445 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/936b8a22-cc08-40e6-9b8e-78414812c493-catalog-content\") pod \"community-operators-6fm7r\" (UID: \"936b8a22-cc08-40e6-9b8e-78414812c493\") " pod="openshift-marketplace/community-operators-6fm7r" Nov 21 09:55:50 crc kubenswrapper[4972]: I1121 09:55:50.260969 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/936b8a22-cc08-40e6-9b8e-78414812c493-utilities\") pod \"community-operators-6fm7r\" (UID: \"936b8a22-cc08-40e6-9b8e-78414812c493\") " pod="openshift-marketplace/community-operators-6fm7r" Nov 21 09:55:50 crc kubenswrapper[4972]: I1121 09:55:50.261000 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/936b8a22-cc08-40e6-9b8e-78414812c493-catalog-content\") pod \"community-operators-6fm7r\" (UID: \"936b8a22-cc08-40e6-9b8e-78414812c493\") " pod="openshift-marketplace/community-operators-6fm7r" Nov 21 09:55:50 crc kubenswrapper[4972]: I1121 09:55:50.279583 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpcpp\" (UniqueName: \"kubernetes.io/projected/936b8a22-cc08-40e6-9b8e-78414812c493-kube-api-access-gpcpp\") pod \"community-operators-6fm7r\" (UID: \"936b8a22-cc08-40e6-9b8e-78414812c493\") " pod="openshift-marketplace/community-operators-6fm7r" Nov 21 09:55:50 crc kubenswrapper[4972]: I1121 09:55:50.414337 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6fm7r" Nov 21 09:55:50 crc kubenswrapper[4972]: E1121 09:55:50.435978 4972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-6fm7r_openshift-marketplace_936b8a22-cc08-40e6-9b8e-78414812c493_0(cb146777ee8119a3a9591ee0ddece10847a40c401e7d2404df257d1984304940): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 09:55:50 crc kubenswrapper[4972]: E1121 09:55:50.436044 4972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-6fm7r_openshift-marketplace_936b8a22-cc08-40e6-9b8e-78414812c493_0(cb146777ee8119a3a9591ee0ddece10847a40c401e7d2404df257d1984304940): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-6fm7r" Nov 21 09:55:50 crc kubenswrapper[4972]: E1121 09:55:50.436065 4972 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-6fm7r_openshift-marketplace_936b8a22-cc08-40e6-9b8e-78414812c493_0(cb146777ee8119a3a9591ee0ddece10847a40c401e7d2404df257d1984304940): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-6fm7r" Nov 21 09:55:50 crc kubenswrapper[4972]: E1121 09:55:50.436112 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"community-operators-6fm7r_openshift-marketplace(936b8a22-cc08-40e6-9b8e-78414812c493)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"community-operators-6fm7r_openshift-marketplace(936b8a22-cc08-40e6-9b8e-78414812c493)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-6fm7r_openshift-marketplace_936b8a22-cc08-40e6-9b8e-78414812c493_0(cb146777ee8119a3a9591ee0ddece10847a40c401e7d2404df257d1984304940): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/community-operators-6fm7r" podUID="936b8a22-cc08-40e6-9b8e-78414812c493" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.033367 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-85d8m"] Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.034047 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-85d8m" podUID="f6f0334e-e5ea-429a-9b63-7a178a3d7c64" containerName="registry-server" containerID="cri-o://d2a2b2b39ddbfed79adfee3dd01e5ff9bd7723375a5aabf76dbffcecd95ee69a" gracePeriod=30 Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.043044 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gctmq"] Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.043156 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gctmq" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.050530 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6fm7r"] Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.050640 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6fm7r" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.055257 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vj4wg"] Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.055477 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vj4wg" podUID="efbc5e5e-d261-4f3b-b90b-febd39de0327" containerName="registry-server" containerID="cri-o://d7427fc0cb0a7be625f24adebdc2cdb70039b44b47b9e999acf4ef3e9445e021" gracePeriod=30 Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.056882 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gctmq" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.063575 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6fm7r" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.071211 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tlh2t"] Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.071645 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-tlh2t" podUID="a2f7c374-4f03-452f-aaa2-a3ded791d552" containerName="marketplace-operator" containerID="cri-o://309db296985e6199894874dbdf99862f1b166a00f6c9201bcbb35d7864adfdb8" gracePeriod=30 Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.080238 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d61f4897-9d60-4d41-aa1a-8b16d272309b-utilities\") pod \"d61f4897-9d60-4d41-aa1a-8b16d272309b\" (UID: \"d61f4897-9d60-4d41-aa1a-8b16d272309b\") " Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.080293 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdwx7\" (UniqueName: \"kubernetes.io/projected/d61f4897-9d60-4d41-aa1a-8b16d272309b-kube-api-access-vdwx7\") pod \"d61f4897-9d60-4d41-aa1a-8b16d272309b\" (UID: \"d61f4897-9d60-4d41-aa1a-8b16d272309b\") " Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.080247 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-675hp"] Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.080330 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d61f4897-9d60-4d41-aa1a-8b16d272309b-catalog-content\") pod \"d61f4897-9d60-4d41-aa1a-8b16d272309b\" (UID: \"d61f4897-9d60-4d41-aa1a-8b16d272309b\") " Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.080358 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/936b8a22-cc08-40e6-9b8e-78414812c493-catalog-content\") pod \"936b8a22-cc08-40e6-9b8e-78414812c493\" (UID: \"936b8a22-cc08-40e6-9b8e-78414812c493\") " Nov 21 
09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.080410 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/936b8a22-cc08-40e6-9b8e-78414812c493-utilities\") pod \"936b8a22-cc08-40e6-9b8e-78414812c493\" (UID: \"936b8a22-cc08-40e6-9b8e-78414812c493\") " Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.080475 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpcpp\" (UniqueName: \"kubernetes.io/projected/936b8a22-cc08-40e6-9b8e-78414812c493-kube-api-access-gpcpp\") pod \"936b8a22-cc08-40e6-9b8e-78414812c493\" (UID: \"936b8a22-cc08-40e6-9b8e-78414812c493\") " Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.080586 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d61f4897-9d60-4d41-aa1a-8b16d272309b-utilities" (OuterVolumeSpecName: "utilities") pod "d61f4897-9d60-4d41-aa1a-8b16d272309b" (UID: "d61f4897-9d60-4d41-aa1a-8b16d272309b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.080735 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d61f4897-9d60-4d41-aa1a-8b16d272309b-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.080786 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-675hp" podUID="608f224f-2fba-44cb-a254-54e0bf1b64ee" containerName="registry-server" containerID="cri-o://5acdee8f981fac5dc192acdfb2c97c99a59bd050734e91a71bdd2134df96052c" gracePeriod=30 Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.081059 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/936b8a22-cc08-40e6-9b8e-78414812c493-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "936b8a22-cc08-40e6-9b8e-78414812c493" (UID: "936b8a22-cc08-40e6-9b8e-78414812c493"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.081258 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/936b8a22-cc08-40e6-9b8e-78414812c493-utilities" (OuterVolumeSpecName: "utilities") pod "936b8a22-cc08-40e6-9b8e-78414812c493" (UID: "936b8a22-cc08-40e6-9b8e-78414812c493"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.081467 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d61f4897-9d60-4d41-aa1a-8b16d272309b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d61f4897-9d60-4d41-aa1a-8b16d272309b" (UID: "d61f4897-9d60-4d41-aa1a-8b16d272309b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.088513 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n9p8z"] Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.088817 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n9p8z" podUID="30ee0fd4-14ae-4119-8c87-0ac7e529630a" containerName="registry-server" containerID="cri-o://6ef5309fc1625c7e7db65fa613e4e7874aeb4c2c0df70f5961c0ca3f968ae722" gracePeriod=30 Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.095077 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2wpdc"] Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.100106 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.101096 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/936b8a22-cc08-40e6-9b8e-78414812c493-kube-api-access-gpcpp" (OuterVolumeSpecName: "kube-api-access-gpcpp") pod "936b8a22-cc08-40e6-9b8e-78414812c493" (UID: "936b8a22-cc08-40e6-9b8e-78414812c493"). InnerVolumeSpecName "kube-api-access-gpcpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.111217 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d61f4897-9d60-4d41-aa1a-8b16d272309b-kube-api-access-vdwx7" (OuterVolumeSpecName: "kube-api-access-vdwx7") pod "d61f4897-9d60-4d41-aa1a-8b16d272309b" (UID: "d61f4897-9d60-4d41-aa1a-8b16d272309b"). InnerVolumeSpecName "kube-api-access-vdwx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.181821 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdwx7\" (UniqueName: \"kubernetes.io/projected/d61f4897-9d60-4d41-aa1a-8b16d272309b-kube-api-access-vdwx7\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.182134 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d61f4897-9d60-4d41-aa1a-8b16d272309b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.182145 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/936b8a22-cc08-40e6-9b8e-78414812c493-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.182154 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/936b8a22-cc08-40e6-9b8e-78414812c493-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.182163 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpcpp\" (UniqueName: \"kubernetes.io/projected/936b8a22-cc08-40e6-9b8e-78414812c493-kube-api-access-gpcpp\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.273997 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ggnjc"] Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.277874 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.283783 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlfpd\" (UniqueName: \"kubernetes.io/projected/586db52b-8a9d-4052-a47c-bea7e440b977-kube-api-access-vlfpd\") pod \"redhat-operators-ggnjc\" (UID: \"586db52b-8a9d-4052-a47c-bea7e440b977\") " pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.283884 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e85a9ad-624e-40f7-9084-3be164ba8fb2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2wpdc\" (UID: \"6e85a9ad-624e-40f7-9084-3be164ba8fb2\") " pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.283998 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/586db52b-8a9d-4052-a47c-bea7e440b977-utilities\") pod \"redhat-operators-ggnjc\" (UID: \"586db52b-8a9d-4052-a47c-bea7e440b977\") " pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.284048 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/586db52b-8a9d-4052-a47c-bea7e440b977-catalog-content\") pod \"redhat-operators-ggnjc\" (UID: \"586db52b-8a9d-4052-a47c-bea7e440b977\") " pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.284291 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjj27\" (UniqueName: \"kubernetes.io/projected/6e85a9ad-624e-40f7-9084-3be164ba8fb2-kube-api-access-xjj27\") pod \"marketplace-operator-79b997595-2wpdc\" (UID: \"6e85a9ad-624e-40f7-9084-3be164ba8fb2\") " pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.284336 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e85a9ad-624e-40f7-9084-3be164ba8fb2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2wpdc\" (UID: \"6e85a9ad-624e-40f7-9084-3be164ba8fb2\") " pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.384960 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/586db52b-8a9d-4052-a47c-bea7e440b977-utilities\") pod \"redhat-operators-ggnjc\" (UID: \"586db52b-8a9d-4052-a47c-bea7e440b977\") " pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.385006 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/586db52b-8a9d-4052-a47c-bea7e440b977-catalog-content\") pod \"redhat-operators-ggnjc\" (UID: \"586db52b-8a9d-4052-a47c-bea7e440b977\") " pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.385048 4972 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xjj27\" (UniqueName: \"kubernetes.io/projected/6e85a9ad-624e-40f7-9084-3be164ba8fb2-kube-api-access-xjj27\") pod \"marketplace-operator-79b997595-2wpdc\" (UID: \"6e85a9ad-624e-40f7-9084-3be164ba8fb2\") " pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.385071 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e85a9ad-624e-40f7-9084-3be164ba8fb2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2wpdc\" (UID: \"6e85a9ad-624e-40f7-9084-3be164ba8fb2\") " pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.385097 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlfpd\" (UniqueName: \"kubernetes.io/projected/586db52b-8a9d-4052-a47c-bea7e440b977-kube-api-access-vlfpd\") pod \"redhat-operators-ggnjc\" (UID: \"586db52b-8a9d-4052-a47c-bea7e440b977\") " pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.385121 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e85a9ad-624e-40f7-9084-3be164ba8fb2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2wpdc\" (UID: \"6e85a9ad-624e-40f7-9084-3be164ba8fb2\") " pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.385537 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/586db52b-8a9d-4052-a47c-bea7e440b977-utilities\") pod \"redhat-operators-ggnjc\" (UID: \"586db52b-8a9d-4052-a47c-bea7e440b977\") " pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.385883 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/586db52b-8a9d-4052-a47c-bea7e440b977-catalog-content\") pod \"redhat-operators-ggnjc\" (UID: \"586db52b-8a9d-4052-a47c-bea7e440b977\") " pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.386387 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e85a9ad-624e-40f7-9084-3be164ba8fb2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2wpdc\" (UID: \"6e85a9ad-624e-40f7-9084-3be164ba8fb2\") " pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.390986 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e85a9ad-624e-40f7-9084-3be164ba8fb2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2wpdc\" (UID: \"6e85a9ad-624e-40f7-9084-3be164ba8fb2\") " pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.400383 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjj27\" (UniqueName: \"kubernetes.io/projected/6e85a9ad-624e-40f7-9084-3be164ba8fb2-kube-api-access-xjj27\") pod \"marketplace-operator-79b997595-2wpdc\" (UID: 
\"6e85a9ad-624e-40f7-9084-3be164ba8fb2\") " pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.400652 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlfpd\" (UniqueName: \"kubernetes.io/projected/586db52b-8a9d-4052-a47c-bea7e440b977-kube-api-access-vlfpd\") pod \"redhat-operators-ggnjc\" (UID: \"586db52b-8a9d-4052-a47c-bea7e440b977\") " pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.443151 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" Nov 21 09:55:52 crc kubenswrapper[4972]: E1121 09:55:52.463803 4972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-2wpdc_openshift-marketplace_6e85a9ad-624e-40f7-9084-3be164ba8fb2_0(8b6160a7ad34a388fa2ba4e34fd21de28affcb2b9ab49263b855fe9da2b5dcfa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 09:55:52 crc kubenswrapper[4972]: E1121 09:55:52.463873 4972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-2wpdc_openshift-marketplace_6e85a9ad-624e-40f7-9084-3be164ba8fb2_0(8b6160a7ad34a388fa2ba4e34fd21de28affcb2b9ab49263b855fe9da2b5dcfa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" Nov 21 09:55:52 crc kubenswrapper[4972]: E1121 09:55:52.463893 4972 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-2wpdc_openshift-marketplace_6e85a9ad-624e-40f7-9084-3be164ba8fb2_0(8b6160a7ad34a388fa2ba4e34fd21de28affcb2b9ab49263b855fe9da2b5dcfa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" Nov 21 09:55:52 crc kubenswrapper[4972]: E1121 09:55:52.463930 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"marketplace-operator-79b997595-2wpdc_openshift-marketplace(6e85a9ad-624e-40f7-9084-3be164ba8fb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"marketplace-operator-79b997595-2wpdc_openshift-marketplace(6e85a9ad-624e-40f7-9084-3be164ba8fb2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-2wpdc_openshift-marketplace_6e85a9ad-624e-40f7-9084-3be164ba8fb2_0(8b6160a7ad34a388fa2ba4e34fd21de28affcb2b9ab49263b855fe9da2b5dcfa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" podUID="6e85a9ad-624e-40f7-9084-3be164ba8fb2" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.474375 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tj4g9"] Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.475674 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.486822 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8-catalog-content\") pod \"redhat-marketplace-tj4g9\" (UID: \"7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8\") " pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.486915 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7q8b\" (UniqueName: \"kubernetes.io/projected/7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8-kube-api-access-q7q8b\") pod \"redhat-marketplace-tj4g9\" (UID: \"7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8\") " pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.486969 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8-utilities\") pod \"redhat-marketplace-tj4g9\" (UID: \"7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8\") " pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.494011 4972 generic.go:334] "Generic (PLEG): container finished" podID="608f224f-2fba-44cb-a254-54e0bf1b64ee" containerID="5acdee8f981fac5dc192acdfb2c97c99a59bd050734e91a71bdd2134df96052c" exitCode=0 Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.494110 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-675hp" event={"ID":"608f224f-2fba-44cb-a254-54e0bf1b64ee","Type":"ContainerDied","Data":"5acdee8f981fac5dc192acdfb2c97c99a59bd050734e91a71bdd2134df96052c"} Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.494165 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-675hp" event={"ID":"608f224f-2fba-44cb-a254-54e0bf1b64ee","Type":"ContainerDied","Data":"df32493a87fcd64512ad98ba58bf470a00a4499bddacd2cc10af85457ecaaf1e"} Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.494180 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df32493a87fcd64512ad98ba58bf470a00a4499bddacd2cc10af85457ecaaf1e" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.497243 4972 generic.go:334] "Generic (PLEG): container finished" podID="f6f0334e-e5ea-429a-9b63-7a178a3d7c64" containerID="d2a2b2b39ddbfed79adfee3dd01e5ff9bd7723375a5aabf76dbffcecd95ee69a" exitCode=0 Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.497314 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85d8m" event={"ID":"f6f0334e-e5ea-429a-9b63-7a178a3d7c64","Type":"ContainerDied","Data":"d2a2b2b39ddbfed79adfee3dd01e5ff9bd7723375a5aabf76dbffcecd95ee69a"} Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.505775 4972 generic.go:334] "Generic (PLEG): container finished" podID="efbc5e5e-d261-4f3b-b90b-febd39de0327" containerID="d7427fc0cb0a7be625f24adebdc2cdb70039b44b47b9e999acf4ef3e9445e021" exitCode=0 Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.505873 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vj4wg" 
event={"ID":"efbc5e5e-d261-4f3b-b90b-febd39de0327","Type":"ContainerDied","Data":"d7427fc0cb0a7be625f24adebdc2cdb70039b44b47b9e999acf4ef3e9445e021"} Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.505933 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vj4wg" event={"ID":"efbc5e5e-d261-4f3b-b90b-febd39de0327","Type":"ContainerDied","Data":"b27a28329a62f4c06e4c1f4e58f62a8971ae0460c3528a9a1bf5add021266771"} Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.505946 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b27a28329a62f4c06e4c1f4e58f62a8971ae0460c3528a9a1bf5add021266771" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.510578 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" event={"ID":"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4","Type":"ContainerStarted","Data":"7394ea99a518ee64733a151a7f3f63e116f4e7c2a8d32d1b2c7c142de9e6d65a"} Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.512782 4972 generic.go:334] "Generic (PLEG): container finished" podID="30ee0fd4-14ae-4119-8c87-0ac7e529630a" containerID="6ef5309fc1625c7e7db65fa613e4e7874aeb4c2c0df70f5961c0ca3f968ae722" exitCode=0 Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.512824 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9p8z" event={"ID":"30ee0fd4-14ae-4119-8c87-0ac7e529630a","Type":"ContainerDied","Data":"6ef5309fc1625c7e7db65fa613e4e7874aeb4c2c0df70f5961c0ca3f968ae722"} Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.512857 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9p8z" event={"ID":"30ee0fd4-14ae-4119-8c87-0ac7e529630a","Type":"ContainerDied","Data":"756b300567d5572e643b1d7afbe89322d486ac0104863032fd93446caa5ab0c0"} Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.512870 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="756b300567d5572e643b1d7afbe89322d486ac0104863032fd93446caa5ab0c0" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.514163 4972 generic.go:334] "Generic (PLEG): container finished" podID="a2f7c374-4f03-452f-aaa2-a3ded791d552" containerID="309db296985e6199894874dbdf99862f1b166a00f6c9201bcbb35d7864adfdb8" exitCode=0 Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.514227 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tlh2t" event={"ID":"a2f7c374-4f03-452f-aaa2-a3ded791d552","Type":"ContainerDied","Data":"309db296985e6199894874dbdf99862f1b166a00f6c9201bcbb35d7864adfdb8"} Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.514253 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tlh2t" event={"ID":"a2f7c374-4f03-452f-aaa2-a3ded791d552","Type":"ContainerDied","Data":"78e82612a1129618e9981989c9cb514a2e46d3d5ea188c31dd991c2777feb975"} Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.514266 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78e82612a1129618e9981989c9cb514a2e46d3d5ea188c31dd991c2777feb975" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.514237 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6fm7r" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.514273 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gctmq" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.525166 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vj4wg" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.531243 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tlh2t" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.537109 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n9p8z" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.548588 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-675hp" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.565007 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.570885 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gctmq"] Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.573616 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gctmq"] Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.587238 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/608f224f-2fba-44cb-a254-54e0bf1b64ee-catalog-content\") pod \"608f224f-2fba-44cb-a254-54e0bf1b64ee\" (UID: \"608f224f-2fba-44cb-a254-54e0bf1b64ee\") " Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.587282 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efbc5e5e-d261-4f3b-b90b-febd39de0327-utilities\") pod \"efbc5e5e-d261-4f3b-b90b-febd39de0327\" (UID: \"efbc5e5e-d261-4f3b-b90b-febd39de0327\") " Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.587311 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2f7c374-4f03-452f-aaa2-a3ded791d552-marketplace-trusted-ca\") pod \"a2f7c374-4f03-452f-aaa2-a3ded791d552\" (UID: \"a2f7c374-4f03-452f-aaa2-a3ded791d552\") " Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.587343 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5w8r\" (UniqueName: \"kubernetes.io/projected/efbc5e5e-d261-4f3b-b90b-febd39de0327-kube-api-access-q5w8r\") pod \"efbc5e5e-d261-4f3b-b90b-febd39de0327\" (UID: \"efbc5e5e-d261-4f3b-b90b-febd39de0327\") " Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.587394 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efbc5e5e-d261-4f3b-b90b-febd39de0327-catalog-content\") pod \"efbc5e5e-d261-4f3b-b90b-febd39de0327\" (UID: \"efbc5e5e-d261-4f3b-b90b-febd39de0327\") " Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.587412 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-ljhnb\" (UniqueName: \"kubernetes.io/projected/a2f7c374-4f03-452f-aaa2-a3ded791d552-kube-api-access-ljhnb\") pod \"a2f7c374-4f03-452f-aaa2-a3ded791d552\" (UID: \"a2f7c374-4f03-452f-aaa2-a3ded791d552\") " Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.587430 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4btg\" (UniqueName: \"kubernetes.io/projected/30ee0fd4-14ae-4119-8c87-0ac7e529630a-kube-api-access-t4btg\") pod \"30ee0fd4-14ae-4119-8c87-0ac7e529630a\" (UID: \"30ee0fd4-14ae-4119-8c87-0ac7e529630a\") " Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.587446 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30ee0fd4-14ae-4119-8c87-0ac7e529630a-utilities\") pod \"30ee0fd4-14ae-4119-8c87-0ac7e529630a\" (UID: \"30ee0fd4-14ae-4119-8c87-0ac7e529630a\") " Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.587462 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/608f224f-2fba-44cb-a254-54e0bf1b64ee-utilities\") pod \"608f224f-2fba-44cb-a254-54e0bf1b64ee\" (UID: \"608f224f-2fba-44cb-a254-54e0bf1b64ee\") " Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.587484 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30ee0fd4-14ae-4119-8c87-0ac7e529630a-catalog-content\") pod \"30ee0fd4-14ae-4119-8c87-0ac7e529630a\" (UID: \"30ee0fd4-14ae-4119-8c87-0ac7e529630a\") " Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.587503 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a2f7c374-4f03-452f-aaa2-a3ded791d552-marketplace-operator-metrics\") pod \"a2f7c374-4f03-452f-aaa2-a3ded791d552\" (UID: \"a2f7c374-4f03-452f-aaa2-a3ded791d552\") " Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.587519 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcdmp\" (UniqueName: \"kubernetes.io/projected/608f224f-2fba-44cb-a254-54e0bf1b64ee-kube-api-access-fcdmp\") pod \"608f224f-2fba-44cb-a254-54e0bf1b64ee\" (UID: \"608f224f-2fba-44cb-a254-54e0bf1b64ee\") " Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.587595 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7q8b\" (UniqueName: \"kubernetes.io/projected/7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8-kube-api-access-q7q8b\") pod \"redhat-marketplace-tj4g9\" (UID: \"7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8\") " pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.587626 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8-utilities\") pod \"redhat-marketplace-tj4g9\" (UID: \"7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8\") " pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.587755 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8-catalog-content\") pod \"redhat-marketplace-tj4g9\" (UID: \"7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8\") " 
pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.589724 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30ee0fd4-14ae-4119-8c87-0ac7e529630a-utilities" (OuterVolumeSpecName: "utilities") pod "30ee0fd4-14ae-4119-8c87-0ac7e529630a" (UID: "30ee0fd4-14ae-4119-8c87-0ac7e529630a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.590147 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efbc5e5e-d261-4f3b-b90b-febd39de0327-utilities" (OuterVolumeSpecName: "utilities") pod "efbc5e5e-d261-4f3b-b90b-febd39de0327" (UID: "efbc5e5e-d261-4f3b-b90b-febd39de0327"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.590554 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2f7c374-4f03-452f-aaa2-a3ded791d552-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "a2f7c374-4f03-452f-aaa2-a3ded791d552" (UID: "a2f7c374-4f03-452f-aaa2-a3ded791d552"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.591184 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8-catalog-content\") pod \"redhat-marketplace-tj4g9\" (UID: \"7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8\") " pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.591267 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/608f224f-2fba-44cb-a254-54e0bf1b64ee-utilities" (OuterVolumeSpecName: "utilities") pod "608f224f-2fba-44cb-a254-54e0bf1b64ee" (UID: "608f224f-2fba-44cb-a254-54e0bf1b64ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.591339 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8-utilities\") pod \"redhat-marketplace-tj4g9\" (UID: \"7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8\") " pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.592315 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2f7c374-4f03-452f-aaa2-a3ded791d552-kube-api-access-ljhnb" (OuterVolumeSpecName: "kube-api-access-ljhnb") pod "a2f7c374-4f03-452f-aaa2-a3ded791d552" (UID: "a2f7c374-4f03-452f-aaa2-a3ded791d552"). InnerVolumeSpecName "kube-api-access-ljhnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.593875 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/608f224f-2fba-44cb-a254-54e0bf1b64ee-kube-api-access-fcdmp" (OuterVolumeSpecName: "kube-api-access-fcdmp") pod "608f224f-2fba-44cb-a254-54e0bf1b64ee" (UID: "608f224f-2fba-44cb-a254-54e0bf1b64ee"). InnerVolumeSpecName "kube-api-access-fcdmp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.598325 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30ee0fd4-14ae-4119-8c87-0ac7e529630a-kube-api-access-t4btg" (OuterVolumeSpecName: "kube-api-access-t4btg") pod "30ee0fd4-14ae-4119-8c87-0ac7e529630a" (UID: "30ee0fd4-14ae-4119-8c87-0ac7e529630a"). InnerVolumeSpecName "kube-api-access-t4btg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.600588 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efbc5e5e-d261-4f3b-b90b-febd39de0327-kube-api-access-q5w8r" (OuterVolumeSpecName: "kube-api-access-q5w8r") pod "efbc5e5e-d261-4f3b-b90b-febd39de0327" (UID: "efbc5e5e-d261-4f3b-b90b-febd39de0327"). InnerVolumeSpecName "kube-api-access-q5w8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.602167 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6fm7r"] Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.604562 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f7c374-4f03-452f-aaa2-a3ded791d552-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "a2f7c374-4f03-452f-aaa2-a3ded791d552" (UID: "a2f7c374-4f03-452f-aaa2-a3ded791d552"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.606006 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6fm7r"] Nov 21 09:55:52 crc kubenswrapper[4972]: E1121 09:55:52.610042 4972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-operators-ggnjc_openshift-marketplace_586db52b-8a9d-4052-a47c-bea7e440b977_0(55c69ef50e5ee03a61e21f126ffa862716a1cfa8a55420bd02353ad86dcaa820): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 09:55:52 crc kubenswrapper[4972]: E1121 09:55:52.610170 4972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-operators-ggnjc_openshift-marketplace_586db52b-8a9d-4052-a47c-bea7e440b977_0(55c69ef50e5ee03a61e21f126ffa862716a1cfa8a55420bd02353ad86dcaa820): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:55:52 crc kubenswrapper[4972]: E1121 09:55:52.610240 4972 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-operators-ggnjc_openshift-marketplace_586db52b-8a9d-4052-a47c-bea7e440b977_0(55c69ef50e5ee03a61e21f126ffa862716a1cfa8a55420bd02353ad86dcaa820): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:55:52 crc kubenswrapper[4972]: E1121 09:55:52.610332 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"redhat-operators-ggnjc_openshift-marketplace(586db52b-8a9d-4052-a47c-bea7e440b977)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"redhat-operators-ggnjc_openshift-marketplace(586db52b-8a9d-4052-a47c-bea7e440b977)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-operators-ggnjc_openshift-marketplace_586db52b-8a9d-4052-a47c-bea7e440b977_0(55c69ef50e5ee03a61e21f126ffa862716a1cfa8a55420bd02353ad86dcaa820): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/redhat-operators-ggnjc" podUID="586db52b-8a9d-4052-a47c-bea7e440b977" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.612462 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/608f224f-2fba-44cb-a254-54e0bf1b64ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "608f224f-2fba-44cb-a254-54e0bf1b64ee" (UID: "608f224f-2fba-44cb-a254-54e0bf1b64ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.613304 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7q8b\" (UniqueName: \"kubernetes.io/projected/7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8-kube-api-access-q7q8b\") pod \"redhat-marketplace-tj4g9\" (UID: \"7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8\") " pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.651020 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efbc5e5e-d261-4f3b-b90b-febd39de0327-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efbc5e5e-d261-4f3b-b90b-febd39de0327" (UID: "efbc5e5e-d261-4f3b-b90b-febd39de0327"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.686674 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30ee0fd4-14ae-4119-8c87-0ac7e529630a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30ee0fd4-14ae-4119-8c87-0ac7e529630a" (UID: "30ee0fd4-14ae-4119-8c87-0ac7e529630a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.688445 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5w8r\" (UniqueName: \"kubernetes.io/projected/efbc5e5e-d261-4f3b-b90b-febd39de0327-kube-api-access-q5w8r\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.688497 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efbc5e5e-d261-4f3b-b90b-febd39de0327-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.688510 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljhnb\" (UniqueName: \"kubernetes.io/projected/a2f7c374-4f03-452f-aaa2-a3ded791d552-kube-api-access-ljhnb\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.688522 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4btg\" (UniqueName: \"kubernetes.io/projected/30ee0fd4-14ae-4119-8c87-0ac7e529630a-kube-api-access-t4btg\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.688535 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30ee0fd4-14ae-4119-8c87-0ac7e529630a-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.688547 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/608f224f-2fba-44cb-a254-54e0bf1b64ee-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.688558 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30ee0fd4-14ae-4119-8c87-0ac7e529630a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.688569 4972 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a2f7c374-4f03-452f-aaa2-a3ded791d552-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.688580 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcdmp\" (UniqueName: \"kubernetes.io/projected/608f224f-2fba-44cb-a254-54e0bf1b64ee-kube-api-access-fcdmp\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.688591 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/608f224f-2fba-44cb-a254-54e0bf1b64ee-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.688601 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efbc5e5e-d261-4f3b-b90b-febd39de0327-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.688614 4972 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2f7c374-4f03-452f-aaa2-a3ded791d552-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.698456 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-85d8m" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.875321 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.891001 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t7x9\" (UniqueName: \"kubernetes.io/projected/f6f0334e-e5ea-429a-9b63-7a178a3d7c64-kube-api-access-9t7x9\") pod \"f6f0334e-e5ea-429a-9b63-7a178a3d7c64\" (UID: \"f6f0334e-e5ea-429a-9b63-7a178a3d7c64\") " Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.891062 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6f0334e-e5ea-429a-9b63-7a178a3d7c64-catalog-content\") pod \"f6f0334e-e5ea-429a-9b63-7a178a3d7c64\" (UID: \"f6f0334e-e5ea-429a-9b63-7a178a3d7c64\") " Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.891151 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6f0334e-e5ea-429a-9b63-7a178a3d7c64-utilities\") pod \"f6f0334e-e5ea-429a-9b63-7a178a3d7c64\" (UID: \"f6f0334e-e5ea-429a-9b63-7a178a3d7c64\") " Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.891922 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6f0334e-e5ea-429a-9b63-7a178a3d7c64-utilities" (OuterVolumeSpecName: "utilities") pod "f6f0334e-e5ea-429a-9b63-7a178a3d7c64" (UID: "f6f0334e-e5ea-429a-9b63-7a178a3d7c64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.893794 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6f0334e-e5ea-429a-9b63-7a178a3d7c64-kube-api-access-9t7x9" (OuterVolumeSpecName: "kube-api-access-9t7x9") pod "f6f0334e-e5ea-429a-9b63-7a178a3d7c64" (UID: "f6f0334e-e5ea-429a-9b63-7a178a3d7c64"). InnerVolumeSpecName "kube-api-access-9t7x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:55:52 crc kubenswrapper[4972]: E1121 09:55:52.896108 4972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-tj4g9_openshift-marketplace_7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8_0(f7acc3b745b9b37697e149e1e227c34863d18f471c4e92872788b424176a7284): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 09:55:52 crc kubenswrapper[4972]: E1121 09:55:52.896167 4972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-tj4g9_openshift-marketplace_7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8_0(f7acc3b745b9b37697e149e1e227c34863d18f471c4e92872788b424176a7284): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:55:52 crc kubenswrapper[4972]: E1121 09:55:52.896186 4972 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-tj4g9_openshift-marketplace_7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8_0(f7acc3b745b9b37697e149e1e227c34863d18f471c4e92872788b424176a7284): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:55:52 crc kubenswrapper[4972]: E1121 09:55:52.896232 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"redhat-marketplace-tj4g9_openshift-marketplace(7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"redhat-marketplace-tj4g9_openshift-marketplace(7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-tj4g9_openshift-marketplace_7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8_0(f7acc3b745b9b37697e149e1e227c34863d18f471c4e92872788b424176a7284): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/redhat-marketplace-tj4g9" podUID="7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.933378 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6f0334e-e5ea-429a-9b63-7a178a3d7c64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6f0334e-e5ea-429a-9b63-7a178a3d7c64" (UID: "f6f0334e-e5ea-429a-9b63-7a178a3d7c64"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.992598 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t7x9\" (UniqueName: \"kubernetes.io/projected/f6f0334e-e5ea-429a-9b63-7a178a3d7c64-kube-api-access-9t7x9\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.992635 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6f0334e-e5ea-429a-9b63-7a178a3d7c64-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:52 crc kubenswrapper[4972]: I1121 09:55:52.992644 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6f0334e-e5ea-429a-9b63-7a178a3d7c64-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.525456 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vj4wg" Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.525446 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-675hp" Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.525500 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85d8m" event={"ID":"f6f0334e-e5ea-429a-9b63-7a178a3d7c64","Type":"ContainerDied","Data":"21bd5c2d73507b7c7f89ad95c3b7b99bdf53a668d3b5cf2adc85a888f958a093"} Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.525530 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tlh2t" Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.525587 4972 scope.go:117] "RemoveContainer" containerID="d2a2b2b39ddbfed79adfee3dd01e5ff9bd7723375a5aabf76dbffcecd95ee69a" Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.525761 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n9p8z" Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.527156 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-85d8m" Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.545183 4972 scope.go:117] "RemoveContainer" containerID="14a2a328902c330a11e4e467d2f5d8479dbd2fe605b22b12bb9bd9a662ee979d" Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.564438 4972 scope.go:117] "RemoveContainer" containerID="fea3def9036bef3c9517ae6b38511b1ea9cd3074521717918a4ac7d6695b3ad0" Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.590747 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-675hp"] Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.594415 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-675hp"] Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.610923 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vj4wg"] Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.622909 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vj4wg"] Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.628636 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tlh2t"] Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.636390 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tlh2t"] Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.640063 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n9p8z"] Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.645763 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n9p8z"] Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.648349 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-85d8m"] Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.650656 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-85d8m"] Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.770453 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30ee0fd4-14ae-4119-8c87-0ac7e529630a" path="/var/lib/kubelet/pods/30ee0fd4-14ae-4119-8c87-0ac7e529630a/volumes" Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.771299 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="608f224f-2fba-44cb-a254-54e0bf1b64ee" path="/var/lib/kubelet/pods/608f224f-2fba-44cb-a254-54e0bf1b64ee/volumes" Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.772444 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="936b8a22-cc08-40e6-9b8e-78414812c493" path="/var/lib/kubelet/pods/936b8a22-cc08-40e6-9b8e-78414812c493/volumes" Nov 21 09:55:53 crc 
kubenswrapper[4972]: I1121 09:55:53.772921 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2f7c374-4f03-452f-aaa2-a3ded791d552" path="/var/lib/kubelet/pods/a2f7c374-4f03-452f-aaa2-a3ded791d552/volumes" Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.773426 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d61f4897-9d60-4d41-aa1a-8b16d272309b" path="/var/lib/kubelet/pods/d61f4897-9d60-4d41-aa1a-8b16d272309b/volumes" Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.773763 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efbc5e5e-d261-4f3b-b90b-febd39de0327" path="/var/lib/kubelet/pods/efbc5e5e-d261-4f3b-b90b-febd39de0327/volumes" Nov 21 09:55:53 crc kubenswrapper[4972]: I1121 09:55:53.774754 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6f0334e-e5ea-429a-9b63-7a178a3d7c64" path="/var/lib/kubelet/pods/f6f0334e-e5ea-429a-9b63-7a178a3d7c64/volumes" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.533152 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" event={"ID":"bbcb552c-2818-4dd6-a4ff-4a7a9c5339f4","Type":"ContainerStarted","Data":"724c101e2376ea63f630fbf42ecf94f7058dc0d5b6ab2484bb660582c3ddbaa3"} Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.534429 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.534453 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.534510 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.556454 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.559605 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" podStartSLOduration=8.559587419 podStartE2EDuration="8.559587419s" podCreationTimestamp="2025-11-21 09:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:55:54.557767601 +0000 UTC m=+899.666910119" watchObservedRunningTime="2025-11-21 09:55:54.559587419 +0000 UTC m=+899.668729917" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.561523 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.674619 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-npssb"] Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.674816 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efbc5e5e-d261-4f3b-b90b-febd39de0327" containerName="extract-content" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.674845 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="efbc5e5e-d261-4f3b-b90b-febd39de0327" containerName="extract-content" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.674855 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608f224f-2fba-44cb-a254-54e0bf1b64ee" 
containerName="extract-utilities" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.674861 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="608f224f-2fba-44cb-a254-54e0bf1b64ee" containerName="extract-utilities" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.674868 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ee0fd4-14ae-4119-8c87-0ac7e529630a" containerName="extract-utilities" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.674874 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ee0fd4-14ae-4119-8c87-0ac7e529630a" containerName="extract-utilities" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.674883 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6f0334e-e5ea-429a-9b63-7a178a3d7c64" containerName="registry-server" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.674888 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6f0334e-e5ea-429a-9b63-7a178a3d7c64" containerName="registry-server" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.674895 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6f0334e-e5ea-429a-9b63-7a178a3d7c64" containerName="extract-utilities" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.674901 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6f0334e-e5ea-429a-9b63-7a178a3d7c64" containerName="extract-utilities" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.674909 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ee0fd4-14ae-4119-8c87-0ac7e529630a" containerName="registry-server" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.674914 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ee0fd4-14ae-4119-8c87-0ac7e529630a" containerName="registry-server" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.674924 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f7c374-4f03-452f-aaa2-a3ded791d552" containerName="marketplace-operator" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.674930 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f7c374-4f03-452f-aaa2-a3ded791d552" containerName="marketplace-operator" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.674939 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efbc5e5e-d261-4f3b-b90b-febd39de0327" containerName="registry-server" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.674945 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="efbc5e5e-d261-4f3b-b90b-febd39de0327" containerName="registry-server" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.674954 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6f0334e-e5ea-429a-9b63-7a178a3d7c64" containerName="extract-content" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.674960 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6f0334e-e5ea-429a-9b63-7a178a3d7c64" containerName="extract-content" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.674968 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efbc5e5e-d261-4f3b-b90b-febd39de0327" containerName="extract-utilities" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.674973 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="efbc5e5e-d261-4f3b-b90b-febd39de0327" containerName="extract-utilities" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.674980 4972 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="30ee0fd4-14ae-4119-8c87-0ac7e529630a" containerName="extract-content" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.674986 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ee0fd4-14ae-4119-8c87-0ac7e529630a" containerName="extract-content" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.674993 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608f224f-2fba-44cb-a254-54e0bf1b64ee" containerName="registry-server" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.674999 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="608f224f-2fba-44cb-a254-54e0bf1b64ee" containerName="registry-server" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.675007 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608f224f-2fba-44cb-a254-54e0bf1b64ee" containerName="extract-content" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.675013 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="608f224f-2fba-44cb-a254-54e0bf1b64ee" containerName="extract-content" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.675091 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6f0334e-e5ea-429a-9b63-7a178a3d7c64" containerName="registry-server" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.675102 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="608f224f-2fba-44cb-a254-54e0bf1b64ee" containerName="registry-server" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.675109 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="30ee0fd4-14ae-4119-8c87-0ac7e529630a" containerName="registry-server" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.675118 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="efbc5e5e-d261-4f3b-b90b-febd39de0327" containerName="registry-server" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.675126 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f7c374-4f03-452f-aaa2-a3ded791d552" containerName="marketplace-operator" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.675768 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.677552 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.684930 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2wpdc"] Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.685066 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.685539 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.696965 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ggnjc"] Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.697104 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.697575 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.713515 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tj4g9"] Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.713638 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.714076 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.715074 4972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-2wpdc_openshift-marketplace_6e85a9ad-624e-40f7-9084-3be164ba8fb2_0(c705d21350b92123876f80012388e12298430dc93c0583b9d8b833993fcb11d7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.715175 4972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-2wpdc_openshift-marketplace_6e85a9ad-624e-40f7-9084-3be164ba8fb2_0(c705d21350b92123876f80012388e12298430dc93c0583b9d8b833993fcb11d7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.715201 4972 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-2wpdc_openshift-marketplace_6e85a9ad-624e-40f7-9084-3be164ba8fb2_0(c705d21350b92123876f80012388e12298430dc93c0583b9d8b833993fcb11d7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.715248 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"marketplace-operator-79b997595-2wpdc_openshift-marketplace(6e85a9ad-624e-40f7-9084-3be164ba8fb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"marketplace-operator-79b997595-2wpdc_openshift-marketplace(6e85a9ad-624e-40f7-9084-3be164ba8fb2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-2wpdc_openshift-marketplace_6e85a9ad-624e-40f7-9084-3be164ba8fb2_0(c705d21350b92123876f80012388e12298430dc93c0583b9d8b833993fcb11d7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" podUID="6e85a9ad-624e-40f7-9084-3be164ba8fb2" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.730488 4972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-operators-ggnjc_openshift-marketplace_586db52b-8a9d-4052-a47c-bea7e440b977_0(7cee2cd949b4b8e3509d22d4916f30f9550e945fb21132d10a0c7fed2f67eecf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.730554 4972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-operators-ggnjc_openshift-marketplace_586db52b-8a9d-4052-a47c-bea7e440b977_0(7cee2cd949b4b8e3509d22d4916f30f9550e945fb21132d10a0c7fed2f67eecf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.730576 4972 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-operators-ggnjc_openshift-marketplace_586db52b-8a9d-4052-a47c-bea7e440b977_0(7cee2cd949b4b8e3509d22d4916f30f9550e945fb21132d10a0c7fed2f67eecf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.730620 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"redhat-operators-ggnjc_openshift-marketplace(586db52b-8a9d-4052-a47c-bea7e440b977)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"redhat-operators-ggnjc_openshift-marketplace(586db52b-8a9d-4052-a47c-bea7e440b977)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-operators-ggnjc_openshift-marketplace_586db52b-8a9d-4052-a47c-bea7e440b977_0(7cee2cd949b4b8e3509d22d4916f30f9550e945fb21132d10a0c7fed2f67eecf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/redhat-operators-ggnjc" podUID="586db52b-8a9d-4052-a47c-bea7e440b977" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.738862 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-npssb"] Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.746333 4972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-tj4g9_openshift-marketplace_7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8_0(b0d6334b04f81ecbf138b4976fb72ec3fe89583b6f25dac1760b33b6564cc144): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.746388 4972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-tj4g9_openshift-marketplace_7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8_0(b0d6334b04f81ecbf138b4976fb72ec3fe89583b6f25dac1760b33b6564cc144): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.746410 4972 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-tj4g9_openshift-marketplace_7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8_0(b0d6334b04f81ecbf138b4976fb72ec3fe89583b6f25dac1760b33b6564cc144): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:55:54 crc kubenswrapper[4972]: E1121 09:55:54.746452 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"redhat-marketplace-tj4g9_openshift-marketplace(7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"redhat-marketplace-tj4g9_openshift-marketplace(7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-tj4g9_openshift-marketplace_7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8_0(b0d6334b04f81ecbf138b4976fb72ec3fe89583b6f25dac1760b33b6564cc144): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/redhat-marketplace-tj4g9" podUID="7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.812979 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad421b56-53fa-4ad2-9233-eb8e2d72be57-utilities\") pod \"certified-operators-npssb\" (UID: \"ad421b56-53fa-4ad2-9233-eb8e2d72be57\") " pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.813054 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad421b56-53fa-4ad2-9233-eb8e2d72be57-catalog-content\") pod \"certified-operators-npssb\" (UID: \"ad421b56-53fa-4ad2-9233-eb8e2d72be57\") " pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.813076 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ndtx\" (UniqueName: \"kubernetes.io/projected/ad421b56-53fa-4ad2-9233-eb8e2d72be57-kube-api-access-5ndtx\") pod \"certified-operators-npssb\" (UID: \"ad421b56-53fa-4ad2-9233-eb8e2d72be57\") " pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.873783 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rttlt"] Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.875230 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.877135 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.886272 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rttlt"] Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.914962 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad421b56-53fa-4ad2-9233-eb8e2d72be57-utilities\") pod \"certified-operators-npssb\" (UID: \"ad421b56-53fa-4ad2-9233-eb8e2d72be57\") " pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.915076 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad421b56-53fa-4ad2-9233-eb8e2d72be57-catalog-content\") pod \"certified-operators-npssb\" (UID: \"ad421b56-53fa-4ad2-9233-eb8e2d72be57\") " pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.915117 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ndtx\" (UniqueName: \"kubernetes.io/projected/ad421b56-53fa-4ad2-9233-eb8e2d72be57-kube-api-access-5ndtx\") pod \"certified-operators-npssb\" (UID: \"ad421b56-53fa-4ad2-9233-eb8e2d72be57\") " pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.915553 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad421b56-53fa-4ad2-9233-eb8e2d72be57-catalog-content\") pod \"certified-operators-npssb\" (UID: \"ad421b56-53fa-4ad2-9233-eb8e2d72be57\") " pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.915675 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad421b56-53fa-4ad2-9233-eb8e2d72be57-utilities\") pod \"certified-operators-npssb\" (UID: \"ad421b56-53fa-4ad2-9233-eb8e2d72be57\") " pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.937537 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ndtx\" (UniqueName: \"kubernetes.io/projected/ad421b56-53fa-4ad2-9233-eb8e2d72be57-kube-api-access-5ndtx\") pod \"certified-operators-npssb\" (UID: \"ad421b56-53fa-4ad2-9233-eb8e2d72be57\") " pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:55:54 crc kubenswrapper[4972]: I1121 09:55:54.991356 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:55:55 crc kubenswrapper[4972]: E1121 09:55:55.013767 4972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-npssb_openshift-marketplace_ad421b56-53fa-4ad2-9233-eb8e2d72be57_0(7ed5a8c82ba6ac077f88a7ca87e30cd665b459b3d51cb03eeccc4afc3e58de54): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 21 09:55:55 crc kubenswrapper[4972]: E1121 09:55:55.013820 4972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-npssb_openshift-marketplace_ad421b56-53fa-4ad2-9233-eb8e2d72be57_0(7ed5a8c82ba6ac077f88a7ca87e30cd665b459b3d51cb03eeccc4afc3e58de54): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:55:55 crc kubenswrapper[4972]: E1121 09:55:55.013873 4972 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-npssb_openshift-marketplace_ad421b56-53fa-4ad2-9233-eb8e2d72be57_0(7ed5a8c82ba6ac077f88a7ca87e30cd665b459b3d51cb03eeccc4afc3e58de54): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:55:55 crc kubenswrapper[4972]: E1121 09:55:55.013910 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"certified-operators-npssb_openshift-marketplace(ad421b56-53fa-4ad2-9233-eb8e2d72be57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"certified-operators-npssb_openshift-marketplace(ad421b56-53fa-4ad2-9233-eb8e2d72be57)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-npssb_openshift-marketplace_ad421b56-53fa-4ad2-9233-eb8e2d72be57_0(7ed5a8c82ba6ac077f88a7ca87e30cd665b459b3d51cb03eeccc4afc3e58de54): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/certified-operators-npssb" podUID="ad421b56-53fa-4ad2-9233-eb8e2d72be57" Nov 21 09:55:55 crc kubenswrapper[4972]: I1121 09:55:55.016481 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/561f007e-bddb-4c6f-83a3-d9052a392b37-catalog-content\") pod \"community-operators-rttlt\" (UID: \"561f007e-bddb-4c6f-83a3-d9052a392b37\") " pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:55:55 crc kubenswrapper[4972]: I1121 09:55:55.016611 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fsrq\" (UniqueName: \"kubernetes.io/projected/561f007e-bddb-4c6f-83a3-d9052a392b37-kube-api-access-2fsrq\") pod \"community-operators-rttlt\" (UID: \"561f007e-bddb-4c6f-83a3-d9052a392b37\") " pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:55:55 crc kubenswrapper[4972]: I1121 09:55:55.016639 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/561f007e-bddb-4c6f-83a3-d9052a392b37-utilities\") pod \"community-operators-rttlt\" (UID: \"561f007e-bddb-4c6f-83a3-d9052a392b37\") " pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:55:55 crc kubenswrapper[4972]: I1121 09:55:55.117477 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/561f007e-bddb-4c6f-83a3-d9052a392b37-catalog-content\") pod \"community-operators-rttlt\" (UID: \"561f007e-bddb-4c6f-83a3-d9052a392b37\") " pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:55:55 crc kubenswrapper[4972]: I1121 
09:55:55.117592 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fsrq\" (UniqueName: \"kubernetes.io/projected/561f007e-bddb-4c6f-83a3-d9052a392b37-kube-api-access-2fsrq\") pod \"community-operators-rttlt\" (UID: \"561f007e-bddb-4c6f-83a3-d9052a392b37\") " pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:55:55 crc kubenswrapper[4972]: I1121 09:55:55.117626 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/561f007e-bddb-4c6f-83a3-d9052a392b37-utilities\") pod \"community-operators-rttlt\" (UID: \"561f007e-bddb-4c6f-83a3-d9052a392b37\") " pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:55:55 crc kubenswrapper[4972]: I1121 09:55:55.118444 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/561f007e-bddb-4c6f-83a3-d9052a392b37-catalog-content\") pod \"community-operators-rttlt\" (UID: \"561f007e-bddb-4c6f-83a3-d9052a392b37\") " pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:55:55 crc kubenswrapper[4972]: I1121 09:55:55.118464 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/561f007e-bddb-4c6f-83a3-d9052a392b37-utilities\") pod \"community-operators-rttlt\" (UID: \"561f007e-bddb-4c6f-83a3-d9052a392b37\") " pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:55:55 crc kubenswrapper[4972]: I1121 09:55:55.136201 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fsrq\" (UniqueName: \"kubernetes.io/projected/561f007e-bddb-4c6f-83a3-d9052a392b37-kube-api-access-2fsrq\") pod \"community-operators-rttlt\" (UID: \"561f007e-bddb-4c6f-83a3-d9052a392b37\") " pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:55:55 crc kubenswrapper[4972]: I1121 09:55:55.201480 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:55:55 crc kubenswrapper[4972]: E1121 09:55:55.219227 4972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-rttlt_openshift-marketplace_561f007e-bddb-4c6f-83a3-d9052a392b37_0(d8b971f96122f91f07e8c7c6fcc9e74b1ea3fa3fcf342c86823de798e60df163): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 09:55:55 crc kubenswrapper[4972]: E1121 09:55:55.219304 4972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-rttlt_openshift-marketplace_561f007e-bddb-4c6f-83a3-d9052a392b37_0(d8b971f96122f91f07e8c7c6fcc9e74b1ea3fa3fcf342c86823de798e60df163): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:55:55 crc kubenswrapper[4972]: E1121 09:55:55.219325 4972 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-rttlt_openshift-marketplace_561f007e-bddb-4c6f-83a3-d9052a392b37_0(d8b971f96122f91f07e8c7c6fcc9e74b1ea3fa3fcf342c86823de798e60df163): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:55:55 crc kubenswrapper[4972]: E1121 09:55:55.219381 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"community-operators-rttlt_openshift-marketplace(561f007e-bddb-4c6f-83a3-d9052a392b37)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"community-operators-rttlt_openshift-marketplace(561f007e-bddb-4c6f-83a3-d9052a392b37)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-rttlt_openshift-marketplace_561f007e-bddb-4c6f-83a3-d9052a392b37_0(d8b971f96122f91f07e8c7c6fcc9e74b1ea3fa3fcf342c86823de798e60df163): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/community-operators-rttlt" podUID="561f007e-bddb-4c6f-83a3-d9052a392b37" Nov 21 09:55:55 crc kubenswrapper[4972]: I1121 09:55:55.540564 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:55:55 crc kubenswrapper[4972]: I1121 09:55:55.540646 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:55:55 crc kubenswrapper[4972]: I1121 09:55:55.541032 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:55:55 crc kubenswrapper[4972]: I1121 09:55:55.541720 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:55:55 crc kubenswrapper[4972]: E1121 09:55:55.575918 4972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-rttlt_openshift-marketplace_561f007e-bddb-4c6f-83a3-d9052a392b37_0(b005f63649142f6b74c1a09a698cc384bc94ea75c93c470832f462654ffb2e04): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 09:55:55 crc kubenswrapper[4972]: E1121 09:55:55.575991 4972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-rttlt_openshift-marketplace_561f007e-bddb-4c6f-83a3-d9052a392b37_0(b005f63649142f6b74c1a09a698cc384bc94ea75c93c470832f462654ffb2e04): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:55:55 crc kubenswrapper[4972]: E1121 09:55:55.576011 4972 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-rttlt_openshift-marketplace_561f007e-bddb-4c6f-83a3-d9052a392b37_0(b005f63649142f6b74c1a09a698cc384bc94ea75c93c470832f462654ffb2e04): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:55:55 crc kubenswrapper[4972]: E1121 09:55:55.576060 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"community-operators-rttlt_openshift-marketplace(561f007e-bddb-4c6f-83a3-d9052a392b37)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"community-operators-rttlt_openshift-marketplace(561f007e-bddb-4c6f-83a3-d9052a392b37)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-rttlt_openshift-marketplace_561f007e-bddb-4c6f-83a3-d9052a392b37_0(b005f63649142f6b74c1a09a698cc384bc94ea75c93c470832f462654ffb2e04): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/community-operators-rttlt" podUID="561f007e-bddb-4c6f-83a3-d9052a392b37" Nov 21 09:55:55 crc kubenswrapper[4972]: E1121 09:55:55.579488 4972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-npssb_openshift-marketplace_ad421b56-53fa-4ad2-9233-eb8e2d72be57_0(68149603207d54eaaf02c1a8c4eb05ee97b13db8453f9e43ac442ecc7885776d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 09:55:55 crc kubenswrapper[4972]: E1121 09:55:55.579548 4972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-npssb_openshift-marketplace_ad421b56-53fa-4ad2-9233-eb8e2d72be57_0(68149603207d54eaaf02c1a8c4eb05ee97b13db8453f9e43ac442ecc7885776d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:55:55 crc kubenswrapper[4972]: E1121 09:55:55.579574 4972 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-npssb_openshift-marketplace_ad421b56-53fa-4ad2-9233-eb8e2d72be57_0(68149603207d54eaaf02c1a8c4eb05ee97b13db8453f9e43ac442ecc7885776d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:55:55 crc kubenswrapper[4972]: E1121 09:55:55.579628 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"certified-operators-npssb_openshift-marketplace(ad421b56-53fa-4ad2-9233-eb8e2d72be57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"certified-operators-npssb_openshift-marketplace(ad421b56-53fa-4ad2-9233-eb8e2d72be57)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-npssb_openshift-marketplace_ad421b56-53fa-4ad2-9233-eb8e2d72be57_0(68149603207d54eaaf02c1a8c4eb05ee97b13db8453f9e43ac442ecc7885776d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/certified-operators-npssb" podUID="ad421b56-53fa-4ad2-9233-eb8e2d72be57" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.083472 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tp9g9"] Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.084891 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.089899 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tp9g9"] Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.144269 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4909061-0974-4269-bdb7-5617d42e01af-utilities\") pod \"redhat-operators-tp9g9\" (UID: \"e4909061-0974-4269-bdb7-5617d42e01af\") " pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.144320 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82hsx\" (UniqueName: \"kubernetes.io/projected/e4909061-0974-4269-bdb7-5617d42e01af-kube-api-access-82hsx\") pod \"redhat-operators-tp9g9\" (UID: \"e4909061-0974-4269-bdb7-5617d42e01af\") " pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.144419 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4909061-0974-4269-bdb7-5617d42e01af-catalog-content\") pod \"redhat-operators-tp9g9\" (UID: \"e4909061-0974-4269-bdb7-5617d42e01af\") " pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.247867 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4909061-0974-4269-bdb7-5617d42e01af-utilities\") pod \"redhat-operators-tp9g9\" (UID: \"e4909061-0974-4269-bdb7-5617d42e01af\") " pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.248346 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82hsx\" (UniqueName: \"kubernetes.io/projected/e4909061-0974-4269-bdb7-5617d42e01af-kube-api-access-82hsx\") pod \"redhat-operators-tp9g9\" (UID: \"e4909061-0974-4269-bdb7-5617d42e01af\") " pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.248201 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4909061-0974-4269-bdb7-5617d42e01af-utilities\") pod \"redhat-operators-tp9g9\" (UID: \"e4909061-0974-4269-bdb7-5617d42e01af\") " pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.248429 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4909061-0974-4269-bdb7-5617d42e01af-catalog-content\") pod \"redhat-operators-tp9g9\" (UID: \"e4909061-0974-4269-bdb7-5617d42e01af\") " pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.249141 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4909061-0974-4269-bdb7-5617d42e01af-catalog-content\") pod \"redhat-operators-tp9g9\" (UID: \"e4909061-0974-4269-bdb7-5617d42e01af\") " pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.273242 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-82hsx\" (UniqueName: \"kubernetes.io/projected/e4909061-0974-4269-bdb7-5617d42e01af-kube-api-access-82hsx\") pod \"redhat-operators-tp9g9\" (UID: \"e4909061-0974-4269-bdb7-5617d42e01af\") " pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.280142 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m4qfs"] Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.282245 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4qfs" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.291030 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4qfs"] Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.409760 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:55:57 crc kubenswrapper[4972]: E1121 09:55:57.437505 4972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-operators-tp9g9_openshift-marketplace_e4909061-0974-4269-bdb7-5617d42e01af_0(8f60c42f41352f6b3c33ac046950db17eae3b890eb5626d35a534b54c85c922d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 09:55:57 crc kubenswrapper[4972]: E1121 09:55:57.437584 4972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-operators-tp9g9_openshift-marketplace_e4909061-0974-4269-bdb7-5617d42e01af_0(8f60c42f41352f6b3c33ac046950db17eae3b890eb5626d35a534b54c85c922d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:55:57 crc kubenswrapper[4972]: E1121 09:55:57.437605 4972 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-operators-tp9g9_openshift-marketplace_e4909061-0974-4269-bdb7-5617d42e01af_0(8f60c42f41352f6b3c33ac046950db17eae3b890eb5626d35a534b54c85c922d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:55:57 crc kubenswrapper[4972]: E1121 09:55:57.437659 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"redhat-operators-tp9g9_openshift-marketplace(e4909061-0974-4269-bdb7-5617d42e01af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"redhat-operators-tp9g9_openshift-marketplace(e4909061-0974-4269-bdb7-5617d42e01af)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-operators-tp9g9_openshift-marketplace_e4909061-0974-4269-bdb7-5617d42e01af_0(8f60c42f41352f6b3c33ac046950db17eae3b890eb5626d35a534b54c85c922d): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/redhat-operators-tp9g9" podUID="e4909061-0974-4269-bdb7-5617d42e01af" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.449905 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktsqn\" (UniqueName: \"kubernetes.io/projected/b698e278-793a-414e-9b74-54abd348e37a-kube-api-access-ktsqn\") pod \"redhat-marketplace-m4qfs\" (UID: \"b698e278-793a-414e-9b74-54abd348e37a\") " pod="openshift-marketplace/redhat-marketplace-m4qfs" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.449960 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b698e278-793a-414e-9b74-54abd348e37a-catalog-content\") pod \"redhat-marketplace-m4qfs\" (UID: \"b698e278-793a-414e-9b74-54abd348e37a\") " pod="openshift-marketplace/redhat-marketplace-m4qfs" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.449987 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b698e278-793a-414e-9b74-54abd348e37a-utilities\") pod \"redhat-marketplace-m4qfs\" (UID: \"b698e278-793a-414e-9b74-54abd348e37a\") " pod="openshift-marketplace/redhat-marketplace-m4qfs" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.551047 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.551530 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.551949 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktsqn\" (UniqueName: \"kubernetes.io/projected/b698e278-793a-414e-9b74-54abd348e37a-kube-api-access-ktsqn\") pod \"redhat-marketplace-m4qfs\" (UID: \"b698e278-793a-414e-9b74-54abd348e37a\") " pod="openshift-marketplace/redhat-marketplace-m4qfs" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.552003 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b698e278-793a-414e-9b74-54abd348e37a-catalog-content\") pod \"redhat-marketplace-m4qfs\" (UID: \"b698e278-793a-414e-9b74-54abd348e37a\") " pod="openshift-marketplace/redhat-marketplace-m4qfs" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.552031 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b698e278-793a-414e-9b74-54abd348e37a-utilities\") pod \"redhat-marketplace-m4qfs\" (UID: \"b698e278-793a-414e-9b74-54abd348e37a\") " pod="openshift-marketplace/redhat-marketplace-m4qfs" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.552388 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b698e278-793a-414e-9b74-54abd348e37a-utilities\") pod \"redhat-marketplace-m4qfs\" (UID: \"b698e278-793a-414e-9b74-54abd348e37a\") " pod="openshift-marketplace/redhat-marketplace-m4qfs" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.552511 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b698e278-793a-414e-9b74-54abd348e37a-catalog-content\") pod \"redhat-marketplace-m4qfs\" (UID: \"b698e278-793a-414e-9b74-54abd348e37a\") " pod="openshift-marketplace/redhat-marketplace-m4qfs" Nov 21 09:55:57 crc kubenswrapper[4972]: E1121 09:55:57.571071 4972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-operators-tp9g9_openshift-marketplace_e4909061-0974-4269-bdb7-5617d42e01af_0(6b1ea982547343fbba42896dcdfb95745c7d25a97230a6928cd6c2f58007699f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 09:55:57 crc kubenswrapper[4972]: E1121 09:55:57.571158 4972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-operators-tp9g9_openshift-marketplace_e4909061-0974-4269-bdb7-5617d42e01af_0(6b1ea982547343fbba42896dcdfb95745c7d25a97230a6928cd6c2f58007699f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:55:57 crc kubenswrapper[4972]: E1121 09:55:57.571186 4972 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-operators-tp9g9_openshift-marketplace_e4909061-0974-4269-bdb7-5617d42e01af_0(6b1ea982547343fbba42896dcdfb95745c7d25a97230a6928cd6c2f58007699f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:55:57 crc kubenswrapper[4972]: E1121 09:55:57.571246 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"redhat-operators-tp9g9_openshift-marketplace(e4909061-0974-4269-bdb7-5617d42e01af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"redhat-operators-tp9g9_openshift-marketplace(e4909061-0974-4269-bdb7-5617d42e01af)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-operators-tp9g9_openshift-marketplace_e4909061-0974-4269-bdb7-5617d42e01af_0(6b1ea982547343fbba42896dcdfb95745c7d25a97230a6928cd6c2f58007699f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/redhat-operators-tp9g9" podUID="e4909061-0974-4269-bdb7-5617d42e01af" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.580481 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktsqn\" (UniqueName: \"kubernetes.io/projected/b698e278-793a-414e-9b74-54abd348e37a-kube-api-access-ktsqn\") pod \"redhat-marketplace-m4qfs\" (UID: \"b698e278-793a-414e-9b74-54abd348e37a\") " pod="openshift-marketplace/redhat-marketplace-m4qfs" Nov 21 09:55:57 crc kubenswrapper[4972]: I1121 09:55:57.612610 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4qfs" Nov 21 09:55:57 crc kubenswrapper[4972]: E1121 09:55:57.635411 4972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-m4qfs_openshift-marketplace_b698e278-793a-414e-9b74-54abd348e37a_0(a4c16098fcb640251bd21b1fd69d02bd5c8ee3918044a905b97fb1f569c624cb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 21 09:55:57 crc kubenswrapper[4972]: E1121 09:55:57.635478 4972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-m4qfs_openshift-marketplace_b698e278-793a-414e-9b74-54abd348e37a_0(a4c16098fcb640251bd21b1fd69d02bd5c8ee3918044a905b97fb1f569c624cb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-m4qfs" Nov 21 09:55:57 crc kubenswrapper[4972]: E1121 09:55:57.635508 4972 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-m4qfs_openshift-marketplace_b698e278-793a-414e-9b74-54abd348e37a_0(a4c16098fcb640251bd21b1fd69d02bd5c8ee3918044a905b97fb1f569c624cb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-m4qfs" Nov 21 09:55:57 crc kubenswrapper[4972]: E1121 09:55:57.635562 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"redhat-marketplace-m4qfs_openshift-marketplace(b698e278-793a-414e-9b74-54abd348e37a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"redhat-marketplace-m4qfs_openshift-marketplace(b698e278-793a-414e-9b74-54abd348e37a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-m4qfs_openshift-marketplace_b698e278-793a-414e-9b74-54abd348e37a_0(a4c16098fcb640251bd21b1fd69d02bd5c8ee3918044a905b97fb1f569c624cb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/redhat-marketplace-m4qfs" podUID="b698e278-793a-414e-9b74-54abd348e37a" Nov 21 09:55:58 crc kubenswrapper[4972]: I1121 09:55:58.557446 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4qfs" Nov 21 09:55:58 crc kubenswrapper[4972]: I1121 09:55:58.559080 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4qfs" Nov 21 09:55:58 crc kubenswrapper[4972]: I1121 09:55:58.836460 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4qfs"] Nov 21 09:55:59 crc kubenswrapper[4972]: I1121 09:55:59.564229 4972 generic.go:334] "Generic (PLEG): container finished" podID="b698e278-793a-414e-9b74-54abd348e37a" containerID="fe5b7a7fa564e1ac6a7d8013c706ba283b9f4ca45544c50dde807f6b0bbc6b3f" exitCode=0 Nov 21 09:55:59 crc kubenswrapper[4972]: I1121 09:55:59.564508 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4qfs" event={"ID":"b698e278-793a-414e-9b74-54abd348e37a","Type":"ContainerDied","Data":"fe5b7a7fa564e1ac6a7d8013c706ba283b9f4ca45544c50dde807f6b0bbc6b3f"} Nov 21 09:55:59 crc kubenswrapper[4972]: I1121 09:55:59.564533 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4qfs" event={"ID":"b698e278-793a-414e-9b74-54abd348e37a","Type":"ContainerStarted","Data":"8ab7fc0b810febbcb717c91d2aa88721cda45d5a02f321dd8a3d7e6efc6f540b"} Nov 21 09:56:00 crc kubenswrapper[4972]: I1121 09:56:00.572626 4972 generic.go:334] "Generic (PLEG): container finished" podID="b698e278-793a-414e-9b74-54abd348e37a" containerID="18f3ec01ff46657c0ff023db125c12d2cd4296bf36627f7cf9af44d5c0a2da28" exitCode=0 Nov 21 09:56:00 crc kubenswrapper[4972]: I1121 09:56:00.572678 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4qfs" event={"ID":"b698e278-793a-414e-9b74-54abd348e37a","Type":"ContainerDied","Data":"18f3ec01ff46657c0ff023db125c12d2cd4296bf36627f7cf9af44d5c0a2da28"} Nov 21 09:56:01 crc kubenswrapper[4972]: I1121 09:56:01.581338 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4qfs" event={"ID":"b698e278-793a-414e-9b74-54abd348e37a","Type":"ContainerStarted","Data":"081a90ed7afc5a1b93f58f73eabdf47fa1b52b1d36cb8702bc705f3c64d1c73a"} Nov 21 09:56:01 crc kubenswrapper[4972]: I1121 09:56:01.605222 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m4qfs" podStartSLOduration=3.214983601 podStartE2EDuration="4.605196122s" podCreationTimestamp="2025-11-21 09:55:57 +0000 UTC" firstStartedPulling="2025-11-21 09:55:59.565697374 +0000 UTC m=+904.674839872" lastFinishedPulling="2025-11-21 09:56:00.955909895 +0000 UTC m=+906.065052393" observedRunningTime="2025-11-21 09:56:01.602256504 +0000 UTC m=+906.711399002" watchObservedRunningTime="2025-11-21 09:56:01.605196122 +0000 UTC m=+906.714338660" Nov 21 09:56:05 crc kubenswrapper[4972]: I1121 09:56:05.758627 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" Nov 21 09:56:05 crc kubenswrapper[4972]: I1121 09:56:05.761273 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" Nov 21 09:56:05 crc kubenswrapper[4972]: I1121 09:56:05.944856 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2wpdc"] Nov 21 09:56:06 crc kubenswrapper[4972]: I1121 09:56:06.575125 4972 scope.go:117] "RemoveContainer" containerID="16f24b4aaafd96da6cc8c7d41e8bfe31f25673b06e6ae30d236d6341b6acb3ba" Nov 21 09:56:06 crc kubenswrapper[4972]: I1121 09:56:06.593871 4972 scope.go:117] "RemoveContainer" containerID="5acdee8f981fac5dc192acdfb2c97c99a59bd050734e91a71bdd2134df96052c" Nov 21 09:56:06 crc kubenswrapper[4972]: I1121 09:56:06.607532 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" event={"ID":"6e85a9ad-624e-40f7-9084-3be164ba8fb2","Type":"ContainerStarted","Data":"67a116b0447327bc538b951596914083344684d4d938db4519fdbfdda2599c36"} Nov 21 09:56:06 crc kubenswrapper[4972]: I1121 09:56:06.607576 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" event={"ID":"6e85a9ad-624e-40f7-9084-3be164ba8fb2","Type":"ContainerStarted","Data":"d4e56f371ebfc55adc77f10d5fab029956fdc716f0909784d598176625bb4799"} Nov 21 09:56:06 crc kubenswrapper[4972]: I1121 09:56:06.608109 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" Nov 21 09:56:06 crc kubenswrapper[4972]: I1121 09:56:06.610270 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" Nov 21 09:56:06 crc kubenswrapper[4972]: I1121 09:56:06.610555 4972 scope.go:117] "RemoveContainer" containerID="309db296985e6199894874dbdf99862f1b166a00f6c9201bcbb35d7864adfdb8" Nov 21 09:56:06 crc kubenswrapper[4972]: I1121 09:56:06.625250 4972 scope.go:117] "RemoveContainer" containerID="94b1d97c84c9ee803ad51c051ebda5421b2005cf15277b6212a5b5412d55777f" Nov 21 09:56:06 crc kubenswrapper[4972]: I1121 09:56:06.629699 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2wpdc" podStartSLOduration=14.629680707 podStartE2EDuration="14.629680707s" podCreationTimestamp="2025-11-21 09:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:56:06.626186284 +0000 UTC m=+911.735328802" watchObservedRunningTime="2025-11-21 09:56:06.629680707 +0000 UTC m=+911.738823205" Nov 21 09:56:06 crc kubenswrapper[4972]: I1121 09:56:06.639894 4972 scope.go:117] "RemoveContainer" containerID="6ef5309fc1625c7e7db65fa613e4e7874aeb4c2c0df70f5961c0ca3f968ae722" Nov 21 09:56:06 crc kubenswrapper[4972]: I1121 09:56:06.652607 4972 scope.go:117] "RemoveContainer" containerID="8dbe9e7a2bbd27c912cf804f8d8a11bf3282f37eb448e30e34374680542563f0" Nov 21 09:56:06 crc kubenswrapper[4972]: I1121 09:56:06.672674 4972 scope.go:117] "RemoveContainer" containerID="5fc381b92d5ff8f7e518e54ab65458e963f37662fd0657d3b81fd72896c18023" Nov 21 09:56:06 crc kubenswrapper[4972]: I1121 09:56:06.714772 4972 scope.go:117] "RemoveContainer" containerID="654633c4c1d491bcd03ae118fca3940aa4d22444b34fe3f2c41672f5beaf07b3" Nov 21 09:56:06 crc kubenswrapper[4972]: I1121 09:56:06.739609 4972 scope.go:117] "RemoveContainer" containerID="d7427fc0cb0a7be625f24adebdc2cdb70039b44b47b9e999acf4ef3e9445e021" Nov 21 
09:56:06 crc kubenswrapper[4972]: I1121 09:56:06.758727 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:56:06 crc kubenswrapper[4972]: I1121 09:56:06.759448 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:56:06 crc kubenswrapper[4972]: I1121 09:56:06.780960 4972 scope.go:117] "RemoveContainer" containerID="aaaba8d2f8c9a49e68fa20e547c582ed289f0a44f78922a49ccb6c0332cb22f3" Nov 21 09:56:06 crc kubenswrapper[4972]: I1121 09:56:06.948413 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ggnjc"] Nov 21 09:56:06 crc kubenswrapper[4972]: W1121 09:56:06.954880 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod586db52b_8a9d_4052_a47c_bea7e440b977.slice/crio-6a438a4410b2a4d462ece31b15a44c2e7b322005805f2765820ceec75abbd7b6 WatchSource:0}: Error finding container 6a438a4410b2a4d462ece31b15a44c2e7b322005805f2765820ceec75abbd7b6: Status 404 returned error can't find the container with id 6a438a4410b2a4d462ece31b15a44c2e7b322005805f2765820ceec75abbd7b6 Nov 21 09:56:07 crc kubenswrapper[4972]: I1121 09:56:07.613371 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m4qfs" Nov 21 09:56:07 crc kubenswrapper[4972]: I1121 09:56:07.613896 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m4qfs" Nov 21 09:56:07 crc kubenswrapper[4972]: I1121 09:56:07.616719 4972 generic.go:334] "Generic (PLEG): container finished" podID="586db52b-8a9d-4052-a47c-bea7e440b977" containerID="85d5fd7ad1351651afc383acae3502996574553281db5c41b6fb66d48d7be31a" exitCode=0 Nov 21 09:56:07 crc kubenswrapper[4972]: I1121 09:56:07.616994 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggnjc" event={"ID":"586db52b-8a9d-4052-a47c-bea7e440b977","Type":"ContainerDied","Data":"85d5fd7ad1351651afc383acae3502996574553281db5c41b6fb66d48d7be31a"} Nov 21 09:56:07 crc kubenswrapper[4972]: I1121 09:56:07.617038 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggnjc" event={"ID":"586db52b-8a9d-4052-a47c-bea7e440b977","Type":"ContainerStarted","Data":"6a438a4410b2a4d462ece31b15a44c2e7b322005805f2765820ceec75abbd7b6"} Nov 21 09:56:07 crc kubenswrapper[4972]: I1121 09:56:07.675856 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m4qfs" Nov 21 09:56:07 crc kubenswrapper[4972]: I1121 09:56:07.758929 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:56:07 crc kubenswrapper[4972]: I1121 09:56:07.758946 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:56:07 crc kubenswrapper[4972]: I1121 09:56:07.759501 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:56:07 crc kubenswrapper[4972]: I1121 09:56:07.759607 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:56:07 crc kubenswrapper[4972]: I1121 09:56:07.940646 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tj4g9"] Nov 21 09:56:07 crc kubenswrapper[4972]: W1121 09:56:07.953133 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ce1c1d4_10f3_4105_bca2_5e0589cd1bb8.slice/crio-d66f3dad72f47f3b933d6758edc54d1b1eed5e40c462e853199963356437b22e WatchSource:0}: Error finding container d66f3dad72f47f3b933d6758edc54d1b1eed5e40c462e853199963356437b22e: Status 404 returned error can't find the container with id d66f3dad72f47f3b933d6758edc54d1b1eed5e40c462e853199963356437b22e Nov 21 09:56:07 crc kubenswrapper[4972]: I1121 09:56:07.994797 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rttlt"] Nov 21 09:56:08 crc kubenswrapper[4972]: W1121 09:56:08.003544 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod561f007e_bddb_4c6f_83a3_d9052a392b37.slice/crio-4e51c398b8fe7badc8e4b146e5170bf9ee12ddc44b2f4189346ade96e5f443fd WatchSource:0}: Error finding container 4e51c398b8fe7badc8e4b146e5170bf9ee12ddc44b2f4189346ade96e5f443fd: Status 404 returned error can't find the container with id 4e51c398b8fe7badc8e4b146e5170bf9ee12ddc44b2f4189346ade96e5f443fd Nov 21 09:56:08 crc kubenswrapper[4972]: I1121 09:56:08.625812 4972 generic.go:334] "Generic (PLEG): container finished" podID="7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8" containerID="b02ce10bc57604f8d2ea3e100ca87819b437fc60286eba31057a75d531b47edf" exitCode=0 Nov 21 09:56:08 crc kubenswrapper[4972]: I1121 09:56:08.625952 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tj4g9" event={"ID":"7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8","Type":"ContainerDied","Data":"b02ce10bc57604f8d2ea3e100ca87819b437fc60286eba31057a75d531b47edf"} Nov 21 09:56:08 crc kubenswrapper[4972]: I1121 09:56:08.625995 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tj4g9" event={"ID":"7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8","Type":"ContainerStarted","Data":"d66f3dad72f47f3b933d6758edc54d1b1eed5e40c462e853199963356437b22e"} Nov 21 09:56:08 crc kubenswrapper[4972]: I1121 09:56:08.629435 4972 generic.go:334] "Generic (PLEG): container finished" podID="561f007e-bddb-4c6f-83a3-d9052a392b37" containerID="9b983f4231335def6ac2eaea1f95dce0966ff5e336e598d0652119c40b759dc4" exitCode=0 Nov 21 09:56:08 crc kubenswrapper[4972]: I1121 09:56:08.629509 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rttlt" event={"ID":"561f007e-bddb-4c6f-83a3-d9052a392b37","Type":"ContainerDied","Data":"9b983f4231335def6ac2eaea1f95dce0966ff5e336e598d0652119c40b759dc4"} Nov 21 09:56:08 crc kubenswrapper[4972]: I1121 09:56:08.629569 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rttlt" event={"ID":"561f007e-bddb-4c6f-83a3-d9052a392b37","Type":"ContainerStarted","Data":"4e51c398b8fe7badc8e4b146e5170bf9ee12ddc44b2f4189346ade96e5f443fd"} Nov 21 09:56:08 crc kubenswrapper[4972]: I1121 09:56:08.634249 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggnjc" 
event={"ID":"586db52b-8a9d-4052-a47c-bea7e440b977","Type":"ContainerStarted","Data":"57057652bf5b08dc825b1d7a1a6726686f8c3fe5b6df1a2aeb0f8792367f49f0"} Nov 21 09:56:08 crc kubenswrapper[4972]: I1121 09:56:08.690962 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m4qfs" Nov 21 09:56:08 crc kubenswrapper[4972]: I1121 09:56:08.759191 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:56:08 crc kubenswrapper[4972]: I1121 09:56:08.759912 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:56:08 crc kubenswrapper[4972]: I1121 09:56:08.950184 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tp9g9"] Nov 21 09:56:08 crc kubenswrapper[4972]: W1121 09:56:08.958734 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4909061_0974_4269_bdb7_5617d42e01af.slice/crio-2f212f4e7c702c2357d985795c4d8d9b5a31092bc8d7ef1914cccfba8e832190 WatchSource:0}: Error finding container 2f212f4e7c702c2357d985795c4d8d9b5a31092bc8d7ef1914cccfba8e832190: Status 404 returned error can't find the container with id 2f212f4e7c702c2357d985795c4d8d9b5a31092bc8d7ef1914cccfba8e832190 Nov 21 09:56:09 crc kubenswrapper[4972]: I1121 09:56:09.640457 4972 generic.go:334] "Generic (PLEG): container finished" podID="586db52b-8a9d-4052-a47c-bea7e440b977" containerID="57057652bf5b08dc825b1d7a1a6726686f8c3fe5b6df1a2aeb0f8792367f49f0" exitCode=0 Nov 21 09:56:09 crc kubenswrapper[4972]: I1121 09:56:09.640552 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggnjc" event={"ID":"586db52b-8a9d-4052-a47c-bea7e440b977","Type":"ContainerDied","Data":"57057652bf5b08dc825b1d7a1a6726686f8c3fe5b6df1a2aeb0f8792367f49f0"} Nov 21 09:56:09 crc kubenswrapper[4972]: I1121 09:56:09.642733 4972 generic.go:334] "Generic (PLEG): container finished" podID="7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8" containerID="a19892c25976d78cd137e56bbfc9e69477d34aba9ffe9654afb15aaf9ecb64fc" exitCode=0 Nov 21 09:56:09 crc kubenswrapper[4972]: I1121 09:56:09.642773 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tj4g9" event={"ID":"7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8","Type":"ContainerDied","Data":"a19892c25976d78cd137e56bbfc9e69477d34aba9ffe9654afb15aaf9ecb64fc"} Nov 21 09:56:09 crc kubenswrapper[4972]: I1121 09:56:09.645895 4972 generic.go:334] "Generic (PLEG): container finished" podID="e4909061-0974-4269-bdb7-5617d42e01af" containerID="44afc4eef3103269cf45d7edf7c440d6cb57d2d1d59765d42c2f2100e32db4fc" exitCode=0 Nov 21 09:56:09 crc kubenswrapper[4972]: I1121 09:56:09.645966 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp9g9" event={"ID":"e4909061-0974-4269-bdb7-5617d42e01af","Type":"ContainerDied","Data":"44afc4eef3103269cf45d7edf7c440d6cb57d2d1d59765d42c2f2100e32db4fc"} Nov 21 09:56:09 crc kubenswrapper[4972]: I1121 09:56:09.646002 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp9g9" event={"ID":"e4909061-0974-4269-bdb7-5617d42e01af","Type":"ContainerStarted","Data":"2f212f4e7c702c2357d985795c4d8d9b5a31092bc8d7ef1914cccfba8e832190"} Nov 21 09:56:09 crc kubenswrapper[4972]: I1121 09:56:09.655085 4972 
generic.go:334] "Generic (PLEG): container finished" podID="561f007e-bddb-4c6f-83a3-d9052a392b37" containerID="eaa4054b6eaddf5fadeb51c6e2b4e14ed6cb2373f83aff013c149445101af7b1" exitCode=0 Nov 21 09:56:09 crc kubenswrapper[4972]: I1121 09:56:09.656192 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rttlt" event={"ID":"561f007e-bddb-4c6f-83a3-d9052a392b37","Type":"ContainerDied","Data":"eaa4054b6eaddf5fadeb51c6e2b4e14ed6cb2373f83aff013c149445101af7b1"} Nov 21 09:56:09 crc kubenswrapper[4972]: I1121 09:56:09.760289 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:56:09 crc kubenswrapper[4972]: I1121 09:56:09.760928 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:56:10 crc kubenswrapper[4972]: I1121 09:56:10.178127 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-npssb"] Nov 21 09:56:10 crc kubenswrapper[4972]: I1121 09:56:10.662563 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rttlt" event={"ID":"561f007e-bddb-4c6f-83a3-d9052a392b37","Type":"ContainerStarted","Data":"0c6621f3dfd949ae5837eba592e917f39ecc38691e6ceabc113ff3597473511d"} Nov 21 09:56:10 crc kubenswrapper[4972]: I1121 09:56:10.664466 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggnjc" event={"ID":"586db52b-8a9d-4052-a47c-bea7e440b977","Type":"ContainerStarted","Data":"fe1d5d2919f859c0de5641cf4964065bee97d51a122c157839efa09d86dabc7c"} Nov 21 09:56:10 crc kubenswrapper[4972]: I1121 09:56:10.666032 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tj4g9" event={"ID":"7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8","Type":"ContainerStarted","Data":"48d3366ddec5bf270741fa16e86c19043160073d7712fe613ff5a08007d2f0a4"} Nov 21 09:56:10 crc kubenswrapper[4972]: I1121 09:56:10.667230 4972 generic.go:334] "Generic (PLEG): container finished" podID="ad421b56-53fa-4ad2-9233-eb8e2d72be57" containerID="27ab6a653d3711c9a9fa1d4befc39f2511aac3649025dd5b8b0c856d4587774f" exitCode=0 Nov 21 09:56:10 crc kubenswrapper[4972]: I1121 09:56:10.667294 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npssb" event={"ID":"ad421b56-53fa-4ad2-9233-eb8e2d72be57","Type":"ContainerDied","Data":"27ab6a653d3711c9a9fa1d4befc39f2511aac3649025dd5b8b0c856d4587774f"} Nov 21 09:56:10 crc kubenswrapper[4972]: I1121 09:56:10.668136 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npssb" event={"ID":"ad421b56-53fa-4ad2-9233-eb8e2d72be57","Type":"ContainerStarted","Data":"69f71dc141ff7a0bd6e131b72d0fca0d4a435bb7950a121639584d685d1e7c47"} Nov 21 09:56:10 crc kubenswrapper[4972]: I1121 09:56:10.675403 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp9g9" event={"ID":"e4909061-0974-4269-bdb7-5617d42e01af","Type":"ContainerStarted","Data":"f81c37aae227b57b0d7008ee8a671b0aba029bb4e5f8dd401ca64a2ab076188e"} Nov 21 09:56:10 crc kubenswrapper[4972]: I1121 09:56:10.701943 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rttlt" podStartSLOduration=15.201080729 podStartE2EDuration="16.701929975s" podCreationTimestamp="2025-11-21 
09:55:54 +0000 UTC" firstStartedPulling="2025-11-21 09:56:08.631799748 +0000 UTC m=+913.740942246" lastFinishedPulling="2025-11-21 09:56:10.132648984 +0000 UTC m=+915.241791492" observedRunningTime="2025-11-21 09:56:10.699336736 +0000 UTC m=+915.808479254" watchObservedRunningTime="2025-11-21 09:56:10.701929975 +0000 UTC m=+915.811072473" Nov 21 09:56:10 crc kubenswrapper[4972]: I1121 09:56:10.738891 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tj4g9" podStartSLOduration=17.341626852 podStartE2EDuration="18.738872922s" podCreationTimestamp="2025-11-21 09:55:52 +0000 UTC" firstStartedPulling="2025-11-21 09:56:08.627793791 +0000 UTC m=+913.736936299" lastFinishedPulling="2025-11-21 09:56:10.025039871 +0000 UTC m=+915.134182369" observedRunningTime="2025-11-21 09:56:10.736626782 +0000 UTC m=+915.845769280" watchObservedRunningTime="2025-11-21 09:56:10.738872922 +0000 UTC m=+915.848015410" Nov 21 09:56:10 crc kubenswrapper[4972]: I1121 09:56:10.757886 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ggnjc" podStartSLOduration=16.311605909 podStartE2EDuration="18.757869099s" podCreationTimestamp="2025-11-21 09:55:52 +0000 UTC" firstStartedPulling="2025-11-21 09:56:07.620050343 +0000 UTC m=+912.729192881" lastFinishedPulling="2025-11-21 09:56:10.066313553 +0000 UTC m=+915.175456071" observedRunningTime="2025-11-21 09:56:10.757505889 +0000 UTC m=+915.866648407" watchObservedRunningTime="2025-11-21 09:56:10.757869099 +0000 UTC m=+915.867011597" Nov 21 09:56:11 crc kubenswrapper[4972]: I1121 09:56:11.681134 4972 generic.go:334] "Generic (PLEG): container finished" podID="ad421b56-53fa-4ad2-9233-eb8e2d72be57" containerID="5ca26177fe1524348c54e8be338d8198dc6dd2f1d3f688f8d5d326c4d0ad0085" exitCode=0 Nov 21 09:56:11 crc kubenswrapper[4972]: I1121 09:56:11.681191 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npssb" event={"ID":"ad421b56-53fa-4ad2-9233-eb8e2d72be57","Type":"ContainerDied","Data":"5ca26177fe1524348c54e8be338d8198dc6dd2f1d3f688f8d5d326c4d0ad0085"} Nov 21 09:56:11 crc kubenswrapper[4972]: I1121 09:56:11.684354 4972 generic.go:334] "Generic (PLEG): container finished" podID="e4909061-0974-4269-bdb7-5617d42e01af" containerID="f81c37aae227b57b0d7008ee8a671b0aba029bb4e5f8dd401ca64a2ab076188e" exitCode=0 Nov 21 09:56:11 crc kubenswrapper[4972]: I1121 09:56:11.684428 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp9g9" event={"ID":"e4909061-0974-4269-bdb7-5617d42e01af","Type":"ContainerDied","Data":"f81c37aae227b57b0d7008ee8a671b0aba029bb4e5f8dd401ca64a2ab076188e"} Nov 21 09:56:12 crc kubenswrapper[4972]: I1121 09:56:12.565770 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:56:12 crc kubenswrapper[4972]: I1121 09:56:12.566178 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:56:12 crc kubenswrapper[4972]: I1121 09:56:12.694930 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npssb" event={"ID":"ad421b56-53fa-4ad2-9233-eb8e2d72be57","Type":"ContainerStarted","Data":"83c67644502ce19d54478a37737d508376785bcebc1aec6a5cd2f67119daca33"} Nov 21 09:56:12 crc kubenswrapper[4972]: I1121 09:56:12.698653 4972 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-tp9g9" event={"ID":"e4909061-0974-4269-bdb7-5617d42e01af","Type":"ContainerStarted","Data":"e3b3258b16788ac0fc53718b3dc66eaceae4ec9540aff15a9f4356bbc76b7376"} Nov 21 09:56:12 crc kubenswrapper[4972]: I1121 09:56:12.716283 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-npssb" podStartSLOduration=17.289762432 podStartE2EDuration="18.716268202s" podCreationTimestamp="2025-11-21 09:55:54 +0000 UTC" firstStartedPulling="2025-11-21 09:56:10.673116016 +0000 UTC m=+915.782258514" lastFinishedPulling="2025-11-21 09:56:12.099621786 +0000 UTC m=+917.208764284" observedRunningTime="2025-11-21 09:56:12.714488515 +0000 UTC m=+917.823631013" watchObservedRunningTime="2025-11-21 09:56:12.716268202 +0000 UTC m=+917.825410700" Nov 21 09:56:12 crc kubenswrapper[4972]: I1121 09:56:12.738182 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tp9g9" podStartSLOduration=13.33059147 podStartE2EDuration="15.738160587s" podCreationTimestamp="2025-11-21 09:55:57 +0000 UTC" firstStartedPulling="2025-11-21 09:56:09.651601989 +0000 UTC m=+914.760744487" lastFinishedPulling="2025-11-21 09:56:12.059171116 +0000 UTC m=+917.168313604" observedRunningTime="2025-11-21 09:56:12.735237759 +0000 UTC m=+917.844380267" watchObservedRunningTime="2025-11-21 09:56:12.738160587 +0000 UTC m=+917.847303085" Nov 21 09:56:12 crc kubenswrapper[4972]: I1121 09:56:12.875745 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:56:12 crc kubenswrapper[4972]: I1121 09:56:12.875817 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:56:12 crc kubenswrapper[4972]: I1121 09:56:12.941461 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:56:13 crc kubenswrapper[4972]: I1121 09:56:13.607238 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ggnjc" podUID="586db52b-8a9d-4052-a47c-bea7e440b977" containerName="registry-server" probeResult="failure" output=< Nov 21 09:56:13 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 09:56:13 crc kubenswrapper[4972]: > Nov 21 09:56:14 crc kubenswrapper[4972]: I1121 09:56:14.992094 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:56:14 crc kubenswrapper[4972]: I1121 09:56:14.992364 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:56:15 crc kubenswrapper[4972]: I1121 09:56:15.034390 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:56:15 crc kubenswrapper[4972]: I1121 09:56:15.202378 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:56:15 crc kubenswrapper[4972]: I1121 09:56:15.202429 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:56:15 crc kubenswrapper[4972]: I1121 09:56:15.240167 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:56:15 crc kubenswrapper[4972]: I1121 09:56:15.752942 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rttlt" Nov 21 09:56:17 crc kubenswrapper[4972]: I1121 09:56:17.257095 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gkmfx" Nov 21 09:56:17 crc kubenswrapper[4972]: I1121 09:56:17.410012 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:56:17 crc kubenswrapper[4972]: I1121 09:56:17.410265 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:56:18 crc kubenswrapper[4972]: I1121 09:56:18.452920 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tp9g9" podUID="e4909061-0974-4269-bdb7-5617d42e01af" containerName="registry-server" probeResult="failure" output=< Nov 21 09:56:18 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 09:56:18 crc kubenswrapper[4972]: > Nov 21 09:56:22 crc kubenswrapper[4972]: I1121 09:56:22.610779 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:56:22 crc kubenswrapper[4972]: I1121 09:56:22.657240 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:56:22 crc kubenswrapper[4972]: I1121 09:56:22.935077 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:56:24 crc kubenswrapper[4972]: I1121 09:56:24.873481 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ggnjc"] Nov 21 09:56:24 crc kubenswrapper[4972]: I1121 09:56:24.874385 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ggnjc" podUID="586db52b-8a9d-4052-a47c-bea7e440b977" containerName="registry-server" containerID="cri-o://fe1d5d2919f859c0de5641cf4964065bee97d51a122c157839efa09d86dabc7c" gracePeriod=2 Nov 21 09:56:25 crc kubenswrapper[4972]: I1121 09:56:25.036450 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-npssb" Nov 21 09:56:25 crc kubenswrapper[4972]: I1121 09:56:25.771923 4972 generic.go:334] "Generic (PLEG): container finished" podID="586db52b-8a9d-4052-a47c-bea7e440b977" containerID="fe1d5d2919f859c0de5641cf4964065bee97d51a122c157839efa09d86dabc7c" exitCode=0 Nov 21 09:56:25 crc kubenswrapper[4972]: I1121 09:56:25.771965 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggnjc" event={"ID":"586db52b-8a9d-4052-a47c-bea7e440b977","Type":"ContainerDied","Data":"fe1d5d2919f859c0de5641cf4964065bee97d51a122c157839efa09d86dabc7c"} Nov 21 09:56:25 crc kubenswrapper[4972]: I1121 09:56:25.771996 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggnjc" event={"ID":"586db52b-8a9d-4052-a47c-bea7e440b977","Type":"ContainerDied","Data":"6a438a4410b2a4d462ece31b15a44c2e7b322005805f2765820ceec75abbd7b6"} Nov 21 09:56:25 crc kubenswrapper[4972]: I1121 09:56:25.772006 4972 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="6a438a4410b2a4d462ece31b15a44c2e7b322005805f2765820ceec75abbd7b6" Nov 21 09:56:25 crc kubenswrapper[4972]: I1121 09:56:25.795139 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:56:25 crc kubenswrapper[4972]: I1121 09:56:25.929736 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/586db52b-8a9d-4052-a47c-bea7e440b977-utilities\") pod \"586db52b-8a9d-4052-a47c-bea7e440b977\" (UID: \"586db52b-8a9d-4052-a47c-bea7e440b977\") " Nov 21 09:56:25 crc kubenswrapper[4972]: I1121 09:56:25.929784 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlfpd\" (UniqueName: \"kubernetes.io/projected/586db52b-8a9d-4052-a47c-bea7e440b977-kube-api-access-vlfpd\") pod \"586db52b-8a9d-4052-a47c-bea7e440b977\" (UID: \"586db52b-8a9d-4052-a47c-bea7e440b977\") " Nov 21 09:56:25 crc kubenswrapper[4972]: I1121 09:56:25.929869 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/586db52b-8a9d-4052-a47c-bea7e440b977-catalog-content\") pod \"586db52b-8a9d-4052-a47c-bea7e440b977\" (UID: \"586db52b-8a9d-4052-a47c-bea7e440b977\") " Nov 21 09:56:25 crc kubenswrapper[4972]: I1121 09:56:25.930580 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/586db52b-8a9d-4052-a47c-bea7e440b977-utilities" (OuterVolumeSpecName: "utilities") pod "586db52b-8a9d-4052-a47c-bea7e440b977" (UID: "586db52b-8a9d-4052-a47c-bea7e440b977"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:56:25 crc kubenswrapper[4972]: I1121 09:56:25.935178 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/586db52b-8a9d-4052-a47c-bea7e440b977-kube-api-access-vlfpd" (OuterVolumeSpecName: "kube-api-access-vlfpd") pod "586db52b-8a9d-4052-a47c-bea7e440b977" (UID: "586db52b-8a9d-4052-a47c-bea7e440b977"). InnerVolumeSpecName "kube-api-access-vlfpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:56:26 crc kubenswrapper[4972]: I1121 09:56:26.012334 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/586db52b-8a9d-4052-a47c-bea7e440b977-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "586db52b-8a9d-4052-a47c-bea7e440b977" (UID: "586db52b-8a9d-4052-a47c-bea7e440b977"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:56:26 crc kubenswrapper[4972]: I1121 09:56:26.031423 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/586db52b-8a9d-4052-a47c-bea7e440b977-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 09:56:26 crc kubenswrapper[4972]: I1121 09:56:26.031465 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/586db52b-8a9d-4052-a47c-bea7e440b977-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 09:56:26 crc kubenswrapper[4972]: I1121 09:56:26.031480 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlfpd\" (UniqueName: \"kubernetes.io/projected/586db52b-8a9d-4052-a47c-bea7e440b977-kube-api-access-vlfpd\") on node \"crc\" DevicePath \"\"" Nov 21 09:56:26 crc kubenswrapper[4972]: I1121 09:56:26.775704 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ggnjc" Nov 21 09:56:26 crc kubenswrapper[4972]: I1121 09:56:26.808965 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ggnjc"] Nov 21 09:56:26 crc kubenswrapper[4972]: I1121 09:56:26.812407 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ggnjc"] Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.276035 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tj4g9"] Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.276394 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tj4g9" podUID="7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8" containerName="registry-server" containerID="cri-o://48d3366ddec5bf270741fa16e86c19043160073d7712fe613ff5a08007d2f0a4" gracePeriod=2 Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.453715 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.514908 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tp9g9" Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.602245 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.750796 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7q8b\" (UniqueName: \"kubernetes.io/projected/7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8-kube-api-access-q7q8b\") pod \"7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8\" (UID: \"7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8\") " Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.750934 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8-utilities\") pod \"7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8\" (UID: \"7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8\") " Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.750968 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8-catalog-content\") pod \"7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8\" (UID: \"7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8\") " Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.752049 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8-utilities" (OuterVolumeSpecName: "utilities") pod "7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8" (UID: "7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.763120 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8-kube-api-access-q7q8b" (OuterVolumeSpecName: "kube-api-access-q7q8b") pod "7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8" (UID: "7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8"). InnerVolumeSpecName "kube-api-access-q7q8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.766723 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8" (UID: "7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.768314 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="586db52b-8a9d-4052-a47c-bea7e440b977" path="/var/lib/kubelet/pods/586db52b-8a9d-4052-a47c-bea7e440b977/volumes" Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.783908 4972 generic.go:334] "Generic (PLEG): container finished" podID="7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8" containerID="48d3366ddec5bf270741fa16e86c19043160073d7712fe613ff5a08007d2f0a4" exitCode=0 Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.784021 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tj4g9" event={"ID":"7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8","Type":"ContainerDied","Data":"48d3366ddec5bf270741fa16e86c19043160073d7712fe613ff5a08007d2f0a4"} Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.784039 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tj4g9" Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.784079 4972 scope.go:117] "RemoveContainer" containerID="48d3366ddec5bf270741fa16e86c19043160073d7712fe613ff5a08007d2f0a4" Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.784066 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tj4g9" event={"ID":"7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8","Type":"ContainerDied","Data":"d66f3dad72f47f3b933d6758edc54d1b1eed5e40c462e853199963356437b22e"} Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.805251 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tj4g9"] Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.807216 4972 scope.go:117] "RemoveContainer" containerID="a19892c25976d78cd137e56bbfc9e69477d34aba9ffe9654afb15aaf9ecb64fc" Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.808994 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tj4g9"] Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.825363 4972 scope.go:117] "RemoveContainer" containerID="b02ce10bc57604f8d2ea3e100ca87819b437fc60286eba31057a75d531b47edf" Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.850131 4972 scope.go:117] "RemoveContainer" containerID="48d3366ddec5bf270741fa16e86c19043160073d7712fe613ff5a08007d2f0a4" Nov 21 09:56:27 crc kubenswrapper[4972]: E1121 09:56:27.850602 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48d3366ddec5bf270741fa16e86c19043160073d7712fe613ff5a08007d2f0a4\": container with ID starting with 48d3366ddec5bf270741fa16e86c19043160073d7712fe613ff5a08007d2f0a4 not found: ID does not exist" containerID="48d3366ddec5bf270741fa16e86c19043160073d7712fe613ff5a08007d2f0a4" Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.850645 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48d3366ddec5bf270741fa16e86c19043160073d7712fe613ff5a08007d2f0a4"} err="failed to get container status \"48d3366ddec5bf270741fa16e86c19043160073d7712fe613ff5a08007d2f0a4\": rpc error: code = NotFound desc = could not find container \"48d3366ddec5bf270741fa16e86c19043160073d7712fe613ff5a08007d2f0a4\": container with ID starting with 48d3366ddec5bf270741fa16e86c19043160073d7712fe613ff5a08007d2f0a4 not found: ID does not exist" Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.850672 4972 scope.go:117] "RemoveContainer" containerID="a19892c25976d78cd137e56bbfc9e69477d34aba9ffe9654afb15aaf9ecb64fc" Nov 21 09:56:27 crc kubenswrapper[4972]: E1121 09:56:27.851073 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a19892c25976d78cd137e56bbfc9e69477d34aba9ffe9654afb15aaf9ecb64fc\": container with ID starting with a19892c25976d78cd137e56bbfc9e69477d34aba9ffe9654afb15aaf9ecb64fc not found: ID does not exist" containerID="a19892c25976d78cd137e56bbfc9e69477d34aba9ffe9654afb15aaf9ecb64fc" Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.851118 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a19892c25976d78cd137e56bbfc9e69477d34aba9ffe9654afb15aaf9ecb64fc"} err="failed to get container status \"a19892c25976d78cd137e56bbfc9e69477d34aba9ffe9654afb15aaf9ecb64fc\": rpc error: code = NotFound desc = could not find 
container \"a19892c25976d78cd137e56bbfc9e69477d34aba9ffe9654afb15aaf9ecb64fc\": container with ID starting with a19892c25976d78cd137e56bbfc9e69477d34aba9ffe9654afb15aaf9ecb64fc not found: ID does not exist" Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.851134 4972 scope.go:117] "RemoveContainer" containerID="b02ce10bc57604f8d2ea3e100ca87819b437fc60286eba31057a75d531b47edf" Nov 21 09:56:27 crc kubenswrapper[4972]: E1121 09:56:27.851442 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b02ce10bc57604f8d2ea3e100ca87819b437fc60286eba31057a75d531b47edf\": container with ID starting with b02ce10bc57604f8d2ea3e100ca87819b437fc60286eba31057a75d531b47edf not found: ID does not exist" containerID="b02ce10bc57604f8d2ea3e100ca87819b437fc60286eba31057a75d531b47edf" Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.851472 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b02ce10bc57604f8d2ea3e100ca87819b437fc60286eba31057a75d531b47edf"} err="failed to get container status \"b02ce10bc57604f8d2ea3e100ca87819b437fc60286eba31057a75d531b47edf\": rpc error: code = NotFound desc = could not find container \"b02ce10bc57604f8d2ea3e100ca87819b437fc60286eba31057a75d531b47edf\": container with ID starting with b02ce10bc57604f8d2ea3e100ca87819b437fc60286eba31057a75d531b47edf not found: ID does not exist" Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.851823 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7q8b\" (UniqueName: \"kubernetes.io/projected/7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8-kube-api-access-q7q8b\") on node \"crc\" DevicePath \"\"" Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.851899 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 09:56:27 crc kubenswrapper[4972]: I1121 09:56:27.851909 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 09:56:29 crc kubenswrapper[4972]: I1121 09:56:29.767393 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8" path="/var/lib/kubelet/pods/7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8/volumes" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.182304 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8"] Nov 21 09:56:41 crc kubenswrapper[4972]: E1121 09:56:41.182881 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8" containerName="extract-utilities" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.182896 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8" containerName="extract-utilities" Nov 21 09:56:41 crc kubenswrapper[4972]: E1121 09:56:41.182905 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8" containerName="extract-content" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.182912 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8" containerName="extract-content" Nov 21 09:56:41 crc kubenswrapper[4972]: 
E1121 09:56:41.182923 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586db52b-8a9d-4052-a47c-bea7e440b977" containerName="registry-server" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.182931 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="586db52b-8a9d-4052-a47c-bea7e440b977" containerName="registry-server" Nov 21 09:56:41 crc kubenswrapper[4972]: E1121 09:56:41.182942 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586db52b-8a9d-4052-a47c-bea7e440b977" containerName="extract-content" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.182949 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="586db52b-8a9d-4052-a47c-bea7e440b977" containerName="extract-content" Nov 21 09:56:41 crc kubenswrapper[4972]: E1121 09:56:41.182959 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8" containerName="registry-server" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.182965 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8" containerName="registry-server" Nov 21 09:56:41 crc kubenswrapper[4972]: E1121 09:56:41.182978 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586db52b-8a9d-4052-a47c-bea7e440b977" containerName="extract-utilities" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.182984 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="586db52b-8a9d-4052-a47c-bea7e440b977" containerName="extract-utilities" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.183081 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce1c1d4-10f3-4105-bca2-5e0589cd1bb8" containerName="registry-server" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.183096 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="586db52b-8a9d-4052-a47c-bea7e440b977" containerName="registry-server" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.184450 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.189220 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.200794 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8"] Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.320759 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93746499-c9a7-415b-9525-d4f061c35e89-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8\" (UID: \"93746499-c9a7-415b-9525-d4f061c35e89\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.320941 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kbvs\" (UniqueName: \"kubernetes.io/projected/93746499-c9a7-415b-9525-d4f061c35e89-kube-api-access-8kbvs\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8\" (UID: \"93746499-c9a7-415b-9525-d4f061c35e89\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.321000 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93746499-c9a7-415b-9525-d4f061c35e89-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8\" (UID: \"93746499-c9a7-415b-9525-d4f061c35e89\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.422270 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93746499-c9a7-415b-9525-d4f061c35e89-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8\" (UID: \"93746499-c9a7-415b-9525-d4f061c35e89\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.422631 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kbvs\" (UniqueName: \"kubernetes.io/projected/93746499-c9a7-415b-9525-d4f061c35e89-kube-api-access-8kbvs\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8\" (UID: \"93746499-c9a7-415b-9525-d4f061c35e89\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.422816 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93746499-c9a7-415b-9525-d4f061c35e89-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8\" (UID: \"93746499-c9a7-415b-9525-d4f061c35e89\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.423469 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/93746499-c9a7-415b-9525-d4f061c35e89-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8\" (UID: \"93746499-c9a7-415b-9525-d4f061c35e89\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.424230 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93746499-c9a7-415b-9525-d4f061c35e89-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8\" (UID: \"93746499-c9a7-415b-9525-d4f061c35e89\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.457018 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kbvs\" (UniqueName: \"kubernetes.io/projected/93746499-c9a7-415b-9525-d4f061c35e89-kube-api-access-8kbvs\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8\" (UID: \"93746499-c9a7-415b-9525-d4f061c35e89\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.501199 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8" Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.667498 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8"] Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.863323 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8" event={"ID":"93746499-c9a7-415b-9525-d4f061c35e89","Type":"ContainerStarted","Data":"3d319023d708c6e19ed8912a717f2735cc2431d991c4bf727716e8b460d843f5"} Nov 21 09:56:41 crc kubenswrapper[4972]: I1121 09:56:41.863368 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8" event={"ID":"93746499-c9a7-415b-9525-d4f061c35e89","Type":"ContainerStarted","Data":"b57440386e4f60bb630932fa9ff441d1b5c8b6bfa51149f2e1835b43730b5aa0"} Nov 21 09:56:42 crc kubenswrapper[4972]: I1121 09:56:42.869598 4972 generic.go:334] "Generic (PLEG): container finished" podID="93746499-c9a7-415b-9525-d4f061c35e89" containerID="3d319023d708c6e19ed8912a717f2735cc2431d991c4bf727716e8b460d843f5" exitCode=0 Nov 21 09:56:42 crc kubenswrapper[4972]: I1121 09:56:42.869640 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8" event={"ID":"93746499-c9a7-415b-9525-d4f061c35e89","Type":"ContainerDied","Data":"3d319023d708c6e19ed8912a717f2735cc2431d991c4bf727716e8b460d843f5"} Nov 21 09:56:44 crc kubenswrapper[4972]: I1121 09:56:44.885001 4972 generic.go:334] "Generic (PLEG): container finished" podID="93746499-c9a7-415b-9525-d4f061c35e89" containerID="d2497eb42a36448a50c984562c7e07ff84adb786a92ddf256d296ddd33ecb9b7" exitCode=0 Nov 21 09:56:44 crc kubenswrapper[4972]: I1121 09:56:44.885082 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8" 
event={"ID":"93746499-c9a7-415b-9525-d4f061c35e89","Type":"ContainerDied","Data":"d2497eb42a36448a50c984562c7e07ff84adb786a92ddf256d296ddd33ecb9b7"} Nov 21 09:56:45 crc kubenswrapper[4972]: I1121 09:56:45.898600 4972 generic.go:334] "Generic (PLEG): container finished" podID="93746499-c9a7-415b-9525-d4f061c35e89" containerID="71306c30a8b29655c1d3db358640058bfe55164cec9458499d195b9b04180a3c" exitCode=0 Nov 21 09:56:45 crc kubenswrapper[4972]: I1121 09:56:45.898679 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8" event={"ID":"93746499-c9a7-415b-9525-d4f061c35e89","Type":"ContainerDied","Data":"71306c30a8b29655c1d3db358640058bfe55164cec9458499d195b9b04180a3c"} Nov 21 09:56:47 crc kubenswrapper[4972]: I1121 09:56:47.188742 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8" Nov 21 09:56:47 crc kubenswrapper[4972]: I1121 09:56:47.297082 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93746499-c9a7-415b-9525-d4f061c35e89-util\") pod \"93746499-c9a7-415b-9525-d4f061c35e89\" (UID: \"93746499-c9a7-415b-9525-d4f061c35e89\") " Nov 21 09:56:47 crc kubenswrapper[4972]: I1121 09:56:47.297198 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93746499-c9a7-415b-9525-d4f061c35e89-bundle\") pod \"93746499-c9a7-415b-9525-d4f061c35e89\" (UID: \"93746499-c9a7-415b-9525-d4f061c35e89\") " Nov 21 09:56:47 crc kubenswrapper[4972]: I1121 09:56:47.297238 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kbvs\" (UniqueName: \"kubernetes.io/projected/93746499-c9a7-415b-9525-d4f061c35e89-kube-api-access-8kbvs\") pod \"93746499-c9a7-415b-9525-d4f061c35e89\" (UID: \"93746499-c9a7-415b-9525-d4f061c35e89\") " Nov 21 09:56:47 crc kubenswrapper[4972]: I1121 09:56:47.298780 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93746499-c9a7-415b-9525-d4f061c35e89-bundle" (OuterVolumeSpecName: "bundle") pod "93746499-c9a7-415b-9525-d4f061c35e89" (UID: "93746499-c9a7-415b-9525-d4f061c35e89"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:56:47 crc kubenswrapper[4972]: I1121 09:56:47.307108 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93746499-c9a7-415b-9525-d4f061c35e89-kube-api-access-8kbvs" (OuterVolumeSpecName: "kube-api-access-8kbvs") pod "93746499-c9a7-415b-9525-d4f061c35e89" (UID: "93746499-c9a7-415b-9525-d4f061c35e89"). InnerVolumeSpecName "kube-api-access-8kbvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:56:47 crc kubenswrapper[4972]: I1121 09:56:47.398420 4972 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93746499-c9a7-415b-9525-d4f061c35e89-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 09:56:47 crc kubenswrapper[4972]: I1121 09:56:47.398458 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kbvs\" (UniqueName: \"kubernetes.io/projected/93746499-c9a7-415b-9525-d4f061c35e89-kube-api-access-8kbvs\") on node \"crc\" DevicePath \"\"" Nov 21 09:56:47 crc kubenswrapper[4972]: I1121 09:56:47.593437 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93746499-c9a7-415b-9525-d4f061c35e89-util" (OuterVolumeSpecName: "util") pod "93746499-c9a7-415b-9525-d4f061c35e89" (UID: "93746499-c9a7-415b-9525-d4f061c35e89"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:56:47 crc kubenswrapper[4972]: I1121 09:56:47.600954 4972 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93746499-c9a7-415b-9525-d4f061c35e89-util\") on node \"crc\" DevicePath \"\"" Nov 21 09:56:47 crc kubenswrapper[4972]: I1121 09:56:47.913704 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8" Nov 21 09:56:47 crc kubenswrapper[4972]: I1121 09:56:47.913689 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8" event={"ID":"93746499-c9a7-415b-9525-d4f061c35e89","Type":"ContainerDied","Data":"b57440386e4f60bb630932fa9ff441d1b5c8b6bfa51149f2e1835b43730b5aa0"} Nov 21 09:56:47 crc kubenswrapper[4972]: I1121 09:56:47.914226 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b57440386e4f60bb630932fa9ff441d1b5c8b6bfa51149f2e1835b43730b5aa0" Nov 21 09:56:50 crc kubenswrapper[4972]: I1121 09:56:50.459804 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-m89vw"] Nov 21 09:56:50 crc kubenswrapper[4972]: E1121 09:56:50.460306 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93746499-c9a7-415b-9525-d4f061c35e89" containerName="extract" Nov 21 09:56:50 crc kubenswrapper[4972]: I1121 09:56:50.460318 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="93746499-c9a7-415b-9525-d4f061c35e89" containerName="extract" Nov 21 09:56:50 crc kubenswrapper[4972]: E1121 09:56:50.460339 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93746499-c9a7-415b-9525-d4f061c35e89" containerName="util" Nov 21 09:56:50 crc kubenswrapper[4972]: I1121 09:56:50.460345 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="93746499-c9a7-415b-9525-d4f061c35e89" containerName="util" Nov 21 09:56:50 crc kubenswrapper[4972]: E1121 09:56:50.460355 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93746499-c9a7-415b-9525-d4f061c35e89" containerName="pull" Nov 21 09:56:50 crc kubenswrapper[4972]: I1121 09:56:50.460363 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="93746499-c9a7-415b-9525-d4f061c35e89" containerName="pull" Nov 21 09:56:50 crc kubenswrapper[4972]: I1121 09:56:50.460459 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="93746499-c9a7-415b-9525-d4f061c35e89" 
containerName="extract" Nov 21 09:56:50 crc kubenswrapper[4972]: I1121 09:56:50.460869 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-m89vw" Nov 21 09:56:50 crc kubenswrapper[4972]: I1121 09:56:50.465137 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 21 09:56:50 crc kubenswrapper[4972]: I1121 09:56:50.465587 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 21 09:56:50 crc kubenswrapper[4972]: I1121 09:56:50.465668 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-6bbrs" Nov 21 09:56:50 crc kubenswrapper[4972]: I1121 09:56:50.474970 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-m89vw"] Nov 21 09:56:50 crc kubenswrapper[4972]: I1121 09:56:50.542214 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcsp6\" (UniqueName: \"kubernetes.io/projected/35df56e5-4739-474c-af19-8b79bd18c12c-kube-api-access-vcsp6\") pod \"nmstate-operator-557fdffb88-m89vw\" (UID: \"35df56e5-4739-474c-af19-8b79bd18c12c\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-m89vw" Nov 21 09:56:50 crc kubenswrapper[4972]: I1121 09:56:50.643658 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcsp6\" (UniqueName: \"kubernetes.io/projected/35df56e5-4739-474c-af19-8b79bd18c12c-kube-api-access-vcsp6\") pod \"nmstate-operator-557fdffb88-m89vw\" (UID: \"35df56e5-4739-474c-af19-8b79bd18c12c\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-m89vw" Nov 21 09:56:50 crc kubenswrapper[4972]: I1121 09:56:50.663553 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcsp6\" (UniqueName: \"kubernetes.io/projected/35df56e5-4739-474c-af19-8b79bd18c12c-kube-api-access-vcsp6\") pod \"nmstate-operator-557fdffb88-m89vw\" (UID: \"35df56e5-4739-474c-af19-8b79bd18c12c\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-m89vw" Nov 21 09:56:50 crc kubenswrapper[4972]: I1121 09:56:50.781197 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-m89vw" Nov 21 09:56:50 crc kubenswrapper[4972]: I1121 09:56:50.972816 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-m89vw"] Nov 21 09:56:51 crc kubenswrapper[4972]: I1121 09:56:51.940536 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-m89vw" event={"ID":"35df56e5-4739-474c-af19-8b79bd18c12c","Type":"ContainerStarted","Data":"1d9bdb99f26beb9bef2fb8194dcdb73d3e100687d6f91c200f44b3c87b6fe45f"} Nov 21 09:56:53 crc kubenswrapper[4972]: I1121 09:56:53.973891 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-m89vw" event={"ID":"35df56e5-4739-474c-af19-8b79bd18c12c","Type":"ContainerStarted","Data":"a9abdec3b42ae64cea4abadba7c238c0c2b83a458585a612be72844661a909fc"} Nov 21 09:56:56 crc kubenswrapper[4972]: I1121 09:56:56.178908 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 09:56:56 crc kubenswrapper[4972]: I1121 09:56:56.179312 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.488333 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-m89vw" podStartSLOduration=9.557753182 podStartE2EDuration="11.488312072s" podCreationTimestamp="2025-11-21 09:56:50 +0000 UTC" firstStartedPulling="2025-11-21 09:56:50.981550938 +0000 UTC m=+956.090693436" lastFinishedPulling="2025-11-21 09:56:52.912109828 +0000 UTC m=+958.021252326" observedRunningTime="2025-11-21 09:56:53.994970784 +0000 UTC m=+959.104113292" watchObservedRunningTime="2025-11-21 09:57:01.488312072 +0000 UTC m=+966.597454570" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.492272 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-8ddtn"] Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.493403 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8ddtn" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.495460 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-gdjsr" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.505844 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-76brj"] Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.506632 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-76brj" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.508696 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-8ddtn"] Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.509998 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.523431 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-zj289"] Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.524879 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-zj289" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.549806 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-76brj"] Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.587066 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/289a84f1-0b97-4282-8a9a-643bfb19b117-ovs-socket\") pod \"nmstate-handler-zj289\" (UID: \"289a84f1-0b97-4282-8a9a-643bfb19b117\") " pod="openshift-nmstate/nmstate-handler-zj289" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.587148 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/289a84f1-0b97-4282-8a9a-643bfb19b117-dbus-socket\") pod \"nmstate-handler-zj289\" (UID: \"289a84f1-0b97-4282-8a9a-643bfb19b117\") " pod="openshift-nmstate/nmstate-handler-zj289" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.587177 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/875c8c14-4fbd-4041-93d1-9fc99e815156-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-76brj\" (UID: \"875c8c14-4fbd-4041-93d1-9fc99e815156\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-76brj" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.587206 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/289a84f1-0b97-4282-8a9a-643bfb19b117-nmstate-lock\") pod \"nmstate-handler-zj289\" (UID: \"289a84f1-0b97-4282-8a9a-643bfb19b117\") " pod="openshift-nmstate/nmstate-handler-zj289" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.587241 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfdjd\" (UniqueName: \"kubernetes.io/projected/8db64828-f701-4320-9c0e-1d2897bdfa94-kube-api-access-rfdjd\") pod \"nmstate-metrics-5dcf9c57c5-8ddtn\" (UID: \"8db64828-f701-4320-9c0e-1d2897bdfa94\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8ddtn" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.587268 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcnjd\" (UniqueName: \"kubernetes.io/projected/875c8c14-4fbd-4041-93d1-9fc99e815156-kube-api-access-fcnjd\") pod \"nmstate-webhook-6b89b748d8-76brj\" (UID: \"875c8c14-4fbd-4041-93d1-9fc99e815156\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-76brj" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.587291 4972 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98hsq\" (UniqueName: \"kubernetes.io/projected/289a84f1-0b97-4282-8a9a-643bfb19b117-kube-api-access-98hsq\") pod \"nmstate-handler-zj289\" (UID: \"289a84f1-0b97-4282-8a9a-643bfb19b117\") " pod="openshift-nmstate/nmstate-handler-zj289" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.617065 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-j4g7f"] Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.617665 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-j4g7f" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.619218 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.620633 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-w5sd8" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.621238 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.655109 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-j4g7f"] Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.687985 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/289a84f1-0b97-4282-8a9a-643bfb19b117-ovs-socket\") pod \"nmstate-handler-zj289\" (UID: \"289a84f1-0b97-4282-8a9a-643bfb19b117\") " pod="openshift-nmstate/nmstate-handler-zj289" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.688048 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d45e5a89-dbd0-49f3-a285-a8d14e35d7de-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-j4g7f\" (UID: \"d45e5a89-dbd0-49f3-a285-a8d14e35d7de\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-j4g7f" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.688089 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/289a84f1-0b97-4282-8a9a-643bfb19b117-dbus-socket\") pod \"nmstate-handler-zj289\" (UID: \"289a84f1-0b97-4282-8a9a-643bfb19b117\") " pod="openshift-nmstate/nmstate-handler-zj289" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.688116 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/875c8c14-4fbd-4041-93d1-9fc99e815156-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-76brj\" (UID: \"875c8c14-4fbd-4041-93d1-9fc99e815156\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-76brj" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.688149 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/289a84f1-0b97-4282-8a9a-643bfb19b117-nmstate-lock\") pod \"nmstate-handler-zj289\" (UID: \"289a84f1-0b97-4282-8a9a-643bfb19b117\") " pod="openshift-nmstate/nmstate-handler-zj289" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.688206 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfdjd\" (UniqueName: 
\"kubernetes.io/projected/8db64828-f701-4320-9c0e-1d2897bdfa94-kube-api-access-rfdjd\") pod \"nmstate-metrics-5dcf9c57c5-8ddtn\" (UID: \"8db64828-f701-4320-9c0e-1d2897bdfa94\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8ddtn" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.688237 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcnjd\" (UniqueName: \"kubernetes.io/projected/875c8c14-4fbd-4041-93d1-9fc99e815156-kube-api-access-fcnjd\") pod \"nmstate-webhook-6b89b748d8-76brj\" (UID: \"875c8c14-4fbd-4041-93d1-9fc99e815156\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-76brj" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.688262 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98hsq\" (UniqueName: \"kubernetes.io/projected/289a84f1-0b97-4282-8a9a-643bfb19b117-kube-api-access-98hsq\") pod \"nmstate-handler-zj289\" (UID: \"289a84f1-0b97-4282-8a9a-643bfb19b117\") " pod="openshift-nmstate/nmstate-handler-zj289" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.688290 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d45e5a89-dbd0-49f3-a285-a8d14e35d7de-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-j4g7f\" (UID: \"d45e5a89-dbd0-49f3-a285-a8d14e35d7de\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-j4g7f" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.688315 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk4zk\" (UniqueName: \"kubernetes.io/projected/d45e5a89-dbd0-49f3-a285-a8d14e35d7de-kube-api-access-gk4zk\") pod \"nmstate-console-plugin-5874bd7bc5-j4g7f\" (UID: \"d45e5a89-dbd0-49f3-a285-a8d14e35d7de\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-j4g7f" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.688410 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/289a84f1-0b97-4282-8a9a-643bfb19b117-ovs-socket\") pod \"nmstate-handler-zj289\" (UID: \"289a84f1-0b97-4282-8a9a-643bfb19b117\") " pod="openshift-nmstate/nmstate-handler-zj289" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.688729 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/289a84f1-0b97-4282-8a9a-643bfb19b117-dbus-socket\") pod \"nmstate-handler-zj289\" (UID: \"289a84f1-0b97-4282-8a9a-643bfb19b117\") " pod="openshift-nmstate/nmstate-handler-zj289" Nov 21 09:57:01 crc kubenswrapper[4972]: E1121 09:57:01.688826 4972 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 21 09:57:01 crc kubenswrapper[4972]: E1121 09:57:01.688902 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/875c8c14-4fbd-4041-93d1-9fc99e815156-tls-key-pair podName:875c8c14-4fbd-4041-93d1-9fc99e815156 nodeName:}" failed. No retries permitted until 2025-11-21 09:57:02.188882597 +0000 UTC m=+967.298025115 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/875c8c14-4fbd-4041-93d1-9fc99e815156-tls-key-pair") pod "nmstate-webhook-6b89b748d8-76brj" (UID: "875c8c14-4fbd-4041-93d1-9fc99e815156") : secret "openshift-nmstate-webhook" not found Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.689196 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/289a84f1-0b97-4282-8a9a-643bfb19b117-nmstate-lock\") pod \"nmstate-handler-zj289\" (UID: \"289a84f1-0b97-4282-8a9a-643bfb19b117\") " pod="openshift-nmstate/nmstate-handler-zj289" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.708504 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98hsq\" (UniqueName: \"kubernetes.io/projected/289a84f1-0b97-4282-8a9a-643bfb19b117-kube-api-access-98hsq\") pod \"nmstate-handler-zj289\" (UID: \"289a84f1-0b97-4282-8a9a-643bfb19b117\") " pod="openshift-nmstate/nmstate-handler-zj289" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.710341 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcnjd\" (UniqueName: \"kubernetes.io/projected/875c8c14-4fbd-4041-93d1-9fc99e815156-kube-api-access-fcnjd\") pod \"nmstate-webhook-6b89b748d8-76brj\" (UID: \"875c8c14-4fbd-4041-93d1-9fc99e815156\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-76brj" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.726347 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfdjd\" (UniqueName: \"kubernetes.io/projected/8db64828-f701-4320-9c0e-1d2897bdfa94-kube-api-access-rfdjd\") pod \"nmstate-metrics-5dcf9c57c5-8ddtn\" (UID: \"8db64828-f701-4320-9c0e-1d2897bdfa94\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8ddtn" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.789757 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d45e5a89-dbd0-49f3-a285-a8d14e35d7de-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-j4g7f\" (UID: \"d45e5a89-dbd0-49f3-a285-a8d14e35d7de\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-j4g7f" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.789872 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk4zk\" (UniqueName: \"kubernetes.io/projected/d45e5a89-dbd0-49f3-a285-a8d14e35d7de-kube-api-access-gk4zk\") pod \"nmstate-console-plugin-5874bd7bc5-j4g7f\" (UID: \"d45e5a89-dbd0-49f3-a285-a8d14e35d7de\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-j4g7f" Nov 21 09:57:01 crc kubenswrapper[4972]: E1121 09:57:01.789946 4972 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 21 09:57:01 crc kubenswrapper[4972]: E1121 09:57:01.790072 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d45e5a89-dbd0-49f3-a285-a8d14e35d7de-plugin-serving-cert podName:d45e5a89-dbd0-49f3-a285-a8d14e35d7de nodeName:}" failed. No retries permitted until 2025-11-21 09:57:02.290049898 +0000 UTC m=+967.399192406 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/d45e5a89-dbd0-49f3-a285-a8d14e35d7de-plugin-serving-cert") pod "nmstate-console-plugin-5874bd7bc5-j4g7f" (UID: "d45e5a89-dbd0-49f3-a285-a8d14e35d7de") : secret "plugin-serving-cert" not found Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.790427 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d45e5a89-dbd0-49f3-a285-a8d14e35d7de-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-j4g7f\" (UID: \"d45e5a89-dbd0-49f3-a285-a8d14e35d7de\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-j4g7f" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.791486 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d45e5a89-dbd0-49f3-a285-a8d14e35d7de-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-j4g7f\" (UID: \"d45e5a89-dbd0-49f3-a285-a8d14e35d7de\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-j4g7f" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.804238 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-b9b7d686d-cxr2h"] Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.805490 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.811937 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8ddtn" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.819981 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk4zk\" (UniqueName: \"kubernetes.io/projected/d45e5a89-dbd0-49f3-a285-a8d14e35d7de-kube-api-access-gk4zk\") pod \"nmstate-console-plugin-5874bd7bc5-j4g7f\" (UID: \"d45e5a89-dbd0-49f3-a285-a8d14e35d7de\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-j4g7f" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.825661 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b9b7d686d-cxr2h"] Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.836533 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-zj289" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.893370 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/458919cf-08c7-4462-b196-53d976979440-console-serving-cert\") pod \"console-b9b7d686d-cxr2h\" (UID: \"458919cf-08c7-4462-b196-53d976979440\") " pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.893413 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/458919cf-08c7-4462-b196-53d976979440-trusted-ca-bundle\") pod \"console-b9b7d686d-cxr2h\" (UID: \"458919cf-08c7-4462-b196-53d976979440\") " pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.893464 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/458919cf-08c7-4462-b196-53d976979440-oauth-serving-cert\") pod \"console-b9b7d686d-cxr2h\" (UID: \"458919cf-08c7-4462-b196-53d976979440\") " pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.893489 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/458919cf-08c7-4462-b196-53d976979440-console-oauth-config\") pod \"console-b9b7d686d-cxr2h\" (UID: \"458919cf-08c7-4462-b196-53d976979440\") " pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.893515 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/458919cf-08c7-4462-b196-53d976979440-console-config\") pod \"console-b9b7d686d-cxr2h\" (UID: \"458919cf-08c7-4462-b196-53d976979440\") " pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.893546 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-877pb\" (UniqueName: \"kubernetes.io/projected/458919cf-08c7-4462-b196-53d976979440-kube-api-access-877pb\") pod \"console-b9b7d686d-cxr2h\" (UID: \"458919cf-08c7-4462-b196-53d976979440\") " pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.893569 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/458919cf-08c7-4462-b196-53d976979440-service-ca\") pod \"console-b9b7d686d-cxr2h\" (UID: \"458919cf-08c7-4462-b196-53d976979440\") " pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.995115 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/458919cf-08c7-4462-b196-53d976979440-oauth-serving-cert\") pod \"console-b9b7d686d-cxr2h\" (UID: \"458919cf-08c7-4462-b196-53d976979440\") " pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.995390 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/458919cf-08c7-4462-b196-53d976979440-console-oauth-config\") pod \"console-b9b7d686d-cxr2h\" (UID: \"458919cf-08c7-4462-b196-53d976979440\") " pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.995409 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/458919cf-08c7-4462-b196-53d976979440-console-config\") pod \"console-b9b7d686d-cxr2h\" (UID: \"458919cf-08c7-4462-b196-53d976979440\") " pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.995440 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-877pb\" (UniqueName: \"kubernetes.io/projected/458919cf-08c7-4462-b196-53d976979440-kube-api-access-877pb\") pod \"console-b9b7d686d-cxr2h\" (UID: \"458919cf-08c7-4462-b196-53d976979440\") " pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.995457 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/458919cf-08c7-4462-b196-53d976979440-service-ca\") pod \"console-b9b7d686d-cxr2h\" (UID: \"458919cf-08c7-4462-b196-53d976979440\") " pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.995511 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/458919cf-08c7-4462-b196-53d976979440-console-serving-cert\") pod \"console-b9b7d686d-cxr2h\" (UID: \"458919cf-08c7-4462-b196-53d976979440\") " pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.995539 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/458919cf-08c7-4462-b196-53d976979440-trusted-ca-bundle\") pod \"console-b9b7d686d-cxr2h\" (UID: \"458919cf-08c7-4462-b196-53d976979440\") " pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.996553 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/458919cf-08c7-4462-b196-53d976979440-service-ca\") pod \"console-b9b7d686d-cxr2h\" (UID: \"458919cf-08c7-4462-b196-53d976979440\") " pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:01 crc kubenswrapper[4972]: I1121 09:57:01.996661 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/458919cf-08c7-4462-b196-53d976979440-trusted-ca-bundle\") pod \"console-b9b7d686d-cxr2h\" (UID: \"458919cf-08c7-4462-b196-53d976979440\") " pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:02 crc kubenswrapper[4972]: I1121 09:57:01.996984 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/458919cf-08c7-4462-b196-53d976979440-console-config\") pod \"console-b9b7d686d-cxr2h\" (UID: \"458919cf-08c7-4462-b196-53d976979440\") " pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:02 crc kubenswrapper[4972]: I1121 09:57:01.997988 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/458919cf-08c7-4462-b196-53d976979440-oauth-serving-cert\") pod \"console-b9b7d686d-cxr2h\" (UID: \"458919cf-08c7-4462-b196-53d976979440\") " pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:02 crc kubenswrapper[4972]: I1121 09:57:02.004805 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/458919cf-08c7-4462-b196-53d976979440-console-oauth-config\") pod \"console-b9b7d686d-cxr2h\" (UID: \"458919cf-08c7-4462-b196-53d976979440\") " pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:02 crc kubenswrapper[4972]: I1121 09:57:02.013743 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/458919cf-08c7-4462-b196-53d976979440-console-serving-cert\") pod \"console-b9b7d686d-cxr2h\" (UID: \"458919cf-08c7-4462-b196-53d976979440\") " pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:02 crc kubenswrapper[4972]: I1121 09:57:02.014698 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-877pb\" (UniqueName: \"kubernetes.io/projected/458919cf-08c7-4462-b196-53d976979440-kube-api-access-877pb\") pod \"console-b9b7d686d-cxr2h\" (UID: \"458919cf-08c7-4462-b196-53d976979440\") " pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:02 crc kubenswrapper[4972]: I1121 09:57:02.019062 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-8ddtn"] Nov 21 09:57:02 crc kubenswrapper[4972]: I1121 09:57:02.022271 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-zj289" event={"ID":"289a84f1-0b97-4282-8a9a-643bfb19b117","Type":"ContainerStarted","Data":"6c318a35aa843a8fe22c9b72f904db057dc6b68f94d606a23afc32c22cc1d67b"} Nov 21 09:57:02 crc kubenswrapper[4972]: I1121 09:57:02.183303 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:02 crc kubenswrapper[4972]: I1121 09:57:02.198113 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/875c8c14-4fbd-4041-93d1-9fc99e815156-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-76brj\" (UID: \"875c8c14-4fbd-4041-93d1-9fc99e815156\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-76brj" Nov 21 09:57:02 crc kubenswrapper[4972]: I1121 09:57:02.204015 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/875c8c14-4fbd-4041-93d1-9fc99e815156-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-76brj\" (UID: \"875c8c14-4fbd-4041-93d1-9fc99e815156\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-76brj" Nov 21 09:57:02 crc kubenswrapper[4972]: I1121 09:57:02.303975 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d45e5a89-dbd0-49f3-a285-a8d14e35d7de-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-j4g7f\" (UID: \"d45e5a89-dbd0-49f3-a285-a8d14e35d7de\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-j4g7f" Nov 21 09:57:02 crc kubenswrapper[4972]: I1121 09:57:02.307757 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d45e5a89-dbd0-49f3-a285-a8d14e35d7de-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-j4g7f\" (UID: \"d45e5a89-dbd0-49f3-a285-a8d14e35d7de\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-j4g7f" Nov 21 09:57:02 crc kubenswrapper[4972]: I1121 09:57:02.411014 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b9b7d686d-cxr2h"] Nov 21 09:57:02 crc kubenswrapper[4972]: W1121 09:57:02.421152 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod458919cf_08c7_4462_b196_53d976979440.slice/crio-10c159d8935058429276f3d0c3edc5ba587d35b760d13924cf21b152b84d1a15 WatchSource:0}: Error finding container 10c159d8935058429276f3d0c3edc5ba587d35b760d13924cf21b152b84d1a15: Status 404 returned error can't find the container with id 10c159d8935058429276f3d0c3edc5ba587d35b760d13924cf21b152b84d1a15 Nov 21 09:57:02 crc kubenswrapper[4972]: I1121 09:57:02.421638 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-76brj" Nov 21 09:57:02 crc kubenswrapper[4972]: I1121 09:57:02.531492 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-j4g7f" Nov 21 09:57:02 crc kubenswrapper[4972]: I1121 09:57:02.597790 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-76brj"] Nov 21 09:57:02 crc kubenswrapper[4972]: I1121 09:57:02.946446 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-j4g7f"] Nov 21 09:57:02 crc kubenswrapper[4972]: W1121 09:57:02.961033 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd45e5a89_dbd0_49f3_a285_a8d14e35d7de.slice/crio-c5765802137156ccf409474ed1ba02134e8a698bffba59fc50629f88d46e0902 WatchSource:0}: Error finding container c5765802137156ccf409474ed1ba02134e8a698bffba59fc50629f88d46e0902: Status 404 returned error can't find the container with id c5765802137156ccf409474ed1ba02134e8a698bffba59fc50629f88d46e0902 Nov 21 09:57:03 crc kubenswrapper[4972]: I1121 09:57:03.036699 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8ddtn" event={"ID":"8db64828-f701-4320-9c0e-1d2897bdfa94","Type":"ContainerStarted","Data":"8ceb5939d146981d4f41da3afea39365830c5d00eaf3cec2d6a3e1a398b50464"} Nov 21 09:57:03 crc kubenswrapper[4972]: I1121 09:57:03.038284 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b9b7d686d-cxr2h" event={"ID":"458919cf-08c7-4462-b196-53d976979440","Type":"ContainerStarted","Data":"afd3b6f7ef8b9330906c5c0269506873876fa62f3931ea52fa572b4fb7ba8b25"} Nov 21 09:57:03 crc kubenswrapper[4972]: I1121 09:57:03.038325 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b9b7d686d-cxr2h" event={"ID":"458919cf-08c7-4462-b196-53d976979440","Type":"ContainerStarted","Data":"10c159d8935058429276f3d0c3edc5ba587d35b760d13924cf21b152b84d1a15"} Nov 21 09:57:03 crc kubenswrapper[4972]: I1121 09:57:03.040995 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-76brj" event={"ID":"875c8c14-4fbd-4041-93d1-9fc99e815156","Type":"ContainerStarted","Data":"9abbea5cf9645be60d958bc7080168bda6770e7e3310c1ddc964d94ca5b194b6"} Nov 21 09:57:03 crc kubenswrapper[4972]: I1121 09:57:03.043056 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-j4g7f" event={"ID":"d45e5a89-dbd0-49f3-a285-a8d14e35d7de","Type":"ContainerStarted","Data":"c5765802137156ccf409474ed1ba02134e8a698bffba59fc50629f88d46e0902"} Nov 21 09:57:03 crc kubenswrapper[4972]: I1121 09:57:03.063750 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-b9b7d686d-cxr2h" podStartSLOduration=2.063732409 podStartE2EDuration="2.063732409s" podCreationTimestamp="2025-11-21 09:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:57:03.060989825 +0000 UTC m=+968.170132353" watchObservedRunningTime="2025-11-21 09:57:03.063732409 +0000 UTC m=+968.172874917" Nov 21 09:57:05 crc kubenswrapper[4972]: I1121 09:57:05.056130 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-zj289" event={"ID":"289a84f1-0b97-4282-8a9a-643bfb19b117","Type":"ContainerStarted","Data":"39efe66b4a3fcc112bb128625deadefa9def7a19022e15a722d5bad97d00f9e0"} Nov 21 09:57:05 crc kubenswrapper[4972]: I1121 
09:57:05.056802 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-zj289" Nov 21 09:57:05 crc kubenswrapper[4972]: I1121 09:57:05.059437 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8ddtn" event={"ID":"8db64828-f701-4320-9c0e-1d2897bdfa94","Type":"ContainerStarted","Data":"b63d958c5bcf880c1db01a1ba69bf029f26e4acb4e8baed0dab3a29595c9b06d"} Nov 21 09:57:05 crc kubenswrapper[4972]: I1121 09:57:05.061543 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-76brj" event={"ID":"875c8c14-4fbd-4041-93d1-9fc99e815156","Type":"ContainerStarted","Data":"694d127804ddfd454c175f2f67e3cc4fc66558ab62f5dece24f25329ab875d4f"} Nov 21 09:57:05 crc kubenswrapper[4972]: I1121 09:57:05.061775 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-76brj" Nov 21 09:57:05 crc kubenswrapper[4972]: I1121 09:57:05.072959 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-zj289" podStartSLOduration=1.495486834 podStartE2EDuration="4.072942649s" podCreationTimestamp="2025-11-21 09:57:01 +0000 UTC" firstStartedPulling="2025-11-21 09:57:01.87551313 +0000 UTC m=+966.984655628" lastFinishedPulling="2025-11-21 09:57:04.452968945 +0000 UTC m=+969.562111443" observedRunningTime="2025-11-21 09:57:05.071902202 +0000 UTC m=+970.181044710" watchObservedRunningTime="2025-11-21 09:57:05.072942649 +0000 UTC m=+970.182085147" Nov 21 09:57:05 crc kubenswrapper[4972]: I1121 09:57:05.088258 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-76brj" podStartSLOduration=2.217088223 podStartE2EDuration="4.088238298s" podCreationTimestamp="2025-11-21 09:57:01 +0000 UTC" firstStartedPulling="2025-11-21 09:57:02.619942858 +0000 UTC m=+967.729085356" lastFinishedPulling="2025-11-21 09:57:04.491092933 +0000 UTC m=+969.600235431" observedRunningTime="2025-11-21 09:57:05.085361351 +0000 UTC m=+970.194503869" watchObservedRunningTime="2025-11-21 09:57:05.088238298 +0000 UTC m=+970.197380796" Nov 21 09:57:06 crc kubenswrapper[4972]: I1121 09:57:06.072380 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-j4g7f" event={"ID":"d45e5a89-dbd0-49f3-a285-a8d14e35d7de","Type":"ContainerStarted","Data":"7ecb7bd2c3822a15e9954374d0b5424f53873fd29b15f268d75bfd7ca5dd5daf"} Nov 21 09:57:06 crc kubenswrapper[4972]: I1121 09:57:06.086681 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-j4g7f" podStartSLOduration=2.482824249 podStartE2EDuration="5.086662177s" podCreationTimestamp="2025-11-21 09:57:01 +0000 UTC" firstStartedPulling="2025-11-21 09:57:02.964105088 +0000 UTC m=+968.073247596" lastFinishedPulling="2025-11-21 09:57:05.567943016 +0000 UTC m=+970.677085524" observedRunningTime="2025-11-21 09:57:06.085365933 +0000 UTC m=+971.194508451" watchObservedRunningTime="2025-11-21 09:57:06.086662177 +0000 UTC m=+971.195804665" Nov 21 09:57:07 crc kubenswrapper[4972]: I1121 09:57:07.082419 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8ddtn" event={"ID":"8db64828-f701-4320-9c0e-1d2897bdfa94","Type":"ContainerStarted","Data":"03f8b9ec49ed6a87069757779069e51671deea2dd88ac58caffd44e193b4f001"} Nov 21 09:57:07 crc 
kubenswrapper[4972]: I1121 09:57:07.118462 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8ddtn" podStartSLOduration=1.389387586 podStartE2EDuration="6.118428662s" podCreationTimestamp="2025-11-21 09:57:01 +0000 UTC" firstStartedPulling="2025-11-21 09:57:02.023309417 +0000 UTC m=+967.132451915" lastFinishedPulling="2025-11-21 09:57:06.752350493 +0000 UTC m=+971.861492991" observedRunningTime="2025-11-21 09:57:07.114408736 +0000 UTC m=+972.223551234" watchObservedRunningTime="2025-11-21 09:57:07.118428662 +0000 UTC m=+972.227571200" Nov 21 09:57:11 crc kubenswrapper[4972]: I1121 09:57:11.872554 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-zj289" Nov 21 09:57:12 crc kubenswrapper[4972]: I1121 09:57:12.183449 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:12 crc kubenswrapper[4972]: I1121 09:57:12.183557 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:12 crc kubenswrapper[4972]: I1121 09:57:12.189486 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:13 crc kubenswrapper[4972]: I1121 09:57:13.139305 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-b9b7d686d-cxr2h" Nov 21 09:57:13 crc kubenswrapper[4972]: I1121 09:57:13.196967 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-j7xxl"] Nov 21 09:57:22 crc kubenswrapper[4972]: I1121 09:57:22.427747 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-76brj" Nov 21 09:57:26 crc kubenswrapper[4972]: I1121 09:57:26.179047 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 09:57:26 crc kubenswrapper[4972]: I1121 09:57:26.179372 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 09:57:34 crc kubenswrapper[4972]: I1121 09:57:34.648848 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs"] Nov 21 09:57:34 crc kubenswrapper[4972]: I1121 09:57:34.651307 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs" Nov 21 09:57:34 crc kubenswrapper[4972]: I1121 09:57:34.654172 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 21 09:57:34 crc kubenswrapper[4972]: I1121 09:57:34.670258 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs"] Nov 21 09:57:34 crc kubenswrapper[4972]: I1121 09:57:34.719924 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b5ct\" (UniqueName: \"kubernetes.io/projected/d16d14b4-fe18-4865-a9db-e203aeb6ed09-kube-api-access-8b5ct\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs\" (UID: \"d16d14b4-fe18-4865-a9db-e203aeb6ed09\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs" Nov 21 09:57:34 crc kubenswrapper[4972]: I1121 09:57:34.720018 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d16d14b4-fe18-4865-a9db-e203aeb6ed09-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs\" (UID: \"d16d14b4-fe18-4865-a9db-e203aeb6ed09\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs" Nov 21 09:57:34 crc kubenswrapper[4972]: I1121 09:57:34.720087 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d16d14b4-fe18-4865-a9db-e203aeb6ed09-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs\" (UID: \"d16d14b4-fe18-4865-a9db-e203aeb6ed09\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs" Nov 21 09:57:34 crc kubenswrapper[4972]: I1121 09:57:34.821714 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d16d14b4-fe18-4865-a9db-e203aeb6ed09-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs\" (UID: \"d16d14b4-fe18-4865-a9db-e203aeb6ed09\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs" Nov 21 09:57:34 crc kubenswrapper[4972]: I1121 09:57:34.821857 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d16d14b4-fe18-4865-a9db-e203aeb6ed09-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs\" (UID: \"d16d14b4-fe18-4865-a9db-e203aeb6ed09\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs" Nov 21 09:57:34 crc kubenswrapper[4972]: I1121 09:57:34.821980 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b5ct\" (UniqueName: \"kubernetes.io/projected/d16d14b4-fe18-4865-a9db-e203aeb6ed09-kube-api-access-8b5ct\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs\" (UID: \"d16d14b4-fe18-4865-a9db-e203aeb6ed09\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs" Nov 21 09:57:34 crc kubenswrapper[4972]: I1121 09:57:34.822647 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/d16d14b4-fe18-4865-a9db-e203aeb6ed09-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs\" (UID: \"d16d14b4-fe18-4865-a9db-e203aeb6ed09\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs" Nov 21 09:57:34 crc kubenswrapper[4972]: I1121 09:57:34.822699 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d16d14b4-fe18-4865-a9db-e203aeb6ed09-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs\" (UID: \"d16d14b4-fe18-4865-a9db-e203aeb6ed09\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs" Nov 21 09:57:34 crc kubenswrapper[4972]: I1121 09:57:34.849142 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b5ct\" (UniqueName: \"kubernetes.io/projected/d16d14b4-fe18-4865-a9db-e203aeb6ed09-kube-api-access-8b5ct\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs\" (UID: \"d16d14b4-fe18-4865-a9db-e203aeb6ed09\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs" Nov 21 09:57:34 crc kubenswrapper[4972]: I1121 09:57:34.967951 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs" Nov 21 09:57:35 crc kubenswrapper[4972]: I1121 09:57:35.167715 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs"] Nov 21 09:57:35 crc kubenswrapper[4972]: I1121 09:57:35.272964 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs" event={"ID":"d16d14b4-fe18-4865-a9db-e203aeb6ed09","Type":"ContainerStarted","Data":"3a9dc52352141ca041489b4d8ad013572d8e3ca440596f63ce768a4801ea22b2"} Nov 21 09:57:36 crc kubenswrapper[4972]: I1121 09:57:36.283873 4972 generic.go:334] "Generic (PLEG): container finished" podID="d16d14b4-fe18-4865-a9db-e203aeb6ed09" containerID="f2778b1ba84b55acdb117217be6128dfce68645922b6122b4bf03afe3cf268de" exitCode=0 Nov 21 09:57:36 crc kubenswrapper[4972]: I1121 09:57:36.284048 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs" event={"ID":"d16d14b4-fe18-4865-a9db-e203aeb6ed09","Type":"ContainerDied","Data":"f2778b1ba84b55acdb117217be6128dfce68645922b6122b4bf03afe3cf268de"} Nov 21 09:57:38 crc kubenswrapper[4972]: I1121 09:57:38.300563 4972 generic.go:334] "Generic (PLEG): container finished" podID="d16d14b4-fe18-4865-a9db-e203aeb6ed09" containerID="ec7074d7c88a3a2988a7612137dba6bd499889a83ef68aca2772801d630d1291" exitCode=0 Nov 21 09:57:38 crc kubenswrapper[4972]: I1121 09:57:38.300612 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs" event={"ID":"d16d14b4-fe18-4865-a9db-e203aeb6ed09","Type":"ContainerDied","Data":"ec7074d7c88a3a2988a7612137dba6bd499889a83ef68aca2772801d630d1291"} Nov 21 09:57:38 crc kubenswrapper[4972]: I1121 09:57:38.306991 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-j7xxl" podUID="7b0e4d64-f901-4a4e-9644-408eb534401e" containerName="console" 
containerID="cri-o://98c7e87ada638d1f8994428ed2de5ff17f70f867ea3a5f22448fbeb0a8f69a38" gracePeriod=15 Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.308781 4972 generic.go:334] "Generic (PLEG): container finished" podID="d16d14b4-fe18-4865-a9db-e203aeb6ed09" containerID="37367efe09e828b15dc811674e69d093a3013acc991abc7f7fefead5694882c2" exitCode=0 Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.308886 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs" event={"ID":"d16d14b4-fe18-4865-a9db-e203aeb6ed09","Type":"ContainerDied","Data":"37367efe09e828b15dc811674e69d093a3013acc991abc7f7fefead5694882c2"} Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.311791 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-j7xxl_7b0e4d64-f901-4a4e-9644-408eb534401e/console/0.log" Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.311895 4972 generic.go:334] "Generic (PLEG): container finished" podID="7b0e4d64-f901-4a4e-9644-408eb534401e" containerID="98c7e87ada638d1f8994428ed2de5ff17f70f867ea3a5f22448fbeb0a8f69a38" exitCode=2 Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.311931 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-j7xxl" event={"ID":"7b0e4d64-f901-4a4e-9644-408eb534401e","Type":"ContainerDied","Data":"98c7e87ada638d1f8994428ed2de5ff17f70f867ea3a5f22448fbeb0a8f69a38"} Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.422043 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-j7xxl_7b0e4d64-f901-4a4e-9644-408eb534401e/console/0.log" Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.422119 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.490517 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-oauth-serving-cert\") pod \"7b0e4d64-f901-4a4e-9644-408eb534401e\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.490589 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b0e4d64-f901-4a4e-9644-408eb534401e-console-serving-cert\") pod \"7b0e4d64-f901-4a4e-9644-408eb534401e\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.490678 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b0e4d64-f901-4a4e-9644-408eb534401e-console-oauth-config\") pod \"7b0e4d64-f901-4a4e-9644-408eb534401e\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.490717 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-console-config\") pod \"7b0e4d64-f901-4a4e-9644-408eb534401e\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.490758 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bph4s\" (UniqueName: \"kubernetes.io/projected/7b0e4d64-f901-4a4e-9644-408eb534401e-kube-api-access-bph4s\") pod \"7b0e4d64-f901-4a4e-9644-408eb534401e\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.490801 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-service-ca\") pod \"7b0e4d64-f901-4a4e-9644-408eb534401e\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.490871 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-trusted-ca-bundle\") pod \"7b0e4d64-f901-4a4e-9644-408eb534401e\" (UID: \"7b0e4d64-f901-4a4e-9644-408eb534401e\") " Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.491086 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "7b0e4d64-f901-4a4e-9644-408eb534401e" (UID: "7b0e4d64-f901-4a4e-9644-408eb534401e"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.491166 4972 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.491909 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7b0e4d64-f901-4a4e-9644-408eb534401e" (UID: "7b0e4d64-f901-4a4e-9644-408eb534401e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.491942 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-console-config" (OuterVolumeSpecName: "console-config") pod "7b0e4d64-f901-4a4e-9644-408eb534401e" (UID: "7b0e4d64-f901-4a4e-9644-408eb534401e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.492470 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-service-ca" (OuterVolumeSpecName: "service-ca") pod "7b0e4d64-f901-4a4e-9644-408eb534401e" (UID: "7b0e4d64-f901-4a4e-9644-408eb534401e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.497605 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b0e4d64-f901-4a4e-9644-408eb534401e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "7b0e4d64-f901-4a4e-9644-408eb534401e" (UID: "7b0e4d64-f901-4a4e-9644-408eb534401e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.497949 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b0e4d64-f901-4a4e-9644-408eb534401e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "7b0e4d64-f901-4a4e-9644-408eb534401e" (UID: "7b0e4d64-f901-4a4e-9644-408eb534401e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.497944 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b0e4d64-f901-4a4e-9644-408eb534401e-kube-api-access-bph4s" (OuterVolumeSpecName: "kube-api-access-bph4s") pod "7b0e4d64-f901-4a4e-9644-408eb534401e" (UID: "7b0e4d64-f901-4a4e-9644-408eb534401e"). InnerVolumeSpecName "kube-api-access-bph4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.592229 4972 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-console-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.592296 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bph4s\" (UniqueName: \"kubernetes.io/projected/7b0e4d64-f901-4a4e-9644-408eb534401e-kube-api-access-bph4s\") on node \"crc\" DevicePath \"\"" Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.592310 4972 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.592319 4972 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b0e4d64-f901-4a4e-9644-408eb534401e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.592330 4972 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b0e4d64-f901-4a4e-9644-408eb534401e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 09:57:39 crc kubenswrapper[4972]: I1121 09:57:39.592340 4972 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b0e4d64-f901-4a4e-9644-408eb534401e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 21 09:57:40 crc kubenswrapper[4972]: I1121 09:57:40.321735 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-j7xxl_7b0e4d64-f901-4a4e-9644-408eb534401e/console/0.log" Nov 21 09:57:40 crc kubenswrapper[4972]: I1121 09:57:40.323488 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-j7xxl" event={"ID":"7b0e4d64-f901-4a4e-9644-408eb534401e","Type":"ContainerDied","Data":"124f3d9bdcc65ef31e252a5f8e57248caf6e499772ea0946beb4d7fd4ffa3c64"} Nov 21 09:57:40 crc kubenswrapper[4972]: I1121 09:57:40.323535 4972 scope.go:117] "RemoveContainer" containerID="98c7e87ada638d1f8994428ed2de5ff17f70f867ea3a5f22448fbeb0a8f69a38" Nov 21 09:57:40 crc kubenswrapper[4972]: I1121 09:57:40.323526 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-j7xxl" Nov 21 09:57:40 crc kubenswrapper[4972]: I1121 09:57:40.343670 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-j7xxl"] Nov 21 09:57:40 crc kubenswrapper[4972]: I1121 09:57:40.346810 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-j7xxl"] Nov 21 09:57:40 crc kubenswrapper[4972]: I1121 09:57:40.560963 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs" Nov 21 09:57:40 crc kubenswrapper[4972]: I1121 09:57:40.607909 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b5ct\" (UniqueName: \"kubernetes.io/projected/d16d14b4-fe18-4865-a9db-e203aeb6ed09-kube-api-access-8b5ct\") pod \"d16d14b4-fe18-4865-a9db-e203aeb6ed09\" (UID: \"d16d14b4-fe18-4865-a9db-e203aeb6ed09\") " Nov 21 09:57:40 crc kubenswrapper[4972]: I1121 09:57:40.608032 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d16d14b4-fe18-4865-a9db-e203aeb6ed09-bundle\") pod \"d16d14b4-fe18-4865-a9db-e203aeb6ed09\" (UID: \"d16d14b4-fe18-4865-a9db-e203aeb6ed09\") " Nov 21 09:57:40 crc kubenswrapper[4972]: I1121 09:57:40.608095 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d16d14b4-fe18-4865-a9db-e203aeb6ed09-util\") pod \"d16d14b4-fe18-4865-a9db-e203aeb6ed09\" (UID: \"d16d14b4-fe18-4865-a9db-e203aeb6ed09\") " Nov 21 09:57:40 crc kubenswrapper[4972]: I1121 09:57:40.611162 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d16d14b4-fe18-4865-a9db-e203aeb6ed09-bundle" (OuterVolumeSpecName: "bundle") pod "d16d14b4-fe18-4865-a9db-e203aeb6ed09" (UID: "d16d14b4-fe18-4865-a9db-e203aeb6ed09"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:57:40 crc kubenswrapper[4972]: I1121 09:57:40.613426 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d16d14b4-fe18-4865-a9db-e203aeb6ed09-kube-api-access-8b5ct" (OuterVolumeSpecName: "kube-api-access-8b5ct") pod "d16d14b4-fe18-4865-a9db-e203aeb6ed09" (UID: "d16d14b4-fe18-4865-a9db-e203aeb6ed09"). InnerVolumeSpecName "kube-api-access-8b5ct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:57:40 crc kubenswrapper[4972]: I1121 09:57:40.708951 4972 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d16d14b4-fe18-4865-a9db-e203aeb6ed09-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 09:57:40 crc kubenswrapper[4972]: I1121 09:57:40.708977 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b5ct\" (UniqueName: \"kubernetes.io/projected/d16d14b4-fe18-4865-a9db-e203aeb6ed09-kube-api-access-8b5ct\") on node \"crc\" DevicePath \"\"" Nov 21 09:57:40 crc kubenswrapper[4972]: I1121 09:57:40.730654 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d16d14b4-fe18-4865-a9db-e203aeb6ed09-util" (OuterVolumeSpecName: "util") pod "d16d14b4-fe18-4865-a9db-e203aeb6ed09" (UID: "d16d14b4-fe18-4865-a9db-e203aeb6ed09"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:57:40 crc kubenswrapper[4972]: I1121 09:57:40.810746 4972 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d16d14b4-fe18-4865-a9db-e203aeb6ed09-util\") on node \"crc\" DevicePath \"\"" Nov 21 09:57:41 crc kubenswrapper[4972]: I1121 09:57:41.333845 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs" Nov 21 09:57:41 crc kubenswrapper[4972]: I1121 09:57:41.333815 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs" event={"ID":"d16d14b4-fe18-4865-a9db-e203aeb6ed09","Type":"ContainerDied","Data":"3a9dc52352141ca041489b4d8ad013572d8e3ca440596f63ce768a4801ea22b2"} Nov 21 09:57:41 crc kubenswrapper[4972]: I1121 09:57:41.333938 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a9dc52352141ca041489b4d8ad013572d8e3ca440596f63ce768a4801ea22b2" Nov 21 09:57:41 crc kubenswrapper[4972]: I1121 09:57:41.771437 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b0e4d64-f901-4a4e-9644-408eb534401e" path="/var/lib/kubelet/pods/7b0e4d64-f901-4a4e-9644-408eb534401e/volumes" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.630949 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-55cdd8d9bf-nk57h"] Nov 21 09:57:49 crc kubenswrapper[4972]: E1121 09:57:49.631710 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d16d14b4-fe18-4865-a9db-e203aeb6ed09" containerName="util" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.631725 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="d16d14b4-fe18-4865-a9db-e203aeb6ed09" containerName="util" Nov 21 09:57:49 crc kubenswrapper[4972]: E1121 09:57:49.631740 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b0e4d64-f901-4a4e-9644-408eb534401e" containerName="console" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.631747 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b0e4d64-f901-4a4e-9644-408eb534401e" containerName="console" Nov 21 09:57:49 crc kubenswrapper[4972]: E1121 09:57:49.631758 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d16d14b4-fe18-4865-a9db-e203aeb6ed09" containerName="extract" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.631765 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="d16d14b4-fe18-4865-a9db-e203aeb6ed09" containerName="extract" Nov 21 09:57:49 crc kubenswrapper[4972]: E1121 09:57:49.631784 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d16d14b4-fe18-4865-a9db-e203aeb6ed09" containerName="pull" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.631791 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="d16d14b4-fe18-4865-a9db-e203aeb6ed09" containerName="pull" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.631923 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b0e4d64-f901-4a4e-9644-408eb534401e" containerName="console" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.631934 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="d16d14b4-fe18-4865-a9db-e203aeb6ed09" containerName="extract" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.632404 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-55cdd8d9bf-nk57h" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.635510 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.635880 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.636296 4972 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-dphhb" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.636536 4972 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.636810 4972 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.689711 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-55cdd8d9bf-nk57h"] Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.755762 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d67eba6d-5d34-4253-8718-a833c7b43c41-apiservice-cert\") pod \"metallb-operator-controller-manager-55cdd8d9bf-nk57h\" (UID: \"d67eba6d-5d34-4253-8718-a833c7b43c41\") " pod="metallb-system/metallb-operator-controller-manager-55cdd8d9bf-nk57h" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.755945 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fgz4\" (UniqueName: \"kubernetes.io/projected/d67eba6d-5d34-4253-8718-a833c7b43c41-kube-api-access-7fgz4\") pod \"metallb-operator-controller-manager-55cdd8d9bf-nk57h\" (UID: \"d67eba6d-5d34-4253-8718-a833c7b43c41\") " pod="metallb-system/metallb-operator-controller-manager-55cdd8d9bf-nk57h" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.756005 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d67eba6d-5d34-4253-8718-a833c7b43c41-webhook-cert\") pod \"metallb-operator-controller-manager-55cdd8d9bf-nk57h\" (UID: \"d67eba6d-5d34-4253-8718-a833c7b43c41\") " pod="metallb-system/metallb-operator-controller-manager-55cdd8d9bf-nk57h" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.857430 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d67eba6d-5d34-4253-8718-a833c7b43c41-apiservice-cert\") pod \"metallb-operator-controller-manager-55cdd8d9bf-nk57h\" (UID: \"d67eba6d-5d34-4253-8718-a833c7b43c41\") " pod="metallb-system/metallb-operator-controller-manager-55cdd8d9bf-nk57h" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.857497 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fgz4\" (UniqueName: \"kubernetes.io/projected/d67eba6d-5d34-4253-8718-a833c7b43c41-kube-api-access-7fgz4\") pod \"metallb-operator-controller-manager-55cdd8d9bf-nk57h\" (UID: \"d67eba6d-5d34-4253-8718-a833c7b43c41\") " pod="metallb-system/metallb-operator-controller-manager-55cdd8d9bf-nk57h" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.857563 
4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d67eba6d-5d34-4253-8718-a833c7b43c41-webhook-cert\") pod \"metallb-operator-controller-manager-55cdd8d9bf-nk57h\" (UID: \"d67eba6d-5d34-4253-8718-a833c7b43c41\") " pod="metallb-system/metallb-operator-controller-manager-55cdd8d9bf-nk57h" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.863701 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d67eba6d-5d34-4253-8718-a833c7b43c41-apiservice-cert\") pod \"metallb-operator-controller-manager-55cdd8d9bf-nk57h\" (UID: \"d67eba6d-5d34-4253-8718-a833c7b43c41\") " pod="metallb-system/metallb-operator-controller-manager-55cdd8d9bf-nk57h" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.869472 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d67eba6d-5d34-4253-8718-a833c7b43c41-webhook-cert\") pod \"metallb-operator-controller-manager-55cdd8d9bf-nk57h\" (UID: \"d67eba6d-5d34-4253-8718-a833c7b43c41\") " pod="metallb-system/metallb-operator-controller-manager-55cdd8d9bf-nk57h" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.879494 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fgz4\" (UniqueName: \"kubernetes.io/projected/d67eba6d-5d34-4253-8718-a833c7b43c41-kube-api-access-7fgz4\") pod \"metallb-operator-controller-manager-55cdd8d9bf-nk57h\" (UID: \"d67eba6d-5d34-4253-8718-a833c7b43c41\") " pod="metallb-system/metallb-operator-controller-manager-55cdd8d9bf-nk57h" Nov 21 09:57:49 crc kubenswrapper[4972]: I1121 09:57:49.949191 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-55cdd8d9bf-nk57h" Nov 21 09:57:50 crc kubenswrapper[4972]: I1121 09:57:50.065554 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-dcc4dbb97-cc8w9"] Nov 21 09:57:50 crc kubenswrapper[4972]: I1121 09:57:50.066547 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-dcc4dbb97-cc8w9" Nov 21 09:57:50 crc kubenswrapper[4972]: I1121 09:57:50.073421 4972 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 21 09:57:50 crc kubenswrapper[4972]: I1121 09:57:50.074199 4972 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 21 09:57:50 crc kubenswrapper[4972]: I1121 09:57:50.074333 4972 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-glhnj" Nov 21 09:57:50 crc kubenswrapper[4972]: I1121 09:57:50.076056 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-dcc4dbb97-cc8w9"] Nov 21 09:57:50 crc kubenswrapper[4972]: I1121 09:57:50.163936 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b654f52d-03b3-4047-bc73-89cbcf0a1d00-apiservice-cert\") pod \"metallb-operator-webhook-server-dcc4dbb97-cc8w9\" (UID: \"b654f52d-03b3-4047-bc73-89cbcf0a1d00\") " pod="metallb-system/metallb-operator-webhook-server-dcc4dbb97-cc8w9" Nov 21 09:57:50 crc kubenswrapper[4972]: I1121 09:57:50.164012 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lntmc\" (UniqueName: \"kubernetes.io/projected/b654f52d-03b3-4047-bc73-89cbcf0a1d00-kube-api-access-lntmc\") pod \"metallb-operator-webhook-server-dcc4dbb97-cc8w9\" (UID: \"b654f52d-03b3-4047-bc73-89cbcf0a1d00\") " pod="metallb-system/metallb-operator-webhook-server-dcc4dbb97-cc8w9" Nov 21 09:57:50 crc kubenswrapper[4972]: I1121 09:57:50.164068 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b654f52d-03b3-4047-bc73-89cbcf0a1d00-webhook-cert\") pod \"metallb-operator-webhook-server-dcc4dbb97-cc8w9\" (UID: \"b654f52d-03b3-4047-bc73-89cbcf0a1d00\") " pod="metallb-system/metallb-operator-webhook-server-dcc4dbb97-cc8w9" Nov 21 09:57:50 crc kubenswrapper[4972]: I1121 09:57:50.265382 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b654f52d-03b3-4047-bc73-89cbcf0a1d00-apiservice-cert\") pod \"metallb-operator-webhook-server-dcc4dbb97-cc8w9\" (UID: \"b654f52d-03b3-4047-bc73-89cbcf0a1d00\") " pod="metallb-system/metallb-operator-webhook-server-dcc4dbb97-cc8w9" Nov 21 09:57:50 crc kubenswrapper[4972]: I1121 09:57:50.265473 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lntmc\" (UniqueName: \"kubernetes.io/projected/b654f52d-03b3-4047-bc73-89cbcf0a1d00-kube-api-access-lntmc\") pod \"metallb-operator-webhook-server-dcc4dbb97-cc8w9\" (UID: \"b654f52d-03b3-4047-bc73-89cbcf0a1d00\") " pod="metallb-system/metallb-operator-webhook-server-dcc4dbb97-cc8w9" Nov 21 09:57:50 crc kubenswrapper[4972]: I1121 09:57:50.265555 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b654f52d-03b3-4047-bc73-89cbcf0a1d00-webhook-cert\") pod \"metallb-operator-webhook-server-dcc4dbb97-cc8w9\" (UID: \"b654f52d-03b3-4047-bc73-89cbcf0a1d00\") " pod="metallb-system/metallb-operator-webhook-server-dcc4dbb97-cc8w9" Nov 21 09:57:50 crc kubenswrapper[4972]: I1121 09:57:50.271975 4972 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b654f52d-03b3-4047-bc73-89cbcf0a1d00-apiservice-cert\") pod \"metallb-operator-webhook-server-dcc4dbb97-cc8w9\" (UID: \"b654f52d-03b3-4047-bc73-89cbcf0a1d00\") " pod="metallb-system/metallb-operator-webhook-server-dcc4dbb97-cc8w9" Nov 21 09:57:50 crc kubenswrapper[4972]: I1121 09:57:50.290607 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b654f52d-03b3-4047-bc73-89cbcf0a1d00-webhook-cert\") pod \"metallb-operator-webhook-server-dcc4dbb97-cc8w9\" (UID: \"b654f52d-03b3-4047-bc73-89cbcf0a1d00\") " pod="metallb-system/metallb-operator-webhook-server-dcc4dbb97-cc8w9" Nov 21 09:57:50 crc kubenswrapper[4972]: I1121 09:57:50.293260 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lntmc\" (UniqueName: \"kubernetes.io/projected/b654f52d-03b3-4047-bc73-89cbcf0a1d00-kube-api-access-lntmc\") pod \"metallb-operator-webhook-server-dcc4dbb97-cc8w9\" (UID: \"b654f52d-03b3-4047-bc73-89cbcf0a1d00\") " pod="metallb-system/metallb-operator-webhook-server-dcc4dbb97-cc8w9" Nov 21 09:57:50 crc kubenswrapper[4972]: I1121 09:57:50.410290 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-dcc4dbb97-cc8w9" Nov 21 09:57:50 crc kubenswrapper[4972]: I1121 09:57:50.431186 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-55cdd8d9bf-nk57h"] Nov 21 09:57:50 crc kubenswrapper[4972]: I1121 09:57:50.615194 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-dcc4dbb97-cc8w9"] Nov 21 09:57:50 crc kubenswrapper[4972]: W1121 09:57:50.621770 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb654f52d_03b3_4047_bc73_89cbcf0a1d00.slice/crio-32248b98292dd1bcd3d4ea663fda19d301ea862ba831950eb61ae3c691bd2481 WatchSource:0}: Error finding container 32248b98292dd1bcd3d4ea663fda19d301ea862ba831950eb61ae3c691bd2481: Status 404 returned error can't find the container with id 32248b98292dd1bcd3d4ea663fda19d301ea862ba831950eb61ae3c691bd2481 Nov 21 09:57:51 crc kubenswrapper[4972]: I1121 09:57:51.391196 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-55cdd8d9bf-nk57h" event={"ID":"d67eba6d-5d34-4253-8718-a833c7b43c41","Type":"ContainerStarted","Data":"abe44ee480b511cfc1202b0246c9930eb27cff4f28169ac8b3595bab4825cd05"} Nov 21 09:57:51 crc kubenswrapper[4972]: I1121 09:57:51.391976 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-dcc4dbb97-cc8w9" event={"ID":"b654f52d-03b3-4047-bc73-89cbcf0a1d00","Type":"ContainerStarted","Data":"32248b98292dd1bcd3d4ea663fda19d301ea862ba831950eb61ae3c691bd2481"} Nov 21 09:57:55 crc kubenswrapper[4972]: I1121 09:57:55.421804 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-55cdd8d9bf-nk57h" event={"ID":"d67eba6d-5d34-4253-8718-a833c7b43c41","Type":"ContainerStarted","Data":"b5a5a00e6cc956d688e966ffb7cf3a1e4a8670cb5cfa9847c32a52e89bb7e886"} Nov 21 09:57:55 crc kubenswrapper[4972]: I1121 09:57:55.424235 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-dcc4dbb97-cc8w9" 
event={"ID":"b654f52d-03b3-4047-bc73-89cbcf0a1d00","Type":"ContainerStarted","Data":"95a6f97af912790d8e2191091fe1f7440f7a797c00ff2402e01703af2cd09189"} Nov 21 09:57:55 crc kubenswrapper[4972]: I1121 09:57:55.424907 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-dcc4dbb97-cc8w9" Nov 21 09:57:55 crc kubenswrapper[4972]: I1121 09:57:55.445166 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-55cdd8d9bf-nk57h" podStartSLOduration=2.006257828 podStartE2EDuration="6.445149089s" podCreationTimestamp="2025-11-21 09:57:49 +0000 UTC" firstStartedPulling="2025-11-21 09:57:50.440123161 +0000 UTC m=+1015.549265679" lastFinishedPulling="2025-11-21 09:57:54.879014442 +0000 UTC m=+1019.988156940" observedRunningTime="2025-11-21 09:57:55.441992365 +0000 UTC m=+1020.551134873" watchObservedRunningTime="2025-11-21 09:57:55.445149089 +0000 UTC m=+1020.554291597" Nov 21 09:57:55 crc kubenswrapper[4972]: I1121 09:57:55.461584 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-dcc4dbb97-cc8w9" podStartSLOduration=1.18736169 podStartE2EDuration="5.461563792s" podCreationTimestamp="2025-11-21 09:57:50 +0000 UTC" firstStartedPulling="2025-11-21 09:57:50.624786497 +0000 UTC m=+1015.733928995" lastFinishedPulling="2025-11-21 09:57:54.898988599 +0000 UTC m=+1020.008131097" observedRunningTime="2025-11-21 09:57:55.458655226 +0000 UTC m=+1020.567797724" watchObservedRunningTime="2025-11-21 09:57:55.461563792 +0000 UTC m=+1020.570706290" Nov 21 09:57:56 crc kubenswrapper[4972]: I1121 09:57:56.178978 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 09:57:56 crc kubenswrapper[4972]: I1121 09:57:56.179475 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 09:57:56 crc kubenswrapper[4972]: I1121 09:57:56.179572 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 09:57:56 crc kubenswrapper[4972]: I1121 09:57:56.180544 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"918ebedd08e1b9dafe5e4f67da03ac43cd2232ecc2da24b0d75e1131226f344f"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 09:57:56 crc kubenswrapper[4972]: I1121 09:57:56.180655 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://918ebedd08e1b9dafe5e4f67da03ac43cd2232ecc2da24b0d75e1131226f344f" gracePeriod=600 Nov 21 09:57:56 crc kubenswrapper[4972]: I1121 09:57:56.432896 4972 generic.go:334] "Generic (PLEG): container 
finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="918ebedd08e1b9dafe5e4f67da03ac43cd2232ecc2da24b0d75e1131226f344f" exitCode=0 Nov 21 09:57:56 crc kubenswrapper[4972]: I1121 09:57:56.433125 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"918ebedd08e1b9dafe5e4f67da03ac43cd2232ecc2da24b0d75e1131226f344f"} Nov 21 09:57:56 crc kubenswrapper[4972]: I1121 09:57:56.433490 4972 scope.go:117] "RemoveContainer" containerID="291ebe608526f7ac9a64156ae1087bae54f85cc7ee3e395ff3ac3ef42a7a5a21" Nov 21 09:57:56 crc kubenswrapper[4972]: I1121 09:57:56.434209 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-55cdd8d9bf-nk57h" Nov 21 09:57:57 crc kubenswrapper[4972]: I1121 09:57:57.444790 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"b5f6ea95f3d9b88cf1528773dedbad651b22ffa03b2cdc9849fa7c5b9b96c05e"} Nov 21 09:58:10 crc kubenswrapper[4972]: I1121 09:58:10.415868 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-dcc4dbb97-cc8w9" Nov 21 09:58:29 crc kubenswrapper[4972]: I1121 09:58:29.952065 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-55cdd8d9bf-nk57h" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.731616 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-q5hqx"] Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.733992 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.738406 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.739790 4972 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.742621 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-c8llb"] Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.743383 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-c8llb" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.744726 4972 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-tzztd" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.746077 4972 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.766303 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-c8llb"] Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.793972 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-metrics\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.794071 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-frr-sockets\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.794101 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2h25\" (UniqueName: \"kubernetes.io/projected/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-kube-api-access-w2h25\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.794119 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-metrics-certs\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.794137 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-frr-startup\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.794169 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-reloader\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.794228 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89xx8\" (UniqueName: \"kubernetes.io/projected/e74efb92-2741-41d9-a2aa-01e53dc1492c-kube-api-access-89xx8\") pod \"frr-k8s-webhook-server-6998585d5-c8llb\" (UID: \"e74efb92-2741-41d9-a2aa-01e53dc1492c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-c8llb" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.794247 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-frr-conf\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.794262 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e74efb92-2741-41d9-a2aa-01e53dc1492c-cert\") pod \"frr-k8s-webhook-server-6998585d5-c8llb\" (UID: \"e74efb92-2741-41d9-a2aa-01e53dc1492c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-c8llb" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.828677 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-n54ks"] Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.829627 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-n54ks" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.834413 4972 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.834688 4972 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-qrpd6" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.835010 4972 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.837177 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.865209 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-gvl4b"] Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.866014 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-gvl4b" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.870487 4972 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.882172 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-gvl4b"] Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.894984 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1d3b3072-9fd2-451d-83e1-4a7962179659-metallb-excludel2\") pod \"speaker-n54ks\" (UID: \"1d3b3072-9fd2-451d-83e1-4a7962179659\") " pod="metallb-system/speaker-n54ks" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.895056 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-frr-sockets\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.895091 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2h25\" (UniqueName: \"kubernetes.io/projected/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-kube-api-access-w2h25\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.895117 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-metrics-certs\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.895141 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1d3b3072-9fd2-451d-83e1-4a7962179659-memberlist\") pod \"speaker-n54ks\" (UID: \"1d3b3072-9fd2-451d-83e1-4a7962179659\") " pod="metallb-system/speaker-n54ks" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.895164 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-frr-startup\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.895194 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-reloader\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.895224 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d3b3072-9fd2-451d-83e1-4a7962179659-metrics-certs\") pod \"speaker-n54ks\" (UID: \"1d3b3072-9fd2-451d-83e1-4a7962179659\") " pod="metallb-system/speaker-n54ks" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.895262 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwfrj\" 
(UniqueName: \"kubernetes.io/projected/1d3b3072-9fd2-451d-83e1-4a7962179659-kube-api-access-lwfrj\") pod \"speaker-n54ks\" (UID: \"1d3b3072-9fd2-451d-83e1-4a7962179659\") " pod="metallb-system/speaker-n54ks" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.895291 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89xx8\" (UniqueName: \"kubernetes.io/projected/e74efb92-2741-41d9-a2aa-01e53dc1492c-kube-api-access-89xx8\") pod \"frr-k8s-webhook-server-6998585d5-c8llb\" (UID: \"e74efb92-2741-41d9-a2aa-01e53dc1492c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-c8llb" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.895316 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-frr-conf\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.895337 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e74efb92-2741-41d9-a2aa-01e53dc1492c-cert\") pod \"frr-k8s-webhook-server-6998585d5-c8llb\" (UID: \"e74efb92-2741-41d9-a2aa-01e53dc1492c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-c8llb" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.895365 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-metrics\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.895441 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-frr-sockets\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.895668 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-metrics\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.895763 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-reloader\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:30 crc kubenswrapper[4972]: E1121 09:58:30.895863 4972 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Nov 21 09:58:30 crc kubenswrapper[4972]: E1121 09:58:30.895912 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-metrics-certs podName:7a4cb27c-a1d4-49dd-935d-8ee648d8349f nodeName:}" failed. No retries permitted until 2025-11-21 09:58:31.3958947 +0000 UTC m=+1056.505037208 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-metrics-certs") pod "frr-k8s-q5hqx" (UID: "7a4cb27c-a1d4-49dd-935d-8ee648d8349f") : secret "frr-k8s-certs-secret" not found Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.896174 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-frr-conf\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.896218 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-frr-startup\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:30 crc kubenswrapper[4972]: E1121 09:58:30.896387 4972 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Nov 21 09:58:30 crc kubenswrapper[4972]: E1121 09:58:30.896542 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e74efb92-2741-41d9-a2aa-01e53dc1492c-cert podName:e74efb92-2741-41d9-a2aa-01e53dc1492c nodeName:}" failed. No retries permitted until 2025-11-21 09:58:31.396517217 +0000 UTC m=+1056.505659795 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e74efb92-2741-41d9-a2aa-01e53dc1492c-cert") pod "frr-k8s-webhook-server-6998585d5-c8llb" (UID: "e74efb92-2741-41d9-a2aa-01e53dc1492c") : secret "frr-k8s-webhook-server-cert" not found Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.920558 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89xx8\" (UniqueName: \"kubernetes.io/projected/e74efb92-2741-41d9-a2aa-01e53dc1492c-kube-api-access-89xx8\") pod \"frr-k8s-webhook-server-6998585d5-c8llb\" (UID: \"e74efb92-2741-41d9-a2aa-01e53dc1492c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-c8llb" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.926374 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2h25\" (UniqueName: \"kubernetes.io/projected/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-kube-api-access-w2h25\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.996383 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d3b3072-9fd2-451d-83e1-4a7962179659-metrics-certs\") pod \"speaker-n54ks\" (UID: \"1d3b3072-9fd2-451d-83e1-4a7962179659\") " pod="metallb-system/speaker-n54ks" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.997245 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwfrj\" (UniqueName: \"kubernetes.io/projected/1d3b3072-9fd2-451d-83e1-4a7962179659-kube-api-access-lwfrj\") pod \"speaker-n54ks\" (UID: \"1d3b3072-9fd2-451d-83e1-4a7962179659\") " pod="metallb-system/speaker-n54ks" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.997747 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/515ef80a-c079-44e2-ba9a-cef67b0a5965-cert\") pod \"controller-6c7b4b5f48-gvl4b\" (UID: \"515ef80a-c079-44e2-ba9a-cef67b0a5965\") " pod="metallb-system/controller-6c7b4b5f48-gvl4b" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.997909 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1d3b3072-9fd2-451d-83e1-4a7962179659-metallb-excludel2\") pod \"speaker-n54ks\" (UID: \"1d3b3072-9fd2-451d-83e1-4a7962179659\") " pod="metallb-system/speaker-n54ks" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.998562 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsxjc\" (UniqueName: \"kubernetes.io/projected/515ef80a-c079-44e2-ba9a-cef67b0a5965-kube-api-access-fsxjc\") pod \"controller-6c7b4b5f48-gvl4b\" (UID: \"515ef80a-c079-44e2-ba9a-cef67b0a5965\") " pod="metallb-system/controller-6c7b4b5f48-gvl4b" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.998683 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/515ef80a-c079-44e2-ba9a-cef67b0a5965-metrics-certs\") pod \"controller-6c7b4b5f48-gvl4b\" (UID: \"515ef80a-c079-44e2-ba9a-cef67b0a5965\") " pod="metallb-system/controller-6c7b4b5f48-gvl4b" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.998509 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1d3b3072-9fd2-451d-83e1-4a7962179659-metallb-excludel2\") pod \"speaker-n54ks\" (UID: \"1d3b3072-9fd2-451d-83e1-4a7962179659\") " pod="metallb-system/speaker-n54ks" Nov 21 09:58:30 crc kubenswrapper[4972]: I1121 09:58:30.998902 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1d3b3072-9fd2-451d-83e1-4a7962179659-memberlist\") pod \"speaker-n54ks\" (UID: \"1d3b3072-9fd2-451d-83e1-4a7962179659\") " pod="metallb-system/speaker-n54ks" Nov 21 09:58:30 crc kubenswrapper[4972]: E1121 09:58:30.999049 4972 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 21 09:58:30 crc kubenswrapper[4972]: E1121 09:58:30.999203 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d3b3072-9fd2-451d-83e1-4a7962179659-memberlist podName:1d3b3072-9fd2-451d-83e1-4a7962179659 nodeName:}" failed. No retries permitted until 2025-11-21 09:58:31.499191908 +0000 UTC m=+1056.608334406 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1d3b3072-9fd2-451d-83e1-4a7962179659-memberlist") pod "speaker-n54ks" (UID: "1d3b3072-9fd2-451d-83e1-4a7962179659") : secret "metallb-memberlist" not found Nov 21 09:58:31 crc kubenswrapper[4972]: I1121 09:58:31.000628 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d3b3072-9fd2-451d-83e1-4a7962179659-metrics-certs\") pod \"speaker-n54ks\" (UID: \"1d3b3072-9fd2-451d-83e1-4a7962179659\") " pod="metallb-system/speaker-n54ks" Nov 21 09:58:31 crc kubenswrapper[4972]: I1121 09:58:31.015081 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwfrj\" (UniqueName: \"kubernetes.io/projected/1d3b3072-9fd2-451d-83e1-4a7962179659-kube-api-access-lwfrj\") pod \"speaker-n54ks\" (UID: \"1d3b3072-9fd2-451d-83e1-4a7962179659\") " pod="metallb-system/speaker-n54ks" Nov 21 09:58:31 crc kubenswrapper[4972]: I1121 09:58:31.099954 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/515ef80a-c079-44e2-ba9a-cef67b0a5965-cert\") pod \"controller-6c7b4b5f48-gvl4b\" (UID: \"515ef80a-c079-44e2-ba9a-cef67b0a5965\") " pod="metallb-system/controller-6c7b4b5f48-gvl4b" Nov 21 09:58:31 crc kubenswrapper[4972]: I1121 09:58:31.100557 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsxjc\" (UniqueName: \"kubernetes.io/projected/515ef80a-c079-44e2-ba9a-cef67b0a5965-kube-api-access-fsxjc\") pod \"controller-6c7b4b5f48-gvl4b\" (UID: \"515ef80a-c079-44e2-ba9a-cef67b0a5965\") " pod="metallb-system/controller-6c7b4b5f48-gvl4b" Nov 21 09:58:31 crc kubenswrapper[4972]: I1121 09:58:31.100928 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/515ef80a-c079-44e2-ba9a-cef67b0a5965-metrics-certs\") pod \"controller-6c7b4b5f48-gvl4b\" (UID: \"515ef80a-c079-44e2-ba9a-cef67b0a5965\") " pod="metallb-system/controller-6c7b4b5f48-gvl4b" Nov 21 09:58:31 crc kubenswrapper[4972]: I1121 09:58:31.101546 4972 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 21 09:58:31 crc kubenswrapper[4972]: I1121 09:58:31.104443 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/515ef80a-c079-44e2-ba9a-cef67b0a5965-metrics-certs\") pod \"controller-6c7b4b5f48-gvl4b\" (UID: \"515ef80a-c079-44e2-ba9a-cef67b0a5965\") " pod="metallb-system/controller-6c7b4b5f48-gvl4b" Nov 21 09:58:31 crc kubenswrapper[4972]: I1121 09:58:31.114474 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/515ef80a-c079-44e2-ba9a-cef67b0a5965-cert\") pod \"controller-6c7b4b5f48-gvl4b\" (UID: \"515ef80a-c079-44e2-ba9a-cef67b0a5965\") " pod="metallb-system/controller-6c7b4b5f48-gvl4b" Nov 21 09:58:31 crc kubenswrapper[4972]: I1121 09:58:31.121500 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsxjc\" (UniqueName: \"kubernetes.io/projected/515ef80a-c079-44e2-ba9a-cef67b0a5965-kube-api-access-fsxjc\") pod \"controller-6c7b4b5f48-gvl4b\" (UID: \"515ef80a-c079-44e2-ba9a-cef67b0a5965\") " pod="metallb-system/controller-6c7b4b5f48-gvl4b" Nov 21 09:58:31 crc kubenswrapper[4972]: I1121 09:58:31.179603 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-gvl4b" Nov 21 09:58:31 crc kubenswrapper[4972]: I1121 09:58:31.406179 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-metrics-certs\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:31 crc kubenswrapper[4972]: I1121 09:58:31.406622 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e74efb92-2741-41d9-a2aa-01e53dc1492c-cert\") pod \"frr-k8s-webhook-server-6998585d5-c8llb\" (UID: \"e74efb92-2741-41d9-a2aa-01e53dc1492c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-c8llb" Nov 21 09:58:31 crc kubenswrapper[4972]: I1121 09:58:31.410765 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e74efb92-2741-41d9-a2aa-01e53dc1492c-cert\") pod \"frr-k8s-webhook-server-6998585d5-c8llb\" (UID: \"e74efb92-2741-41d9-a2aa-01e53dc1492c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-c8llb" Nov 21 09:58:31 crc kubenswrapper[4972]: I1121 09:58:31.411241 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a4cb27c-a1d4-49dd-935d-8ee648d8349f-metrics-certs\") pod \"frr-k8s-q5hqx\" (UID: \"7a4cb27c-a1d4-49dd-935d-8ee648d8349f\") " pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:31 crc kubenswrapper[4972]: I1121 09:58:31.447722 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-gvl4b"] Nov 21 09:58:31 crc kubenswrapper[4972]: I1121 09:58:31.507463 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1d3b3072-9fd2-451d-83e1-4a7962179659-memberlist\") pod \"speaker-n54ks\" (UID: \"1d3b3072-9fd2-451d-83e1-4a7962179659\") " pod="metallb-system/speaker-n54ks" Nov 21 09:58:31 crc kubenswrapper[4972]: E1121 09:58:31.507616 4972 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 21 09:58:31 crc kubenswrapper[4972]: E1121 09:58:31.507661 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d3b3072-9fd2-451d-83e1-4a7962179659-memberlist podName:1d3b3072-9fd2-451d-83e1-4a7962179659 nodeName:}" failed. No retries permitted until 2025-11-21 09:58:32.507648222 +0000 UTC m=+1057.616790720 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1d3b3072-9fd2-451d-83e1-4a7962179659-memberlist") pod "speaker-n54ks" (UID: "1d3b3072-9fd2-451d-83e1-4a7962179659") : secret "metallb-memberlist" not found Nov 21 09:58:31 crc kubenswrapper[4972]: I1121 09:58:31.644605 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-gvl4b" event={"ID":"515ef80a-c079-44e2-ba9a-cef67b0a5965","Type":"ContainerStarted","Data":"fa3793cede49bfb2f70213e9a69ba3fc8b284cd62c71dd4820a3a1435df568c9"} Nov 21 09:58:31 crc kubenswrapper[4972]: I1121 09:58:31.644673 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-gvl4b" event={"ID":"515ef80a-c079-44e2-ba9a-cef67b0a5965","Type":"ContainerStarted","Data":"088b8379b39d44569afa11649081da674d489bdd06fe5f34f45948fb1d3b069f"} Nov 21 09:58:31 crc kubenswrapper[4972]: I1121 09:58:31.652096 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:31 crc kubenswrapper[4972]: I1121 09:58:31.658933 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-c8llb" Nov 21 09:58:31 crc kubenswrapper[4972]: I1121 09:58:31.907863 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-c8llb"] Nov 21 09:58:31 crc kubenswrapper[4972]: W1121 09:58:31.911509 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode74efb92_2741_41d9_a2aa_01e53dc1492c.slice/crio-4800a3b8872be230d1a1fcf8be2d80962191b253e85d76a1c4b096f7ddba9d10 WatchSource:0}: Error finding container 4800a3b8872be230d1a1fcf8be2d80962191b253e85d76a1c4b096f7ddba9d10: Status 404 returned error can't find the container with id 4800a3b8872be230d1a1fcf8be2d80962191b253e85d76a1c4b096f7ddba9d10 Nov 21 09:58:32 crc kubenswrapper[4972]: I1121 09:58:32.522569 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1d3b3072-9fd2-451d-83e1-4a7962179659-memberlist\") pod \"speaker-n54ks\" (UID: \"1d3b3072-9fd2-451d-83e1-4a7962179659\") " pod="metallb-system/speaker-n54ks" Nov 21 09:58:32 crc kubenswrapper[4972]: I1121 09:58:32.536522 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1d3b3072-9fd2-451d-83e1-4a7962179659-memberlist\") pod \"speaker-n54ks\" (UID: \"1d3b3072-9fd2-451d-83e1-4a7962179659\") " pod="metallb-system/speaker-n54ks" Nov 21 09:58:32 crc kubenswrapper[4972]: I1121 09:58:32.646031 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-n54ks" Nov 21 09:58:32 crc kubenswrapper[4972]: I1121 09:58:32.652723 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-gvl4b" event={"ID":"515ef80a-c079-44e2-ba9a-cef67b0a5965","Type":"ContainerStarted","Data":"c27c8f835d53eda370ccf65ef3ef33768b6cb422fb783c3674953b6bb21989c7"} Nov 21 09:58:32 crc kubenswrapper[4972]: I1121 09:58:32.652895 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-gvl4b" Nov 21 09:58:32 crc kubenswrapper[4972]: I1121 09:58:32.654133 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-c8llb" event={"ID":"e74efb92-2741-41d9-a2aa-01e53dc1492c","Type":"ContainerStarted","Data":"4800a3b8872be230d1a1fcf8be2d80962191b253e85d76a1c4b096f7ddba9d10"} Nov 21 09:58:32 crc kubenswrapper[4972]: I1121 09:58:32.657461 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q5hqx" event={"ID":"7a4cb27c-a1d4-49dd-935d-8ee648d8349f","Type":"ContainerStarted","Data":"bb4c26d455db174d0c9b3c6f669be3df614c56e2e11c5cbd1a799490f4d90526"} Nov 21 09:58:32 crc kubenswrapper[4972]: I1121 09:58:32.674634 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-gvl4b" podStartSLOduration=2.674616354 podStartE2EDuration="2.674616354s" podCreationTimestamp="2025-11-21 09:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:58:32.671596964 +0000 UTC m=+1057.780739482" watchObservedRunningTime="2025-11-21 09:58:32.674616354 +0000 UTC m=+1057.783758852" Nov 21 09:58:33 crc kubenswrapper[4972]: I1121 09:58:33.666809 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-n54ks" event={"ID":"1d3b3072-9fd2-451d-83e1-4a7962179659","Type":"ContainerStarted","Data":"a088093de120fbf3d0fff3efd71ed1e3947de11d31c4312ce3ed0cb271b78fc6"} Nov 21 09:58:33 crc kubenswrapper[4972]: I1121 09:58:33.667204 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-n54ks" event={"ID":"1d3b3072-9fd2-451d-83e1-4a7962179659","Type":"ContainerStarted","Data":"3f96ebdf69ea240ffa4d517b50ba845712bd5643d01929262d07e2362be40ba5"} Nov 21 09:58:34 crc kubenswrapper[4972]: I1121 09:58:34.675373 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-n54ks" event={"ID":"1d3b3072-9fd2-451d-83e1-4a7962179659","Type":"ContainerStarted","Data":"f171520d2f22df460627ca325f5412668e14ad2122de7645c6f24e7a4e7a3fd9"} Nov 21 09:58:34 crc kubenswrapper[4972]: I1121 09:58:34.676400 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-n54ks" Nov 21 09:58:34 crc kubenswrapper[4972]: I1121 09:58:34.693982 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-n54ks" podStartSLOduration=4.69396205 podStartE2EDuration="4.69396205s" podCreationTimestamp="2025-11-21 09:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:58:34.691787233 +0000 UTC m=+1059.800929751" watchObservedRunningTime="2025-11-21 09:58:34.69396205 +0000 UTC m=+1059.803104548" Nov 21 09:58:39 crc kubenswrapper[4972]: I1121 09:58:39.707183 4972 generic.go:334] "Generic (PLEG): container finished" 
podID="7a4cb27c-a1d4-49dd-935d-8ee648d8349f" containerID="beecc3971df89265c1541cc6a2f3e2885617bb1390e4584b8a5e8ee7f50059c7" exitCode=0 Nov 21 09:58:39 crc kubenswrapper[4972]: I1121 09:58:39.707249 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q5hqx" event={"ID":"7a4cb27c-a1d4-49dd-935d-8ee648d8349f","Type":"ContainerDied","Data":"beecc3971df89265c1541cc6a2f3e2885617bb1390e4584b8a5e8ee7f50059c7"} Nov 21 09:58:39 crc kubenswrapper[4972]: I1121 09:58:39.711142 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-c8llb" event={"ID":"e74efb92-2741-41d9-a2aa-01e53dc1492c","Type":"ContainerStarted","Data":"6e10009b40b6f653df64c3b643aa4393ab309d77f1aebed3243b027559675eda"} Nov 21 09:58:39 crc kubenswrapper[4972]: I1121 09:58:39.711436 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-c8llb" Nov 21 09:58:39 crc kubenswrapper[4972]: I1121 09:58:39.778664 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-c8llb" podStartSLOduration=2.224623952 podStartE2EDuration="9.778636982s" podCreationTimestamp="2025-11-21 09:58:30 +0000 UTC" firstStartedPulling="2025-11-21 09:58:31.914123984 +0000 UTC m=+1057.023266482" lastFinishedPulling="2025-11-21 09:58:39.468137004 +0000 UTC m=+1064.577279512" observedRunningTime="2025-11-21 09:58:39.767312013 +0000 UTC m=+1064.876454511" watchObservedRunningTime="2025-11-21 09:58:39.778636982 +0000 UTC m=+1064.887779500" Nov 21 09:58:40 crc kubenswrapper[4972]: I1121 09:58:40.722392 4972 generic.go:334] "Generic (PLEG): container finished" podID="7a4cb27c-a1d4-49dd-935d-8ee648d8349f" containerID="115431849b45f3bffa7d7990d9813b3e7f2ec39b4c6a3797a67b4a089f585def" exitCode=0 Nov 21 09:58:40 crc kubenswrapper[4972]: I1121 09:58:40.723984 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q5hqx" event={"ID":"7a4cb27c-a1d4-49dd-935d-8ee648d8349f","Type":"ContainerDied","Data":"115431849b45f3bffa7d7990d9813b3e7f2ec39b4c6a3797a67b4a089f585def"} Nov 21 09:58:41 crc kubenswrapper[4972]: I1121 09:58:41.184169 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-gvl4b" Nov 21 09:58:41 crc kubenswrapper[4972]: I1121 09:58:41.736484 4972 generic.go:334] "Generic (PLEG): container finished" podID="7a4cb27c-a1d4-49dd-935d-8ee648d8349f" containerID="e783ed4c61a105c45f620c9733742c5e2ec0efd02839f1dea96814b1cc90bd26" exitCode=0 Nov 21 09:58:41 crc kubenswrapper[4972]: I1121 09:58:41.736549 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q5hqx" event={"ID":"7a4cb27c-a1d4-49dd-935d-8ee648d8349f","Type":"ContainerDied","Data":"e783ed4c61a105c45f620c9733742c5e2ec0efd02839f1dea96814b1cc90bd26"} Nov 21 09:58:42 crc kubenswrapper[4972]: I1121 09:58:42.747148 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q5hqx" event={"ID":"7a4cb27c-a1d4-49dd-935d-8ee648d8349f","Type":"ContainerStarted","Data":"dffbb8c6de0e319e226388d6907ec5591aeaef4e267505406b6b8ca88c0a48a9"} Nov 21 09:58:42 crc kubenswrapper[4972]: I1121 09:58:42.747521 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q5hqx" event={"ID":"7a4cb27c-a1d4-49dd-935d-8ee648d8349f","Type":"ContainerStarted","Data":"43ae8c1c9d73f3b9475d96847d602b5f7e12207c25ddf52a3f64ab5124d50139"} Nov 21 09:58:42 crc kubenswrapper[4972]: 
I1121 09:58:42.747546 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q5hqx" event={"ID":"7a4cb27c-a1d4-49dd-935d-8ee648d8349f","Type":"ContainerStarted","Data":"11ea04e250f2f64a10068c5058ad9636efcab885a4738c5bfacde05ffb9bc1c4"} Nov 21 09:58:42 crc kubenswrapper[4972]: I1121 09:58:42.747564 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q5hqx" event={"ID":"7a4cb27c-a1d4-49dd-935d-8ee648d8349f","Type":"ContainerStarted","Data":"7ce6a9523c8901bac3f198a3ecb9fbe4552c70e15311abfa1db2819eb7528f12"} Nov 21 09:58:42 crc kubenswrapper[4972]: I1121 09:58:42.747582 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q5hqx" event={"ID":"7a4cb27c-a1d4-49dd-935d-8ee648d8349f","Type":"ContainerStarted","Data":"195590b1e463ffc9d60ea69c8a4b1e373b961470df292a133654b0f9c64380e0"} Nov 21 09:58:43 crc kubenswrapper[4972]: I1121 09:58:43.770447 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-q5hqx" event={"ID":"7a4cb27c-a1d4-49dd-935d-8ee648d8349f","Type":"ContainerStarted","Data":"b8fb11d4928d2dc00bbc131df54b0377a87fe7fe5066304f76a5b5f87838d155"} Nov 21 09:58:43 crc kubenswrapper[4972]: I1121 09:58:43.790347 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-q5hqx" podStartSLOduration=6.119620184 podStartE2EDuration="13.790318784s" podCreationTimestamp="2025-11-21 09:58:30 +0000 UTC" firstStartedPulling="2025-11-21 09:58:31.824007925 +0000 UTC m=+1056.933150423" lastFinishedPulling="2025-11-21 09:58:39.494706525 +0000 UTC m=+1064.603849023" observedRunningTime="2025-11-21 09:58:43.788203008 +0000 UTC m=+1068.897345526" watchObservedRunningTime="2025-11-21 09:58:43.790318784 +0000 UTC m=+1068.899461292" Nov 21 09:58:44 crc kubenswrapper[4972]: I1121 09:58:44.767636 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:46 crc kubenswrapper[4972]: I1121 09:58:46.652785 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:46 crc kubenswrapper[4972]: I1121 09:58:46.688658 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:51 crc kubenswrapper[4972]: I1121 09:58:51.655307 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-q5hqx" Nov 21 09:58:51 crc kubenswrapper[4972]: I1121 09:58:51.664011 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-c8llb" Nov 21 09:58:52 crc kubenswrapper[4972]: I1121 09:58:52.650583 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-n54ks" Nov 21 09:58:54 crc kubenswrapper[4972]: I1121 09:58:54.168730 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg"] Nov 21 09:58:54 crc kubenswrapper[4972]: I1121 09:58:54.170230 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg" Nov 21 09:58:54 crc kubenswrapper[4972]: I1121 09:58:54.172002 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 21 09:58:54 crc kubenswrapper[4972]: I1121 09:58:54.184494 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg"] Nov 21 09:58:54 crc kubenswrapper[4972]: I1121 09:58:54.345506 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/58bf36c5-f150-4320-ba2f-7b728bd1cc43-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg\" (UID: \"58bf36c5-f150-4320-ba2f-7b728bd1cc43\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg" Nov 21 09:58:54 crc kubenswrapper[4972]: I1121 09:58:54.345560 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq2cz\" (UniqueName: \"kubernetes.io/projected/58bf36c5-f150-4320-ba2f-7b728bd1cc43-kube-api-access-lq2cz\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg\" (UID: \"58bf36c5-f150-4320-ba2f-7b728bd1cc43\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg" Nov 21 09:58:54 crc kubenswrapper[4972]: I1121 09:58:54.345585 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/58bf36c5-f150-4320-ba2f-7b728bd1cc43-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg\" (UID: \"58bf36c5-f150-4320-ba2f-7b728bd1cc43\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg" Nov 21 09:58:54 crc kubenswrapper[4972]: I1121 09:58:54.447673 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/58bf36c5-f150-4320-ba2f-7b728bd1cc43-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg\" (UID: \"58bf36c5-f150-4320-ba2f-7b728bd1cc43\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg" Nov 21 09:58:54 crc kubenswrapper[4972]: I1121 09:58:54.447741 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq2cz\" (UniqueName: \"kubernetes.io/projected/58bf36c5-f150-4320-ba2f-7b728bd1cc43-kube-api-access-lq2cz\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg\" (UID: \"58bf36c5-f150-4320-ba2f-7b728bd1cc43\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg" Nov 21 09:58:54 crc kubenswrapper[4972]: I1121 09:58:54.447784 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/58bf36c5-f150-4320-ba2f-7b728bd1cc43-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg\" (UID: \"58bf36c5-f150-4320-ba2f-7b728bd1cc43\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg" Nov 21 09:58:54 crc kubenswrapper[4972]: I1121 09:58:54.448346 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/58bf36c5-f150-4320-ba2f-7b728bd1cc43-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg\" (UID: \"58bf36c5-f150-4320-ba2f-7b728bd1cc43\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg" Nov 21 09:58:54 crc kubenswrapper[4972]: I1121 09:58:54.448507 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/58bf36c5-f150-4320-ba2f-7b728bd1cc43-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg\" (UID: \"58bf36c5-f150-4320-ba2f-7b728bd1cc43\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg" Nov 21 09:58:54 crc kubenswrapper[4972]: I1121 09:58:54.472671 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq2cz\" (UniqueName: \"kubernetes.io/projected/58bf36c5-f150-4320-ba2f-7b728bd1cc43-kube-api-access-lq2cz\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg\" (UID: \"58bf36c5-f150-4320-ba2f-7b728bd1cc43\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg" Nov 21 09:58:54 crc kubenswrapper[4972]: I1121 09:58:54.490599 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg" Nov 21 09:58:54 crc kubenswrapper[4972]: I1121 09:58:54.933732 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg"] Nov 21 09:58:54 crc kubenswrapper[4972]: W1121 09:58:54.938472 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58bf36c5_f150_4320_ba2f_7b728bd1cc43.slice/crio-8b647e9475685ea2fa458c1db6761c5b9f3fbab5f25aba907a5817037252bc42 WatchSource:0}: Error finding container 8b647e9475685ea2fa458c1db6761c5b9f3fbab5f25aba907a5817037252bc42: Status 404 returned error can't find the container with id 8b647e9475685ea2fa458c1db6761c5b9f3fbab5f25aba907a5817037252bc42 Nov 21 09:58:55 crc kubenswrapper[4972]: I1121 09:58:55.852848 4972 generic.go:334] "Generic (PLEG): container finished" podID="58bf36c5-f150-4320-ba2f-7b728bd1cc43" containerID="b17c144ea1465aa87403062d150aa9e2ea18ab1ad66f40f44a01592ef1573afd" exitCode=0 Nov 21 09:58:55 crc kubenswrapper[4972]: I1121 09:58:55.853115 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg" event={"ID":"58bf36c5-f150-4320-ba2f-7b728bd1cc43","Type":"ContainerDied","Data":"b17c144ea1465aa87403062d150aa9e2ea18ab1ad66f40f44a01592ef1573afd"} Nov 21 09:58:55 crc kubenswrapper[4972]: I1121 09:58:55.853324 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg" event={"ID":"58bf36c5-f150-4320-ba2f-7b728bd1cc43","Type":"ContainerStarted","Data":"8b647e9475685ea2fa458c1db6761c5b9f3fbab5f25aba907a5817037252bc42"} Nov 21 09:58:58 crc kubenswrapper[4972]: I1121 09:58:58.875437 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg" event={"ID":"58bf36c5-f150-4320-ba2f-7b728bd1cc43","Type":"ContainerStarted","Data":"5b271f2ba0ad615a37d02f689b73e5d0937363ed9f8159add45a2d7704c31cb0"} Nov 21 09:58:59 crc 
kubenswrapper[4972]: I1121 09:58:59.882155 4972 generic.go:334] "Generic (PLEG): container finished" podID="58bf36c5-f150-4320-ba2f-7b728bd1cc43" containerID="5b271f2ba0ad615a37d02f689b73e5d0937363ed9f8159add45a2d7704c31cb0" exitCode=0 Nov 21 09:58:59 crc kubenswrapper[4972]: I1121 09:58:59.882201 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg" event={"ID":"58bf36c5-f150-4320-ba2f-7b728bd1cc43","Type":"ContainerDied","Data":"5b271f2ba0ad615a37d02f689b73e5d0937363ed9f8159add45a2d7704c31cb0"} Nov 21 09:59:00 crc kubenswrapper[4972]: I1121 09:59:00.891107 4972 generic.go:334] "Generic (PLEG): container finished" podID="58bf36c5-f150-4320-ba2f-7b728bd1cc43" containerID="f5bd4ef26d1dd2997794e662040c8695a8fa63d9378de83fe17b1f998259b609" exitCode=0 Nov 21 09:59:00 crc kubenswrapper[4972]: I1121 09:59:00.891155 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg" event={"ID":"58bf36c5-f150-4320-ba2f-7b728bd1cc43","Type":"ContainerDied","Data":"f5bd4ef26d1dd2997794e662040c8695a8fa63d9378de83fe17b1f998259b609"} Nov 21 09:59:02 crc kubenswrapper[4972]: I1121 09:59:02.266288 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg" Nov 21 09:59:02 crc kubenswrapper[4972]: I1121 09:59:02.373447 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/58bf36c5-f150-4320-ba2f-7b728bd1cc43-bundle\") pod \"58bf36c5-f150-4320-ba2f-7b728bd1cc43\" (UID: \"58bf36c5-f150-4320-ba2f-7b728bd1cc43\") " Nov 21 09:59:02 crc kubenswrapper[4972]: I1121 09:59:02.373522 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/58bf36c5-f150-4320-ba2f-7b728bd1cc43-util\") pod \"58bf36c5-f150-4320-ba2f-7b728bd1cc43\" (UID: \"58bf36c5-f150-4320-ba2f-7b728bd1cc43\") " Nov 21 09:59:02 crc kubenswrapper[4972]: I1121 09:59:02.373551 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq2cz\" (UniqueName: \"kubernetes.io/projected/58bf36c5-f150-4320-ba2f-7b728bd1cc43-kube-api-access-lq2cz\") pod \"58bf36c5-f150-4320-ba2f-7b728bd1cc43\" (UID: \"58bf36c5-f150-4320-ba2f-7b728bd1cc43\") " Nov 21 09:59:02 crc kubenswrapper[4972]: I1121 09:59:02.374672 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58bf36c5-f150-4320-ba2f-7b728bd1cc43-bundle" (OuterVolumeSpecName: "bundle") pod "58bf36c5-f150-4320-ba2f-7b728bd1cc43" (UID: "58bf36c5-f150-4320-ba2f-7b728bd1cc43"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:59:02 crc kubenswrapper[4972]: I1121 09:59:02.374752 4972 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/58bf36c5-f150-4320-ba2f-7b728bd1cc43-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 09:59:02 crc kubenswrapper[4972]: I1121 09:59:02.379618 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58bf36c5-f150-4320-ba2f-7b728bd1cc43-kube-api-access-lq2cz" (OuterVolumeSpecName: "kube-api-access-lq2cz") pod "58bf36c5-f150-4320-ba2f-7b728bd1cc43" (UID: "58bf36c5-f150-4320-ba2f-7b728bd1cc43"). 
InnerVolumeSpecName "kube-api-access-lq2cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:59:02 crc kubenswrapper[4972]: I1121 09:59:02.388208 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58bf36c5-f150-4320-ba2f-7b728bd1cc43-util" (OuterVolumeSpecName: "util") pod "58bf36c5-f150-4320-ba2f-7b728bd1cc43" (UID: "58bf36c5-f150-4320-ba2f-7b728bd1cc43"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 09:59:02 crc kubenswrapper[4972]: I1121 09:59:02.475876 4972 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/58bf36c5-f150-4320-ba2f-7b728bd1cc43-util\") on node \"crc\" DevicePath \"\"" Nov 21 09:59:02 crc kubenswrapper[4972]: I1121 09:59:02.475942 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq2cz\" (UniqueName: \"kubernetes.io/projected/58bf36c5-f150-4320-ba2f-7b728bd1cc43-kube-api-access-lq2cz\") on node \"crc\" DevicePath \"\"" Nov 21 09:59:02 crc kubenswrapper[4972]: I1121 09:59:02.909253 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg" event={"ID":"58bf36c5-f150-4320-ba2f-7b728bd1cc43","Type":"ContainerDied","Data":"8b647e9475685ea2fa458c1db6761c5b9f3fbab5f25aba907a5817037252bc42"} Nov 21 09:59:02 crc kubenswrapper[4972]: I1121 09:59:02.909345 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b647e9475685ea2fa458c1db6761c5b9f3fbab5f25aba907a5817037252bc42" Nov 21 09:59:02 crc kubenswrapper[4972]: I1121 09:59:02.909303 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg" Nov 21 09:59:05 crc kubenswrapper[4972]: I1121 09:59:05.815147 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fwncj"] Nov 21 09:59:05 crc kubenswrapper[4972]: E1121 09:59:05.815847 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58bf36c5-f150-4320-ba2f-7b728bd1cc43" containerName="util" Nov 21 09:59:05 crc kubenswrapper[4972]: I1121 09:59:05.815866 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="58bf36c5-f150-4320-ba2f-7b728bd1cc43" containerName="util" Nov 21 09:59:05 crc kubenswrapper[4972]: E1121 09:59:05.815898 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58bf36c5-f150-4320-ba2f-7b728bd1cc43" containerName="pull" Nov 21 09:59:05 crc kubenswrapper[4972]: I1121 09:59:05.815909 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="58bf36c5-f150-4320-ba2f-7b728bd1cc43" containerName="pull" Nov 21 09:59:05 crc kubenswrapper[4972]: E1121 09:59:05.815927 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58bf36c5-f150-4320-ba2f-7b728bd1cc43" containerName="extract" Nov 21 09:59:05 crc kubenswrapper[4972]: I1121 09:59:05.815938 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="58bf36c5-f150-4320-ba2f-7b728bd1cc43" containerName="extract" Nov 21 09:59:05 crc kubenswrapper[4972]: I1121 09:59:05.816149 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="58bf36c5-f150-4320-ba2f-7b728bd1cc43" containerName="extract" Nov 21 09:59:05 crc kubenswrapper[4972]: I1121 09:59:05.817008 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fwncj" Nov 21 09:59:05 crc kubenswrapper[4972]: I1121 09:59:05.817710 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8e06edb1-e9cf-4b4d-aee9-433205ae53ac-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fwncj\" (UID: \"8e06edb1-e9cf-4b4d-aee9-433205ae53ac\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fwncj" Nov 21 09:59:05 crc kubenswrapper[4972]: I1121 09:59:05.817766 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-942bk\" (UniqueName: \"kubernetes.io/projected/8e06edb1-e9cf-4b4d-aee9-433205ae53ac-kube-api-access-942bk\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fwncj\" (UID: \"8e06edb1-e9cf-4b4d-aee9-433205ae53ac\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fwncj" Nov 21 09:59:05 crc kubenswrapper[4972]: I1121 09:59:05.821672 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Nov 21 09:59:05 crc kubenswrapper[4972]: I1121 09:59:05.821973 4972 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-vbbpx" Nov 21 09:59:05 crc kubenswrapper[4972]: I1121 09:59:05.822819 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Nov 21 09:59:05 crc kubenswrapper[4972]: I1121 09:59:05.833302 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fwncj"] Nov 21 09:59:05 crc kubenswrapper[4972]: I1121 09:59:05.918942 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8e06edb1-e9cf-4b4d-aee9-433205ae53ac-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fwncj\" (UID: \"8e06edb1-e9cf-4b4d-aee9-433205ae53ac\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fwncj" Nov 21 09:59:05 crc kubenswrapper[4972]: I1121 09:59:05.918993 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-942bk\" (UniqueName: \"kubernetes.io/projected/8e06edb1-e9cf-4b4d-aee9-433205ae53ac-kube-api-access-942bk\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fwncj\" (UID: \"8e06edb1-e9cf-4b4d-aee9-433205ae53ac\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fwncj" Nov 21 09:59:05 crc kubenswrapper[4972]: I1121 09:59:05.919806 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8e06edb1-e9cf-4b4d-aee9-433205ae53ac-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fwncj\" (UID: \"8e06edb1-e9cf-4b4d-aee9-433205ae53ac\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fwncj" Nov 21 09:59:05 crc kubenswrapper[4972]: I1121 09:59:05.963683 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-942bk\" (UniqueName: \"kubernetes.io/projected/8e06edb1-e9cf-4b4d-aee9-433205ae53ac-kube-api-access-942bk\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fwncj\" (UID: \"8e06edb1-e9cf-4b4d-aee9-433205ae53ac\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fwncj" Nov 21 09:59:06 crc kubenswrapper[4972]: I1121 09:59:06.138087 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fwncj" Nov 21 09:59:06 crc kubenswrapper[4972]: I1121 09:59:06.615032 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fwncj"] Nov 21 09:59:06 crc kubenswrapper[4972]: W1121 09:59:06.621962 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e06edb1_e9cf_4b4d_aee9_433205ae53ac.slice/crio-ea2f9ab410e9000d9efc657986ac3f2973a451ba02a91e8ab709ae410d001216 WatchSource:0}: Error finding container ea2f9ab410e9000d9efc657986ac3f2973a451ba02a91e8ab709ae410d001216: Status 404 returned error can't find the container with id ea2f9ab410e9000d9efc657986ac3f2973a451ba02a91e8ab709ae410d001216 Nov 21 09:59:06 crc kubenswrapper[4972]: I1121 09:59:06.939033 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fwncj" event={"ID":"8e06edb1-e9cf-4b4d-aee9-433205ae53ac","Type":"ContainerStarted","Data":"ea2f9ab410e9000d9efc657986ac3f2973a451ba02a91e8ab709ae410d001216"} Nov 21 09:59:13 crc kubenswrapper[4972]: I1121 09:59:13.987873 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fwncj" event={"ID":"8e06edb1-e9cf-4b4d-aee9-433205ae53ac","Type":"ContainerStarted","Data":"87de3ec1d2a063d09b5bbe527b20b2bc5f70b9b981fc81411e2fa88d9d260582"} Nov 21 09:59:14 crc kubenswrapper[4972]: I1121 09:59:14.015090 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fwncj" podStartSLOduration=2.697469511 podStartE2EDuration="9.015063465s" podCreationTimestamp="2025-11-21 09:59:05 +0000 UTC" firstStartedPulling="2025-11-21 09:59:06.624634194 +0000 UTC m=+1091.733776682" lastFinishedPulling="2025-11-21 09:59:12.942228138 +0000 UTC m=+1098.051370636" observedRunningTime="2025-11-21 09:59:14.008470841 +0000 UTC m=+1099.117613349" watchObservedRunningTime="2025-11-21 09:59:14.015063465 +0000 UTC m=+1099.124205983" Nov 21 09:59:17 crc kubenswrapper[4972]: I1121 09:59:17.553916 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-25b5p"] Nov 21 09:59:17 crc kubenswrapper[4972]: I1121 09:59:17.555123 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-25b5p" Nov 21 09:59:17 crc kubenswrapper[4972]: I1121 09:59:17.557568 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 21 09:59:17 crc kubenswrapper[4972]: I1121 09:59:17.557894 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 21 09:59:17 crc kubenswrapper[4972]: I1121 09:59:17.557980 4972 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-9hw4w" Nov 21 09:59:17 crc kubenswrapper[4972]: I1121 09:59:17.571443 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-25b5p"] Nov 21 09:59:17 crc kubenswrapper[4972]: I1121 09:59:17.674745 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bc21339d-081f-4b15-b46f-fd322a3c938d-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-25b5p\" (UID: \"bc21339d-081f-4b15-b46f-fd322a3c938d\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-25b5p" Nov 21 09:59:17 crc kubenswrapper[4972]: I1121 09:59:17.675012 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8d75\" (UniqueName: \"kubernetes.io/projected/bc21339d-081f-4b15-b46f-fd322a3c938d-kube-api-access-z8d75\") pod \"cert-manager-webhook-f4fb5df64-25b5p\" (UID: \"bc21339d-081f-4b15-b46f-fd322a3c938d\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-25b5p" Nov 21 09:59:17 crc kubenswrapper[4972]: I1121 09:59:17.776599 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bc21339d-081f-4b15-b46f-fd322a3c938d-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-25b5p\" (UID: \"bc21339d-081f-4b15-b46f-fd322a3c938d\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-25b5p" Nov 21 09:59:17 crc kubenswrapper[4972]: I1121 09:59:17.776671 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8d75\" (UniqueName: \"kubernetes.io/projected/bc21339d-081f-4b15-b46f-fd322a3c938d-kube-api-access-z8d75\") pod \"cert-manager-webhook-f4fb5df64-25b5p\" (UID: \"bc21339d-081f-4b15-b46f-fd322a3c938d\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-25b5p" Nov 21 09:59:17 crc kubenswrapper[4972]: I1121 09:59:17.795446 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bc21339d-081f-4b15-b46f-fd322a3c938d-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-25b5p\" (UID: \"bc21339d-081f-4b15-b46f-fd322a3c938d\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-25b5p" Nov 21 09:59:17 crc kubenswrapper[4972]: I1121 09:59:17.795724 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8d75\" (UniqueName: \"kubernetes.io/projected/bc21339d-081f-4b15-b46f-fd322a3c938d-kube-api-access-z8d75\") pod \"cert-manager-webhook-f4fb5df64-25b5p\" (UID: \"bc21339d-081f-4b15-b46f-fd322a3c938d\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-25b5p" Nov 21 09:59:17 crc kubenswrapper[4972]: I1121 09:59:17.872072 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-25b5p" Nov 21 09:59:17 crc kubenswrapper[4972]: I1121 09:59:17.946504 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-sfwsh"] Nov 21 09:59:17 crc kubenswrapper[4972]: I1121 09:59:17.947451 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-sfwsh" Nov 21 09:59:17 crc kubenswrapper[4972]: I1121 09:59:17.949884 4972 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-ch8lf" Nov 21 09:59:17 crc kubenswrapper[4972]: I1121 09:59:17.953018 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-sfwsh"] Nov 21 09:59:18 crc kubenswrapper[4972]: I1121 09:59:18.080585 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm2t7\" (UniqueName: \"kubernetes.io/projected/dd3b7a0f-6e23-4379-bc20-83d489a6a650-kube-api-access-mm2t7\") pod \"cert-manager-cainjector-855d9ccff4-sfwsh\" (UID: \"dd3b7a0f-6e23-4379-bc20-83d489a6a650\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-sfwsh" Nov 21 09:59:18 crc kubenswrapper[4972]: I1121 09:59:18.081069 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dd3b7a0f-6e23-4379-bc20-83d489a6a650-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-sfwsh\" (UID: \"dd3b7a0f-6e23-4379-bc20-83d489a6a650\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-sfwsh" Nov 21 09:59:18 crc kubenswrapper[4972]: I1121 09:59:18.095197 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-25b5p"] Nov 21 09:59:18 crc kubenswrapper[4972]: I1121 09:59:18.182815 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm2t7\" (UniqueName: \"kubernetes.io/projected/dd3b7a0f-6e23-4379-bc20-83d489a6a650-kube-api-access-mm2t7\") pod \"cert-manager-cainjector-855d9ccff4-sfwsh\" (UID: \"dd3b7a0f-6e23-4379-bc20-83d489a6a650\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-sfwsh" Nov 21 09:59:18 crc kubenswrapper[4972]: I1121 09:59:18.182902 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dd3b7a0f-6e23-4379-bc20-83d489a6a650-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-sfwsh\" (UID: \"dd3b7a0f-6e23-4379-bc20-83d489a6a650\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-sfwsh" Nov 21 09:59:18 crc kubenswrapper[4972]: I1121 09:59:18.201002 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dd3b7a0f-6e23-4379-bc20-83d489a6a650-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-sfwsh\" (UID: \"dd3b7a0f-6e23-4379-bc20-83d489a6a650\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-sfwsh" Nov 21 09:59:18 crc kubenswrapper[4972]: I1121 09:59:18.201101 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm2t7\" (UniqueName: \"kubernetes.io/projected/dd3b7a0f-6e23-4379-bc20-83d489a6a650-kube-api-access-mm2t7\") pod \"cert-manager-cainjector-855d9ccff4-sfwsh\" (UID: \"dd3b7a0f-6e23-4379-bc20-83d489a6a650\") " 
pod="cert-manager/cert-manager-cainjector-855d9ccff4-sfwsh" Nov 21 09:59:18 crc kubenswrapper[4972]: I1121 09:59:18.274538 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-sfwsh" Nov 21 09:59:18 crc kubenswrapper[4972]: I1121 09:59:18.730702 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-sfwsh"] Nov 21 09:59:18 crc kubenswrapper[4972]: W1121 09:59:18.731903 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd3b7a0f_6e23_4379_bc20_83d489a6a650.slice/crio-c584477ad79140d0ee3908ed0ef0fa562563a93afd454fbe6b6f45be3bf64532 WatchSource:0}: Error finding container c584477ad79140d0ee3908ed0ef0fa562563a93afd454fbe6b6f45be3bf64532: Status 404 returned error can't find the container with id c584477ad79140d0ee3908ed0ef0fa562563a93afd454fbe6b6f45be3bf64532 Nov 21 09:59:19 crc kubenswrapper[4972]: I1121 09:59:19.027654 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-25b5p" event={"ID":"bc21339d-081f-4b15-b46f-fd322a3c938d","Type":"ContainerStarted","Data":"d10590a66c4853f9fbb381e4d41269f3a0ef34be71eb6671d535457b9138c1d4"} Nov 21 09:59:19 crc kubenswrapper[4972]: I1121 09:59:19.028883 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-sfwsh" event={"ID":"dd3b7a0f-6e23-4379-bc20-83d489a6a650","Type":"ContainerStarted","Data":"c584477ad79140d0ee3908ed0ef0fa562563a93afd454fbe6b6f45be3bf64532"} Nov 21 09:59:28 crc kubenswrapper[4972]: I1121 09:59:28.655275 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-25b5p" event={"ID":"bc21339d-081f-4b15-b46f-fd322a3c938d","Type":"ContainerStarted","Data":"cf44cc17e5b8f15ecffd33a116241f5f0012e8a9da5aac015165c787dd313014"} Nov 21 09:59:28 crc kubenswrapper[4972]: I1121 09:59:28.656301 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-25b5p" Nov 21 09:59:28 crc kubenswrapper[4972]: I1121 09:59:28.658591 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-sfwsh" event={"ID":"dd3b7a0f-6e23-4379-bc20-83d489a6a650","Type":"ContainerStarted","Data":"fc37b2a5598df62bce2438c8277dd5509170888185c6e9bd4dcd6c7eeafd5a80"} Nov 21 09:59:28 crc kubenswrapper[4972]: I1121 09:59:28.673406 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-25b5p" podStartSLOduration=1.393739992 podStartE2EDuration="11.673382097s" podCreationTimestamp="2025-11-21 09:59:17 +0000 UTC" firstStartedPulling="2025-11-21 09:59:18.107531467 +0000 UTC m=+1103.216673965" lastFinishedPulling="2025-11-21 09:59:28.387173572 +0000 UTC m=+1113.496316070" observedRunningTime="2025-11-21 09:59:28.66898249 +0000 UTC m=+1113.778125008" watchObservedRunningTime="2025-11-21 09:59:28.673382097 +0000 UTC m=+1113.782524595" Nov 21 09:59:28 crc kubenswrapper[4972]: I1121 09:59:28.686724 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-sfwsh" podStartSLOduration=2.000440494 podStartE2EDuration="11.686702732s" podCreationTimestamp="2025-11-21 09:59:17 +0000 UTC" firstStartedPulling="2025-11-21 09:59:18.737729084 +0000 UTC m=+1103.846871612" lastFinishedPulling="2025-11-21 
09:59:28.423991352 +0000 UTC m=+1113.533133850" observedRunningTime="2025-11-21 09:59:28.684669258 +0000 UTC m=+1113.793811766" watchObservedRunningTime="2025-11-21 09:59:28.686702732 +0000 UTC m=+1113.795845240" Nov 21 09:59:35 crc kubenswrapper[4972]: I1121 09:59:35.805983 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-zbmqj"] Nov 21 09:59:35 crc kubenswrapper[4972]: I1121 09:59:35.810323 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-zbmqj" Nov 21 09:59:35 crc kubenswrapper[4972]: I1121 09:59:35.811997 4972 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-hbqqp" Nov 21 09:59:35 crc kubenswrapper[4972]: I1121 09:59:35.813317 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-zbmqj"] Nov 21 09:59:35 crc kubenswrapper[4972]: I1121 09:59:35.839417 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc2d4\" (UniqueName: \"kubernetes.io/projected/619fcbfa-aa5a-4e42-8c95-6bf5ad357cee-kube-api-access-rc2d4\") pod \"cert-manager-86cb77c54b-zbmqj\" (UID: \"619fcbfa-aa5a-4e42-8c95-6bf5ad357cee\") " pod="cert-manager/cert-manager-86cb77c54b-zbmqj" Nov 21 09:59:35 crc kubenswrapper[4972]: I1121 09:59:35.839605 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/619fcbfa-aa5a-4e42-8c95-6bf5ad357cee-bound-sa-token\") pod \"cert-manager-86cb77c54b-zbmqj\" (UID: \"619fcbfa-aa5a-4e42-8c95-6bf5ad357cee\") " pod="cert-manager/cert-manager-86cb77c54b-zbmqj" Nov 21 09:59:35 crc kubenswrapper[4972]: I1121 09:59:35.941760 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/619fcbfa-aa5a-4e42-8c95-6bf5ad357cee-bound-sa-token\") pod \"cert-manager-86cb77c54b-zbmqj\" (UID: \"619fcbfa-aa5a-4e42-8c95-6bf5ad357cee\") " pod="cert-manager/cert-manager-86cb77c54b-zbmqj" Nov 21 09:59:35 crc kubenswrapper[4972]: I1121 09:59:35.942079 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc2d4\" (UniqueName: \"kubernetes.io/projected/619fcbfa-aa5a-4e42-8c95-6bf5ad357cee-kube-api-access-rc2d4\") pod \"cert-manager-86cb77c54b-zbmqj\" (UID: \"619fcbfa-aa5a-4e42-8c95-6bf5ad357cee\") " pod="cert-manager/cert-manager-86cb77c54b-zbmqj" Nov 21 09:59:35 crc kubenswrapper[4972]: I1121 09:59:35.967053 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc2d4\" (UniqueName: \"kubernetes.io/projected/619fcbfa-aa5a-4e42-8c95-6bf5ad357cee-kube-api-access-rc2d4\") pod \"cert-manager-86cb77c54b-zbmqj\" (UID: \"619fcbfa-aa5a-4e42-8c95-6bf5ad357cee\") " pod="cert-manager/cert-manager-86cb77c54b-zbmqj" Nov 21 09:59:35 crc kubenswrapper[4972]: I1121 09:59:35.968541 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/619fcbfa-aa5a-4e42-8c95-6bf5ad357cee-bound-sa-token\") pod \"cert-manager-86cb77c54b-zbmqj\" (UID: \"619fcbfa-aa5a-4e42-8c95-6bf5ad357cee\") " pod="cert-manager/cert-manager-86cb77c54b-zbmqj" Nov 21 09:59:36 crc kubenswrapper[4972]: I1121 09:59:36.136400 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-zbmqj" Nov 21 09:59:36 crc kubenswrapper[4972]: I1121 09:59:36.631273 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-zbmqj"] Nov 21 09:59:36 crc kubenswrapper[4972]: I1121 09:59:36.709951 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-zbmqj" event={"ID":"619fcbfa-aa5a-4e42-8c95-6bf5ad357cee","Type":"ContainerStarted","Data":"db660bd90744fabb5e4cab4893e2f7fc3cddcecc7a03fc77ebee01cd7d4feca1"} Nov 21 09:59:37 crc kubenswrapper[4972]: I1121 09:59:37.721069 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-zbmqj" event={"ID":"619fcbfa-aa5a-4e42-8c95-6bf5ad357cee","Type":"ContainerStarted","Data":"b52664004ef79ef96d0dcb34762ec5f1eab4080038ec60f6af18db192b075fa0"} Nov 21 09:59:37 crc kubenswrapper[4972]: I1121 09:59:37.742510 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-zbmqj" podStartSLOduration=2.742478034 podStartE2EDuration="2.742478034s" podCreationTimestamp="2025-11-21 09:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 09:59:37.739986507 +0000 UTC m=+1122.849129025" watchObservedRunningTime="2025-11-21 09:59:37.742478034 +0000 UTC m=+1122.851620572" Nov 21 09:59:37 crc kubenswrapper[4972]: I1121 09:59:37.875183 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-25b5p" Nov 21 09:59:41 crc kubenswrapper[4972]: I1121 09:59:41.417024 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-d7xb8"] Nov 21 09:59:41 crc kubenswrapper[4972]: I1121 09:59:41.418211 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-d7xb8" Nov 21 09:59:41 crc kubenswrapper[4972]: I1121 09:59:41.420558 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 21 09:59:41 crc kubenswrapper[4972]: I1121 09:59:41.421575 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-7t7qb" Nov 21 09:59:41 crc kubenswrapper[4972]: I1121 09:59:41.427673 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 21 09:59:41 crc kubenswrapper[4972]: I1121 09:59:41.428951 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-d7xb8"] Nov 21 09:59:41 crc kubenswrapper[4972]: I1121 09:59:41.605394 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6n7v\" (UniqueName: \"kubernetes.io/projected/51990b7a-1db7-4b42-9556-6a9cf92c9cc5-kube-api-access-f6n7v\") pod \"openstack-operator-index-d7xb8\" (UID: \"51990b7a-1db7-4b42-9556-6a9cf92c9cc5\") " pod="openstack-operators/openstack-operator-index-d7xb8" Nov 21 09:59:41 crc kubenswrapper[4972]: I1121 09:59:41.706980 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6n7v\" (UniqueName: \"kubernetes.io/projected/51990b7a-1db7-4b42-9556-6a9cf92c9cc5-kube-api-access-f6n7v\") pod \"openstack-operator-index-d7xb8\" (UID: \"51990b7a-1db7-4b42-9556-6a9cf92c9cc5\") " pod="openstack-operators/openstack-operator-index-d7xb8" Nov 21 09:59:41 crc kubenswrapper[4972]: I1121 09:59:41.724859 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6n7v\" (UniqueName: \"kubernetes.io/projected/51990b7a-1db7-4b42-9556-6a9cf92c9cc5-kube-api-access-f6n7v\") pod \"openstack-operator-index-d7xb8\" (UID: \"51990b7a-1db7-4b42-9556-6a9cf92c9cc5\") " pod="openstack-operators/openstack-operator-index-d7xb8" Nov 21 09:59:41 crc kubenswrapper[4972]: I1121 09:59:41.788619 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-d7xb8" Nov 21 09:59:42 crc kubenswrapper[4972]: I1121 09:59:42.218462 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-d7xb8"] Nov 21 09:59:42 crc kubenswrapper[4972]: I1121 09:59:42.757164 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d7xb8" event={"ID":"51990b7a-1db7-4b42-9556-6a9cf92c9cc5","Type":"ContainerStarted","Data":"c5a1865f71c9066596b42e46adeefc039b123d327922d457f473e9b31be4ebcb"} Nov 21 09:59:43 crc kubenswrapper[4972]: I1121 09:59:43.771471 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d7xb8" event={"ID":"51990b7a-1db7-4b42-9556-6a9cf92c9cc5","Type":"ContainerStarted","Data":"80de2f1b9ce53a11d3cc5c8f02754ae3e9c6108992206c58c7420180f44ff7dc"} Nov 21 09:59:43 crc kubenswrapper[4972]: I1121 09:59:43.786734 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-d7xb8" podStartSLOduration=1.981016493 podStartE2EDuration="2.786716099s" podCreationTimestamp="2025-11-21 09:59:41 +0000 UTC" firstStartedPulling="2025-11-21 09:59:42.230158835 +0000 UTC m=+1127.339301373" lastFinishedPulling="2025-11-21 09:59:43.035858451 +0000 UTC m=+1128.145000979" observedRunningTime="2025-11-21 09:59:43.784334165 +0000 UTC m=+1128.893476663" watchObservedRunningTime="2025-11-21 09:59:43.786716099 +0000 UTC m=+1128.895858597" Nov 21 09:59:44 crc kubenswrapper[4972]: I1121 09:59:44.770760 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-d7xb8"] Nov 21 09:59:45 crc kubenswrapper[4972]: I1121 09:59:45.375180 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-s7pn8"] Nov 21 09:59:45 crc kubenswrapper[4972]: I1121 09:59:45.375913 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-s7pn8" Nov 21 09:59:45 crc kubenswrapper[4972]: I1121 09:59:45.387212 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-s7pn8"] Nov 21 09:59:45 crc kubenswrapper[4972]: I1121 09:59:45.458086 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq6r5\" (UniqueName: \"kubernetes.io/projected/cdbf6692-280c-4fa4-8f94-cc2a0e29ef5f-kube-api-access-tq6r5\") pod \"openstack-operator-index-s7pn8\" (UID: \"cdbf6692-280c-4fa4-8f94-cc2a0e29ef5f\") " pod="openstack-operators/openstack-operator-index-s7pn8" Nov 21 09:59:45 crc kubenswrapper[4972]: I1121 09:59:45.559643 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq6r5\" (UniqueName: \"kubernetes.io/projected/cdbf6692-280c-4fa4-8f94-cc2a0e29ef5f-kube-api-access-tq6r5\") pod \"openstack-operator-index-s7pn8\" (UID: \"cdbf6692-280c-4fa4-8f94-cc2a0e29ef5f\") " pod="openstack-operators/openstack-operator-index-s7pn8" Nov 21 09:59:45 crc kubenswrapper[4972]: I1121 09:59:45.584897 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq6r5\" (UniqueName: \"kubernetes.io/projected/cdbf6692-280c-4fa4-8f94-cc2a0e29ef5f-kube-api-access-tq6r5\") pod \"openstack-operator-index-s7pn8\" (UID: \"cdbf6692-280c-4fa4-8f94-cc2a0e29ef5f\") " pod="openstack-operators/openstack-operator-index-s7pn8" Nov 21 09:59:45 crc kubenswrapper[4972]: I1121 09:59:45.697469 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-s7pn8" Nov 21 09:59:45 crc kubenswrapper[4972]: I1121 09:59:45.780419 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-d7xb8" podUID="51990b7a-1db7-4b42-9556-6a9cf92c9cc5" containerName="registry-server" containerID="cri-o://80de2f1b9ce53a11d3cc5c8f02754ae3e9c6108992206c58c7420180f44ff7dc" gracePeriod=2 Nov 21 09:59:46 crc kubenswrapper[4972]: I1121 09:59:46.096549 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-s7pn8"] Nov 21 09:59:46 crc kubenswrapper[4972]: W1121 09:59:46.103032 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdbf6692_280c_4fa4_8f94_cc2a0e29ef5f.slice/crio-62de823a7e87737045db1e670e1051abb1282e001f22393c9859b85a5def8715 WatchSource:0}: Error finding container 62de823a7e87737045db1e670e1051abb1282e001f22393c9859b85a5def8715: Status 404 returned error can't find the container with id 62de823a7e87737045db1e670e1051abb1282e001f22393c9859b85a5def8715 Nov 21 09:59:46 crc kubenswrapper[4972]: I1121 09:59:46.119804 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-d7xb8" Nov 21 09:59:46 crc kubenswrapper[4972]: I1121 09:59:46.269162 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6n7v\" (UniqueName: \"kubernetes.io/projected/51990b7a-1db7-4b42-9556-6a9cf92c9cc5-kube-api-access-f6n7v\") pod \"51990b7a-1db7-4b42-9556-6a9cf92c9cc5\" (UID: \"51990b7a-1db7-4b42-9556-6a9cf92c9cc5\") " Nov 21 09:59:46 crc kubenswrapper[4972]: I1121 09:59:46.273725 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51990b7a-1db7-4b42-9556-6a9cf92c9cc5-kube-api-access-f6n7v" (OuterVolumeSpecName: "kube-api-access-f6n7v") pod "51990b7a-1db7-4b42-9556-6a9cf92c9cc5" (UID: "51990b7a-1db7-4b42-9556-6a9cf92c9cc5"). InnerVolumeSpecName "kube-api-access-f6n7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 09:59:46 crc kubenswrapper[4972]: I1121 09:59:46.371654 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6n7v\" (UniqueName: \"kubernetes.io/projected/51990b7a-1db7-4b42-9556-6a9cf92c9cc5-kube-api-access-f6n7v\") on node \"crc\" DevicePath \"\"" Nov 21 09:59:46 crc kubenswrapper[4972]: I1121 09:59:46.796222 4972 generic.go:334] "Generic (PLEG): container finished" podID="51990b7a-1db7-4b42-9556-6a9cf92c9cc5" containerID="80de2f1b9ce53a11d3cc5c8f02754ae3e9c6108992206c58c7420180f44ff7dc" exitCode=0 Nov 21 09:59:46 crc kubenswrapper[4972]: I1121 09:59:46.796281 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d7xb8" event={"ID":"51990b7a-1db7-4b42-9556-6a9cf92c9cc5","Type":"ContainerDied","Data":"80de2f1b9ce53a11d3cc5c8f02754ae3e9c6108992206c58c7420180f44ff7dc"} Nov 21 09:59:46 crc kubenswrapper[4972]: I1121 09:59:46.796332 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-d7xb8" Nov 21 09:59:46 crc kubenswrapper[4972]: I1121 09:59:46.796659 4972 scope.go:117] "RemoveContainer" containerID="80de2f1b9ce53a11d3cc5c8f02754ae3e9c6108992206c58c7420180f44ff7dc" Nov 21 09:59:46 crc kubenswrapper[4972]: I1121 09:59:46.796643 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d7xb8" event={"ID":"51990b7a-1db7-4b42-9556-6a9cf92c9cc5","Type":"ContainerDied","Data":"c5a1865f71c9066596b42e46adeefc039b123d327922d457f473e9b31be4ebcb"} Nov 21 09:59:46 crc kubenswrapper[4972]: I1121 09:59:46.801521 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-s7pn8" event={"ID":"cdbf6692-280c-4fa4-8f94-cc2a0e29ef5f","Type":"ContainerStarted","Data":"3ae69ee7e4efd7a787f0ee6fbe1bd4a7ab2ce2de209d1ac8140b1c1cfb97a7cd"} Nov 21 09:59:46 crc kubenswrapper[4972]: I1121 09:59:46.801562 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-s7pn8" event={"ID":"cdbf6692-280c-4fa4-8f94-cc2a0e29ef5f","Type":"ContainerStarted","Data":"62de823a7e87737045db1e670e1051abb1282e001f22393c9859b85a5def8715"} Nov 21 09:59:46 crc kubenswrapper[4972]: I1121 09:59:46.814280 4972 scope.go:117] "RemoveContainer" containerID="80de2f1b9ce53a11d3cc5c8f02754ae3e9c6108992206c58c7420180f44ff7dc" Nov 21 09:59:46 crc kubenswrapper[4972]: E1121 09:59:46.814611 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80de2f1b9ce53a11d3cc5c8f02754ae3e9c6108992206c58c7420180f44ff7dc\": container with ID starting with 80de2f1b9ce53a11d3cc5c8f02754ae3e9c6108992206c58c7420180f44ff7dc not found: ID does not exist" containerID="80de2f1b9ce53a11d3cc5c8f02754ae3e9c6108992206c58c7420180f44ff7dc" Nov 21 09:59:46 crc kubenswrapper[4972]: I1121 09:59:46.814725 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80de2f1b9ce53a11d3cc5c8f02754ae3e9c6108992206c58c7420180f44ff7dc"} err="failed to get container status \"80de2f1b9ce53a11d3cc5c8f02754ae3e9c6108992206c58c7420180f44ff7dc\": rpc error: code = NotFound desc = could not find container \"80de2f1b9ce53a11d3cc5c8f02754ae3e9c6108992206c58c7420180f44ff7dc\": container with ID starting with 80de2f1b9ce53a11d3cc5c8f02754ae3e9c6108992206c58c7420180f44ff7dc not found: ID does not exist" Nov 21 09:59:46 crc kubenswrapper[4972]: I1121 09:59:46.819398 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-s7pn8" podStartSLOduration=1.436163652 podStartE2EDuration="1.819378607s" podCreationTimestamp="2025-11-21 09:59:45 +0000 UTC" firstStartedPulling="2025-11-21 09:59:46.105604817 +0000 UTC m=+1131.214747315" lastFinishedPulling="2025-11-21 09:59:46.488819752 +0000 UTC m=+1131.597962270" observedRunningTime="2025-11-21 09:59:46.815822043 +0000 UTC m=+1131.924964561" watchObservedRunningTime="2025-11-21 09:59:46.819378607 +0000 UTC m=+1131.928521105" Nov 21 09:59:46 crc kubenswrapper[4972]: I1121 09:59:46.838143 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-d7xb8"] Nov 21 09:59:46 crc kubenswrapper[4972]: I1121 09:59:46.841481 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-d7xb8"] Nov 21 09:59:47 crc kubenswrapper[4972]: I1121 09:59:47.765690 4972 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51990b7a-1db7-4b42-9556-6a9cf92c9cc5" path="/var/lib/kubelet/pods/51990b7a-1db7-4b42-9556-6a9cf92c9cc5/volumes" Nov 21 09:59:55 crc kubenswrapper[4972]: I1121 09:59:55.698857 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-s7pn8" Nov 21 09:59:55 crc kubenswrapper[4972]: I1121 09:59:55.699777 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-s7pn8" Nov 21 09:59:55 crc kubenswrapper[4972]: I1121 09:59:55.738374 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-s7pn8" Nov 21 09:59:55 crc kubenswrapper[4972]: I1121 09:59:55.898441 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-s7pn8" Nov 21 09:59:56 crc kubenswrapper[4972]: I1121 09:59:56.178708 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 09:59:56 crc kubenswrapper[4972]: I1121 09:59:56.178816 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 09:59:56 crc kubenswrapper[4972]: I1121 09:59:56.831363 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm"] Nov 21 09:59:56 crc kubenswrapper[4972]: E1121 09:59:56.831642 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51990b7a-1db7-4b42-9556-6a9cf92c9cc5" containerName="registry-server" Nov 21 09:59:56 crc kubenswrapper[4972]: I1121 09:59:56.831657 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="51990b7a-1db7-4b42-9556-6a9cf92c9cc5" containerName="registry-server" Nov 21 09:59:56 crc kubenswrapper[4972]: I1121 09:59:56.831803 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="51990b7a-1db7-4b42-9556-6a9cf92c9cc5" containerName="registry-server" Nov 21 09:59:56 crc kubenswrapper[4972]: I1121 09:59:56.833063 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm" Nov 21 09:59:56 crc kubenswrapper[4972]: I1121 09:59:56.835770 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-sld44" Nov 21 09:59:56 crc kubenswrapper[4972]: I1121 09:59:56.842141 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm"] Nov 21 09:59:56 crc kubenswrapper[4972]: I1121 09:59:56.948010 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twlvj\" (UniqueName: \"kubernetes.io/projected/aa3bf531-41ff-449f-a660-0886a7e8e87c-kube-api-access-twlvj\") pod \"2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm\" (UID: \"aa3bf531-41ff-449f-a660-0886a7e8e87c\") " pod="openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm" Nov 21 09:59:56 crc kubenswrapper[4972]: I1121 09:59:56.948121 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa3bf531-41ff-449f-a660-0886a7e8e87c-bundle\") pod \"2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm\" (UID: \"aa3bf531-41ff-449f-a660-0886a7e8e87c\") " pod="openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm" Nov 21 09:59:56 crc kubenswrapper[4972]: I1121 09:59:56.948190 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa3bf531-41ff-449f-a660-0886a7e8e87c-util\") pod \"2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm\" (UID: \"aa3bf531-41ff-449f-a660-0886a7e8e87c\") " pod="openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm" Nov 21 09:59:57 crc kubenswrapper[4972]: I1121 09:59:57.048921 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa3bf531-41ff-449f-a660-0886a7e8e87c-bundle\") pod \"2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm\" (UID: \"aa3bf531-41ff-449f-a660-0886a7e8e87c\") " pod="openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm" Nov 21 09:59:57 crc kubenswrapper[4972]: I1121 09:59:57.049184 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa3bf531-41ff-449f-a660-0886a7e8e87c-util\") pod \"2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm\" (UID: \"aa3bf531-41ff-449f-a660-0886a7e8e87c\") " pod="openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm" Nov 21 09:59:57 crc kubenswrapper[4972]: I1121 09:59:57.049318 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twlvj\" (UniqueName: \"kubernetes.io/projected/aa3bf531-41ff-449f-a660-0886a7e8e87c-kube-api-access-twlvj\") pod \"2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm\" (UID: \"aa3bf531-41ff-449f-a660-0886a7e8e87c\") " pod="openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm" Nov 21 09:59:57 crc kubenswrapper[4972]: I1121 09:59:57.049608 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/aa3bf531-41ff-449f-a660-0886a7e8e87c-bundle\") pod \"2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm\" (UID: \"aa3bf531-41ff-449f-a660-0886a7e8e87c\") " pod="openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm" Nov 21 09:59:57 crc kubenswrapper[4972]: I1121 09:59:57.050166 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa3bf531-41ff-449f-a660-0886a7e8e87c-util\") pod \"2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm\" (UID: \"aa3bf531-41ff-449f-a660-0886a7e8e87c\") " pod="openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm" Nov 21 09:59:57 crc kubenswrapper[4972]: I1121 09:59:57.069977 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twlvj\" (UniqueName: \"kubernetes.io/projected/aa3bf531-41ff-449f-a660-0886a7e8e87c-kube-api-access-twlvj\") pod \"2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm\" (UID: \"aa3bf531-41ff-449f-a660-0886a7e8e87c\") " pod="openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm" Nov 21 09:59:57 crc kubenswrapper[4972]: I1121 09:59:57.155545 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm" Nov 21 09:59:57 crc kubenswrapper[4972]: I1121 09:59:57.567027 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm"] Nov 21 09:59:57 crc kubenswrapper[4972]: W1121 09:59:57.572024 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa3bf531_41ff_449f_a660_0886a7e8e87c.slice/crio-86e629e3dd44d5887ffa32b435b2cd34e3ce9936f78f393fd2f7c0f05b74c9a1 WatchSource:0}: Error finding container 86e629e3dd44d5887ffa32b435b2cd34e3ce9936f78f393fd2f7c0f05b74c9a1: Status 404 returned error can't find the container with id 86e629e3dd44d5887ffa32b435b2cd34e3ce9936f78f393fd2f7c0f05b74c9a1 Nov 21 09:59:57 crc kubenswrapper[4972]: I1121 09:59:57.882937 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm" event={"ID":"aa3bf531-41ff-449f-a660-0886a7e8e87c","Type":"ContainerStarted","Data":"86e629e3dd44d5887ffa32b435b2cd34e3ce9936f78f393fd2f7c0f05b74c9a1"} Nov 21 09:59:59 crc kubenswrapper[4972]: I1121 09:59:59.897097 4972 generic.go:334] "Generic (PLEG): container finished" podID="aa3bf531-41ff-449f-a660-0886a7e8e87c" containerID="12030d12054ec162a08590cff48555833edddaf863f3bfb17f76fa855f20985f" exitCode=0 Nov 21 09:59:59 crc kubenswrapper[4972]: I1121 09:59:59.897139 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm" event={"ID":"aa3bf531-41ff-449f-a660-0886a7e8e87c","Type":"ContainerDied","Data":"12030d12054ec162a08590cff48555833edddaf863f3bfb17f76fa855f20985f"} Nov 21 10:00:00 crc kubenswrapper[4972]: I1121 10:00:00.137649 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4"] Nov 21 10:00:00 crc kubenswrapper[4972]: I1121 10:00:00.138519 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4" Nov 21 10:00:00 crc kubenswrapper[4972]: I1121 10:00:00.142004 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 10:00:00 crc kubenswrapper[4972]: I1121 10:00:00.142706 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 10:00:00 crc kubenswrapper[4972]: I1121 10:00:00.164953 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4"] Nov 21 10:00:00 crc kubenswrapper[4972]: I1121 10:00:00.297974 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z8j9\" (UniqueName: \"kubernetes.io/projected/63a0e96b-d215-4176-8d40-d32016e09f67-kube-api-access-4z8j9\") pod \"collect-profiles-29395320-cr8b4\" (UID: \"63a0e96b-d215-4176-8d40-d32016e09f67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4" Nov 21 10:00:00 crc kubenswrapper[4972]: I1121 10:00:00.298113 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63a0e96b-d215-4176-8d40-d32016e09f67-secret-volume\") pod \"collect-profiles-29395320-cr8b4\" (UID: \"63a0e96b-d215-4176-8d40-d32016e09f67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4" Nov 21 10:00:00 crc kubenswrapper[4972]: I1121 10:00:00.298164 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63a0e96b-d215-4176-8d40-d32016e09f67-config-volume\") pod \"collect-profiles-29395320-cr8b4\" (UID: \"63a0e96b-d215-4176-8d40-d32016e09f67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4" Nov 21 10:00:00 crc kubenswrapper[4972]: I1121 10:00:00.399266 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z8j9\" (UniqueName: \"kubernetes.io/projected/63a0e96b-d215-4176-8d40-d32016e09f67-kube-api-access-4z8j9\") pod \"collect-profiles-29395320-cr8b4\" (UID: \"63a0e96b-d215-4176-8d40-d32016e09f67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4" Nov 21 10:00:00 crc kubenswrapper[4972]: I1121 10:00:00.399404 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63a0e96b-d215-4176-8d40-d32016e09f67-secret-volume\") pod \"collect-profiles-29395320-cr8b4\" (UID: \"63a0e96b-d215-4176-8d40-d32016e09f67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4" Nov 21 10:00:00 crc kubenswrapper[4972]: I1121 10:00:00.399432 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63a0e96b-d215-4176-8d40-d32016e09f67-config-volume\") pod \"collect-profiles-29395320-cr8b4\" (UID: \"63a0e96b-d215-4176-8d40-d32016e09f67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4" Nov 21 10:00:00 crc kubenswrapper[4972]: I1121 10:00:00.401517 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63a0e96b-d215-4176-8d40-d32016e09f67-config-volume\") pod 
\"collect-profiles-29395320-cr8b4\" (UID: \"63a0e96b-d215-4176-8d40-d32016e09f67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4" Nov 21 10:00:00 crc kubenswrapper[4972]: I1121 10:00:00.414890 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63a0e96b-d215-4176-8d40-d32016e09f67-secret-volume\") pod \"collect-profiles-29395320-cr8b4\" (UID: \"63a0e96b-d215-4176-8d40-d32016e09f67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4" Nov 21 10:00:00 crc kubenswrapper[4972]: I1121 10:00:00.422119 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z8j9\" (UniqueName: \"kubernetes.io/projected/63a0e96b-d215-4176-8d40-d32016e09f67-kube-api-access-4z8j9\") pod \"collect-profiles-29395320-cr8b4\" (UID: \"63a0e96b-d215-4176-8d40-d32016e09f67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4" Nov 21 10:00:00 crc kubenswrapper[4972]: I1121 10:00:00.468985 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4" Nov 21 10:00:00 crc kubenswrapper[4972]: I1121 10:00:00.878449 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4"] Nov 21 10:00:00 crc kubenswrapper[4972]: I1121 10:00:00.904554 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4" event={"ID":"63a0e96b-d215-4176-8d40-d32016e09f67","Type":"ContainerStarted","Data":"138d863e68830e5c513b2e9994de88a838015434a079fe6dcd6ea37682fb47d4"} Nov 21 10:00:01 crc kubenswrapper[4972]: I1121 10:00:01.911363 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4" event={"ID":"63a0e96b-d215-4176-8d40-d32016e09f67","Type":"ContainerStarted","Data":"19f19f652cafc2db3eca0d7c69e847fb2e27cb86eea736cbc5e10c4405d79d8a"} Nov 21 10:00:01 crc kubenswrapper[4972]: I1121 10:00:01.929766 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4" podStartSLOduration=1.9297489319999999 podStartE2EDuration="1.929748932s" podCreationTimestamp="2025-11-21 10:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:00:01.928759716 +0000 UTC m=+1147.037902214" watchObservedRunningTime="2025-11-21 10:00:01.929748932 +0000 UTC m=+1147.038891430" Nov 21 10:00:02 crc kubenswrapper[4972]: I1121 10:00:02.919437 4972 generic.go:334] "Generic (PLEG): container finished" podID="63a0e96b-d215-4176-8d40-d32016e09f67" containerID="19f19f652cafc2db3eca0d7c69e847fb2e27cb86eea736cbc5e10c4405d79d8a" exitCode=0 Nov 21 10:00:02 crc kubenswrapper[4972]: I1121 10:00:02.919873 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4" event={"ID":"63a0e96b-d215-4176-8d40-d32016e09f67","Type":"ContainerDied","Data":"19f19f652cafc2db3eca0d7c69e847fb2e27cb86eea736cbc5e10c4405d79d8a"} Nov 21 10:00:04 crc kubenswrapper[4972]: I1121 10:00:04.280885 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4" Nov 21 10:00:04 crc kubenswrapper[4972]: I1121 10:00:04.473463 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63a0e96b-d215-4176-8d40-d32016e09f67-config-volume\") pod \"63a0e96b-d215-4176-8d40-d32016e09f67\" (UID: \"63a0e96b-d215-4176-8d40-d32016e09f67\") " Nov 21 10:00:04 crc kubenswrapper[4972]: I1121 10:00:04.473569 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63a0e96b-d215-4176-8d40-d32016e09f67-secret-volume\") pod \"63a0e96b-d215-4176-8d40-d32016e09f67\" (UID: \"63a0e96b-d215-4176-8d40-d32016e09f67\") " Nov 21 10:00:04 crc kubenswrapper[4972]: I1121 10:00:04.473674 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z8j9\" (UniqueName: \"kubernetes.io/projected/63a0e96b-d215-4176-8d40-d32016e09f67-kube-api-access-4z8j9\") pod \"63a0e96b-d215-4176-8d40-d32016e09f67\" (UID: \"63a0e96b-d215-4176-8d40-d32016e09f67\") " Nov 21 10:00:04 crc kubenswrapper[4972]: I1121 10:00:04.474420 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63a0e96b-d215-4176-8d40-d32016e09f67-config-volume" (OuterVolumeSpecName: "config-volume") pod "63a0e96b-d215-4176-8d40-d32016e09f67" (UID: "63a0e96b-d215-4176-8d40-d32016e09f67"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:00:04 crc kubenswrapper[4972]: I1121 10:00:04.480571 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63a0e96b-d215-4176-8d40-d32016e09f67-kube-api-access-4z8j9" (OuterVolumeSpecName: "kube-api-access-4z8j9") pod "63a0e96b-d215-4176-8d40-d32016e09f67" (UID: "63a0e96b-d215-4176-8d40-d32016e09f67"). InnerVolumeSpecName "kube-api-access-4z8j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:00:04 crc kubenswrapper[4972]: I1121 10:00:04.480607 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a0e96b-d215-4176-8d40-d32016e09f67-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "63a0e96b-d215-4176-8d40-d32016e09f67" (UID: "63a0e96b-d215-4176-8d40-d32016e09f67"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:00:04 crc kubenswrapper[4972]: I1121 10:00:04.575477 4972 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63a0e96b-d215-4176-8d40-d32016e09f67-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 10:00:04 crc kubenswrapper[4972]: I1121 10:00:04.575514 4972 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63a0e96b-d215-4176-8d40-d32016e09f67-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 10:00:04 crc kubenswrapper[4972]: I1121 10:00:04.575531 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z8j9\" (UniqueName: \"kubernetes.io/projected/63a0e96b-d215-4176-8d40-d32016e09f67-kube-api-access-4z8j9\") on node \"crc\" DevicePath \"\"" Nov 21 10:00:04 crc kubenswrapper[4972]: I1121 10:00:04.935235 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4" event={"ID":"63a0e96b-d215-4176-8d40-d32016e09f67","Type":"ContainerDied","Data":"138d863e68830e5c513b2e9994de88a838015434a079fe6dcd6ea37682fb47d4"} Nov 21 10:00:04 crc kubenswrapper[4972]: I1121 10:00:04.935279 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="138d863e68830e5c513b2e9994de88a838015434a079fe6dcd6ea37682fb47d4" Nov 21 10:00:04 crc kubenswrapper[4972]: I1121 10:00:04.935595 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4" Nov 21 10:00:05 crc kubenswrapper[4972]: I1121 10:00:05.945359 4972 generic.go:334] "Generic (PLEG): container finished" podID="aa3bf531-41ff-449f-a660-0886a7e8e87c" containerID="99a5e63f15ccf55484c08269fba748e0aad241f8a773b6e24856b87be189d356" exitCode=0 Nov 21 10:00:05 crc kubenswrapper[4972]: I1121 10:00:05.945478 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm" event={"ID":"aa3bf531-41ff-449f-a660-0886a7e8e87c","Type":"ContainerDied","Data":"99a5e63f15ccf55484c08269fba748e0aad241f8a773b6e24856b87be189d356"} Nov 21 10:00:06 crc kubenswrapper[4972]: I1121 10:00:06.955787 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm" event={"ID":"aa3bf531-41ff-449f-a660-0886a7e8e87c","Type":"ContainerStarted","Data":"e8184f162fcd5d8b4d586f935a8bb4ab756d1251e168d09c48321e231c4f1682"} Nov 21 10:00:07 crc kubenswrapper[4972]: I1121 10:00:07.966340 4972 generic.go:334] "Generic (PLEG): container finished" podID="aa3bf531-41ff-449f-a660-0886a7e8e87c" containerID="e8184f162fcd5d8b4d586f935a8bb4ab756d1251e168d09c48321e231c4f1682" exitCode=0 Nov 21 10:00:07 crc kubenswrapper[4972]: I1121 10:00:07.966425 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm" event={"ID":"aa3bf531-41ff-449f-a660-0886a7e8e87c","Type":"ContainerDied","Data":"e8184f162fcd5d8b4d586f935a8bb4ab756d1251e168d09c48321e231c4f1682"} Nov 21 10:00:09 crc kubenswrapper[4972]: I1121 10:00:09.208155 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm" Nov 21 10:00:09 crc kubenswrapper[4972]: I1121 10:00:09.363002 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twlvj\" (UniqueName: \"kubernetes.io/projected/aa3bf531-41ff-449f-a660-0886a7e8e87c-kube-api-access-twlvj\") pod \"aa3bf531-41ff-449f-a660-0886a7e8e87c\" (UID: \"aa3bf531-41ff-449f-a660-0886a7e8e87c\") " Nov 21 10:00:09 crc kubenswrapper[4972]: I1121 10:00:09.363136 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa3bf531-41ff-449f-a660-0886a7e8e87c-util\") pod \"aa3bf531-41ff-449f-a660-0886a7e8e87c\" (UID: \"aa3bf531-41ff-449f-a660-0886a7e8e87c\") " Nov 21 10:00:09 crc kubenswrapper[4972]: I1121 10:00:09.363215 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa3bf531-41ff-449f-a660-0886a7e8e87c-bundle\") pod \"aa3bf531-41ff-449f-a660-0886a7e8e87c\" (UID: \"aa3bf531-41ff-449f-a660-0886a7e8e87c\") " Nov 21 10:00:09 crc kubenswrapper[4972]: I1121 10:00:09.367068 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa3bf531-41ff-449f-a660-0886a7e8e87c-bundle" (OuterVolumeSpecName: "bundle") pod "aa3bf531-41ff-449f-a660-0886a7e8e87c" (UID: "aa3bf531-41ff-449f-a660-0886a7e8e87c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:00:09 crc kubenswrapper[4972]: I1121 10:00:09.369455 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa3bf531-41ff-449f-a660-0886a7e8e87c-kube-api-access-twlvj" (OuterVolumeSpecName: "kube-api-access-twlvj") pod "aa3bf531-41ff-449f-a660-0886a7e8e87c" (UID: "aa3bf531-41ff-449f-a660-0886a7e8e87c"). InnerVolumeSpecName "kube-api-access-twlvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:00:09 crc kubenswrapper[4972]: I1121 10:00:09.380573 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa3bf531-41ff-449f-a660-0886a7e8e87c-util" (OuterVolumeSpecName: "util") pod "aa3bf531-41ff-449f-a660-0886a7e8e87c" (UID: "aa3bf531-41ff-449f-a660-0886a7e8e87c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:00:09 crc kubenswrapper[4972]: I1121 10:00:09.465172 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twlvj\" (UniqueName: \"kubernetes.io/projected/aa3bf531-41ff-449f-a660-0886a7e8e87c-kube-api-access-twlvj\") on node \"crc\" DevicePath \"\"" Nov 21 10:00:09 crc kubenswrapper[4972]: I1121 10:00:09.465221 4972 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa3bf531-41ff-449f-a660-0886a7e8e87c-util\") on node \"crc\" DevicePath \"\"" Nov 21 10:00:09 crc kubenswrapper[4972]: I1121 10:00:09.465241 4972 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa3bf531-41ff-449f-a660-0886a7e8e87c-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:00:09 crc kubenswrapper[4972]: I1121 10:00:09.983941 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm" event={"ID":"aa3bf531-41ff-449f-a660-0886a7e8e87c","Type":"ContainerDied","Data":"86e629e3dd44d5887ffa32b435b2cd34e3ce9936f78f393fd2f7c0f05b74c9a1"} Nov 21 10:00:09 crc kubenswrapper[4972]: I1121 10:00:09.984033 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86e629e3dd44d5887ffa32b435b2cd34e3ce9936f78f393fd2f7c0f05b74c9a1" Nov 21 10:00:09 crc kubenswrapper[4972]: I1121 10:00:09.984056 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm" Nov 21 10:00:15 crc kubenswrapper[4972]: I1121 10:00:15.234172 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-77c7f689f5-np6v4"] Nov 21 10:00:15 crc kubenswrapper[4972]: E1121 10:00:15.235028 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3bf531-41ff-449f-a660-0886a7e8e87c" containerName="pull" Nov 21 10:00:15 crc kubenswrapper[4972]: I1121 10:00:15.235044 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3bf531-41ff-449f-a660-0886a7e8e87c" containerName="pull" Nov 21 10:00:15 crc kubenswrapper[4972]: E1121 10:00:15.235056 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3bf531-41ff-449f-a660-0886a7e8e87c" containerName="util" Nov 21 10:00:15 crc kubenswrapper[4972]: I1121 10:00:15.235064 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3bf531-41ff-449f-a660-0886a7e8e87c" containerName="util" Nov 21 10:00:15 crc kubenswrapper[4972]: E1121 10:00:15.235077 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3bf531-41ff-449f-a660-0886a7e8e87c" containerName="extract" Nov 21 10:00:15 crc kubenswrapper[4972]: I1121 10:00:15.235084 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3bf531-41ff-449f-a660-0886a7e8e87c" containerName="extract" Nov 21 10:00:15 crc kubenswrapper[4972]: E1121 10:00:15.235099 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a0e96b-d215-4176-8d40-d32016e09f67" containerName="collect-profiles" Nov 21 10:00:15 crc kubenswrapper[4972]: I1121 10:00:15.235106 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a0e96b-d215-4176-8d40-d32016e09f67" containerName="collect-profiles" Nov 21 10:00:15 crc kubenswrapper[4972]: I1121 10:00:15.235249 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa3bf531-41ff-449f-a660-0886a7e8e87c" 
containerName="extract" Nov 21 10:00:15 crc kubenswrapper[4972]: I1121 10:00:15.235260 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a0e96b-d215-4176-8d40-d32016e09f67" containerName="collect-profiles" Nov 21 10:00:15 crc kubenswrapper[4972]: I1121 10:00:15.235907 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-77c7f689f5-np6v4" Nov 21 10:00:15 crc kubenswrapper[4972]: I1121 10:00:15.237731 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-km24f" Nov 21 10:00:15 crc kubenswrapper[4972]: I1121 10:00:15.267891 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-77c7f689f5-np6v4"] Nov 21 10:00:15 crc kubenswrapper[4972]: I1121 10:00:15.347461 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grnck\" (UniqueName: \"kubernetes.io/projected/7a19389a-b34a-4c15-9dbe-b3b4da3b51d4-kube-api-access-grnck\") pod \"openstack-operator-controller-operator-77c7f689f5-np6v4\" (UID: \"7a19389a-b34a-4c15-9dbe-b3b4da3b51d4\") " pod="openstack-operators/openstack-operator-controller-operator-77c7f689f5-np6v4" Nov 21 10:00:15 crc kubenswrapper[4972]: I1121 10:00:15.448640 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grnck\" (UniqueName: \"kubernetes.io/projected/7a19389a-b34a-4c15-9dbe-b3b4da3b51d4-kube-api-access-grnck\") pod \"openstack-operator-controller-operator-77c7f689f5-np6v4\" (UID: \"7a19389a-b34a-4c15-9dbe-b3b4da3b51d4\") " pod="openstack-operators/openstack-operator-controller-operator-77c7f689f5-np6v4" Nov 21 10:00:15 crc kubenswrapper[4972]: I1121 10:00:15.482057 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grnck\" (UniqueName: \"kubernetes.io/projected/7a19389a-b34a-4c15-9dbe-b3b4da3b51d4-kube-api-access-grnck\") pod \"openstack-operator-controller-operator-77c7f689f5-np6v4\" (UID: \"7a19389a-b34a-4c15-9dbe-b3b4da3b51d4\") " pod="openstack-operators/openstack-operator-controller-operator-77c7f689f5-np6v4" Nov 21 10:00:15 crc kubenswrapper[4972]: I1121 10:00:15.553686 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-77c7f689f5-np6v4" Nov 21 10:00:15 crc kubenswrapper[4972]: I1121 10:00:15.845381 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-77c7f689f5-np6v4"] Nov 21 10:00:15 crc kubenswrapper[4972]: W1121 10:00:15.863756 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a19389a_b34a_4c15_9dbe_b3b4da3b51d4.slice/crio-019723c13c1f9e5178afaa677d5b4744a3482b9dbc94e836ad393a1a9c0839fb WatchSource:0}: Error finding container 019723c13c1f9e5178afaa677d5b4744a3482b9dbc94e836ad393a1a9c0839fb: Status 404 returned error can't find the container with id 019723c13c1f9e5178afaa677d5b4744a3482b9dbc94e836ad393a1a9c0839fb Nov 21 10:00:16 crc kubenswrapper[4972]: I1121 10:00:16.029658 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-77c7f689f5-np6v4" event={"ID":"7a19389a-b34a-4c15-9dbe-b3b4da3b51d4","Type":"ContainerStarted","Data":"019723c13c1f9e5178afaa677d5b4744a3482b9dbc94e836ad393a1a9c0839fb"} Nov 21 10:00:20 crc kubenswrapper[4972]: I1121 10:00:20.055333 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-77c7f689f5-np6v4" event={"ID":"7a19389a-b34a-4c15-9dbe-b3b4da3b51d4","Type":"ContainerStarted","Data":"5f581a36c35363570d83709b7b4484921f71ab0e773c0646127194264e7e563f"} Nov 21 10:00:23 crc kubenswrapper[4972]: I1121 10:00:23.073942 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-77c7f689f5-np6v4" event={"ID":"7a19389a-b34a-4c15-9dbe-b3b4da3b51d4","Type":"ContainerStarted","Data":"db65fc9537d3e342a2c843fc31aad516f40bc211028edb2c135e030e19fb7c36"} Nov 21 10:00:23 crc kubenswrapper[4972]: I1121 10:00:23.075006 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-77c7f689f5-np6v4" Nov 21 10:00:23 crc kubenswrapper[4972]: I1121 10:00:23.104083 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-77c7f689f5-np6v4" podStartSLOduration=1.185257203 podStartE2EDuration="8.104064377s" podCreationTimestamp="2025-11-21 10:00:15 +0000 UTC" firstStartedPulling="2025-11-21 10:00:15.866664546 +0000 UTC m=+1160.975807054" lastFinishedPulling="2025-11-21 10:00:22.78547169 +0000 UTC m=+1167.894614228" observedRunningTime="2025-11-21 10:00:23.101951871 +0000 UTC m=+1168.211094379" watchObservedRunningTime="2025-11-21 10:00:23.104064377 +0000 UTC m=+1168.213206885" Nov 21 10:00:24 crc kubenswrapper[4972]: I1121 10:00:24.083526 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-77c7f689f5-np6v4" Nov 21 10:00:26 crc kubenswrapper[4972]: I1121 10:00:26.178636 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:00:26 crc kubenswrapper[4972]: I1121 10:00:26.178965 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" 
podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:00:56 crc kubenswrapper[4972]: I1121 10:00:56.179412 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:00:56 crc kubenswrapper[4972]: I1121 10:00:56.180075 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:00:56 crc kubenswrapper[4972]: I1121 10:00:56.180131 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 10:00:56 crc kubenswrapper[4972]: I1121 10:00:56.180888 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b5f6ea95f3d9b88cf1528773dedbad651b22ffa03b2cdc9849fa7c5b9b96c05e"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 10:00:56 crc kubenswrapper[4972]: I1121 10:00:56.180941 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://b5f6ea95f3d9b88cf1528773dedbad651b22ffa03b2cdc9849fa7c5b9b96c05e" gracePeriod=600 Nov 21 10:00:56 crc kubenswrapper[4972]: I1121 10:00:56.321599 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="b5f6ea95f3d9b88cf1528773dedbad651b22ffa03b2cdc9849fa7c5b9b96c05e" exitCode=0 Nov 21 10:00:56 crc kubenswrapper[4972]: I1121 10:00:56.321637 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"b5f6ea95f3d9b88cf1528773dedbad651b22ffa03b2cdc9849fa7c5b9b96c05e"} Nov 21 10:00:56 crc kubenswrapper[4972]: I1121 10:00:56.321938 4972 scope.go:117] "RemoveContainer" containerID="918ebedd08e1b9dafe5e4f67da03ac43cd2232ecc2da24b0d75e1131226f344f" Nov 21 10:00:57 crc kubenswrapper[4972]: I1121 10:00:57.330164 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"7ec11dc5626524562fd7c3b24c6b4002aa3a346dd5009bf5fa88dabd42ba42bd"} Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.707979 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-g4s6z"] Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.709789 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-g4s6z" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.712253 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-lxcmp" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.717468 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-g4s6z"] Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.726000 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-56dfb6b67f-vhv8d"] Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.727084 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-vhv8d" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.731152 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-qfdmb" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.731849 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7768f8c84f-t6wbg"] Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.733042 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-t6wbg" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.743477 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-56dfb6b67f-vhv8d"] Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.746753 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-klcrj" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.756729 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7768f8c84f-t6wbg"] Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.766556 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8667fbf6f6-nkg8q"] Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.767788 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-nkg8q" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.773957 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8667fbf6f6-nkg8q"] Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.774125 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-jc6w2" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.790280 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-bf4c6585d-jbvkg"] Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.791260 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-jbvkg" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.794017 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-c6j9s" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.802778 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-bf4c6585d-jbvkg"] Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.809490 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-446bh\" (UniqueName: \"kubernetes.io/projected/6aea9dc6-2e85-470c-9897-064111a4661e-kube-api-access-446bh\") pod \"cinder-operator-controller-manager-6d8fd67bf7-g4s6z\" (UID: \"6aea9dc6-2e85-470c-9897-064111a4661e\") " pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-g4s6z" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.809573 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hvl5\" (UniqueName: \"kubernetes.io/projected/9304bb08-b481-42e5-89ed-f215f0102662-kube-api-access-6hvl5\") pod \"barbican-operator-controller-manager-7768f8c84f-t6wbg\" (UID: \"9304bb08-b481-42e5-89ed-f215f0102662\") " pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-t6wbg" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.809594 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kvdb\" (UniqueName: \"kubernetes.io/projected/647ff257-bc23-44e5-9397-2696f04520d4-kube-api-access-8kvdb\") pod \"designate-operator-controller-manager-56dfb6b67f-vhv8d\" (UID: \"647ff257-bc23-44e5-9397-2696f04520d4\") " pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-vhv8d" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.809978 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d86b44686-fdwgn"] Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.810877 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-fdwgn" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.816883 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d86b44686-fdwgn"] Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.818222 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-x84dt" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.844118 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-769d9c7585-lx4b9"] Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.845334 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-lx4b9" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.857240 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-jhv6c" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.857485 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.866078 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5c75d7c94b-llzpm"] Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.867254 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-llzpm" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.869920 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-769d9c7585-lx4b9"] Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.871062 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-j7269" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.891730 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5c75d7c94b-llzpm"] Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.897698 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7879fb76fd-n4vrp"] Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.898700 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-n4vrp" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.907047 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7879fb76fd-n4vrp"] Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.907434 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-rzrrs" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.911077 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7bb88cb858-7cn8g"] Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.912292 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-446bh\" (UniqueName: \"kubernetes.io/projected/6aea9dc6-2e85-470c-9897-064111a4661e-kube-api-access-446bh\") pod \"cinder-operator-controller-manager-6d8fd67bf7-g4s6z\" (UID: \"6aea9dc6-2e85-470c-9897-064111a4661e\") " pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-g4s6z" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.912446 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8805826-cc65-49b4-ace4-44f25c209b4e-cert\") pod \"infra-operator-controller-manager-769d9c7585-lx4b9\" (UID: \"e8805826-cc65-49b4-ace4-44f25c209b4e\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-lx4b9" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.912632 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb477\" (UniqueName: \"kubernetes.io/projected/1d892fbc-e66c-405d-9d42-e306d9b652b8-kube-api-access-xb477\") pod \"horizon-operator-controller-manager-5d86b44686-fdwgn\" (UID: \"1d892fbc-e66c-405d-9d42-e306d9b652b8\") " pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-fdwgn" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.912761 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hvl5\" (UniqueName: \"kubernetes.io/projected/9304bb08-b481-42e5-89ed-f215f0102662-kube-api-access-6hvl5\") pod \"barbican-operator-controller-manager-7768f8c84f-t6wbg\" (UID: \"9304bb08-b481-42e5-89ed-f215f0102662\") " pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-t6wbg" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.912897 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kvdb\" (UniqueName: \"kubernetes.io/projected/647ff257-bc23-44e5-9397-2696f04520d4-kube-api-access-8kvdb\") pod \"designate-operator-controller-manager-56dfb6b67f-vhv8d\" (UID: \"647ff257-bc23-44e5-9397-2696f04520d4\") " pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-vhv8d" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.913025 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdxtf\" (UniqueName: \"kubernetes.io/projected/56043c1e-adb8-4d37-9067-cb28a5103fdd-kube-api-access-zdxtf\") pod \"glance-operator-controller-manager-8667fbf6f6-nkg8q\" (UID: \"56043c1e-adb8-4d37-9067-cb28a5103fdd\") " pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-nkg8q" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.913141 
4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr72d\" (UniqueName: \"kubernetes.io/projected/4d93267d-02e5-4045-aa0a-5edf3730b4cf-kube-api-access-lr72d\") pod \"heat-operator-controller-manager-bf4c6585d-jbvkg\" (UID: \"4d93267d-02e5-4045-aa0a-5edf3730b4cf\") " pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-jbvkg" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.913250 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnbs8\" (UniqueName: \"kubernetes.io/projected/e8805826-cc65-49b4-ace4-44f25c209b4e-kube-api-access-xnbs8\") pod \"infra-operator-controller-manager-769d9c7585-lx4b9\" (UID: \"e8805826-cc65-49b4-ace4-44f25c209b4e\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-lx4b9" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.914069 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-7cn8g" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.914686 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7bb88cb858-7cn8g"] Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.921030 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-dhm9x" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.944452 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hvl5\" (UniqueName: \"kubernetes.io/projected/9304bb08-b481-42e5-89ed-f215f0102662-kube-api-access-6hvl5\") pod \"barbican-operator-controller-manager-7768f8c84f-t6wbg\" (UID: \"9304bb08-b481-42e5-89ed-f215f0102662\") " pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-t6wbg" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.959096 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kvdb\" (UniqueName: \"kubernetes.io/projected/647ff257-bc23-44e5-9397-2696f04520d4-kube-api-access-8kvdb\") pod \"designate-operator-controller-manager-56dfb6b67f-vhv8d\" (UID: \"647ff257-bc23-44e5-9397-2696f04520d4\") " pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-vhv8d" Nov 21 10:01:01 crc kubenswrapper[4972]: I1121 10:01:01.984597 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-446bh\" (UniqueName: \"kubernetes.io/projected/6aea9dc6-2e85-470c-9897-064111a4661e-kube-api-access-446bh\") pod \"cinder-operator-controller-manager-6d8fd67bf7-g4s6z\" (UID: \"6aea9dc6-2e85-470c-9897-064111a4661e\") " pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-g4s6z" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.001258 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-chz4z"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.026914 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb477\" (UniqueName: \"kubernetes.io/projected/1d892fbc-e66c-405d-9d42-e306d9b652b8-kube-api-access-xb477\") pod \"horizon-operator-controller-manager-5d86b44686-fdwgn\" (UID: \"1d892fbc-e66c-405d-9d42-e306d9b652b8\") " pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-fdwgn" Nov 21 10:01:02 crc 
kubenswrapper[4972]: I1121 10:01:02.026977 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrph8\" (UniqueName: \"kubernetes.io/projected/40fab427-66b5-4519-b416-0d5f253e2c10-kube-api-access-zrph8\") pod \"manila-operator-controller-manager-7bb88cb858-7cn8g\" (UID: \"40fab427-66b5-4519-b416-0d5f253e2c10\") " pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-7cn8g" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.038006 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxtf\" (UniqueName: \"kubernetes.io/projected/56043c1e-adb8-4d37-9067-cb28a5103fdd-kube-api-access-zdxtf\") pod \"glance-operator-controller-manager-8667fbf6f6-nkg8q\" (UID: \"56043c1e-adb8-4d37-9067-cb28a5103fdd\") " pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-nkg8q" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.038089 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr72d\" (UniqueName: \"kubernetes.io/projected/4d93267d-02e5-4045-aa0a-5edf3730b4cf-kube-api-access-lr72d\") pod \"heat-operator-controller-manager-bf4c6585d-jbvkg\" (UID: \"4d93267d-02e5-4045-aa0a-5edf3730b4cf\") " pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-jbvkg" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.038115 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnbs8\" (UniqueName: \"kubernetes.io/projected/e8805826-cc65-49b4-ace4-44f25c209b4e-kube-api-access-xnbs8\") pod \"infra-operator-controller-manager-769d9c7585-lx4b9\" (UID: \"e8805826-cc65-49b4-ace4-44f25c209b4e\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-lx4b9" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.038172 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-chz4z"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.040822 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-chz4z" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.041635 4972 util.go:30] "No sandbox for pod can be found. 
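The reconciler_common and operation_generator entries through this stretch track the per-pod kube-api-access-* volumes from "VerifyControllerAttachedVolume started", through "MountVolume started", to "MountVolume.SetUp succeeded". These are the projected service-account volumes injected into every pod; the sketch below, built with the corev1 Go types, shows roughly what one of them carries. Only the volume name is taken from the log; the token lifetime and the token/ca.crt/namespace file layout are the usual defaults and are assumptions here.

// Sketch of a kube-api-access-* projected volume like the ones being mounted
// above. Only the volume name comes from the log; the sources and the 3607s
// token lifetime are the usual defaults, assumed for illustration.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func kubeAPIAccessVolume(name string) corev1.Volume {
	expiration := int64(3607) // default projected token lifetime (assumed)
	return corev1.Volume{
		Name: name, // e.g. "kube-api-access-446bh" from the log
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiration,
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
}

func main() {
	v := kubeAPIAccessVolume("kube-api-access-446bh")
	fmt.Printf("projected volume %q with %d sources\n", v.Name, len(v.VolumeSource.Projected.Sources))
}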
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-g4s6z" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.041710 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcbg9\" (UniqueName: \"kubernetes.io/projected/0828ae93-40ce-46ac-a769-f5e4e735b186-kube-api-access-kcbg9\") pod \"keystone-operator-controller-manager-7879fb76fd-n4vrp\" (UID: \"0828ae93-40ce-46ac-a769-f5e4e735b186\") " pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-n4vrp" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.041845 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhnvn\" (UniqueName: \"kubernetes.io/projected/902981fa-5b52-4436-b685-18372dd43999-kube-api-access-qhnvn\") pod \"ironic-operator-controller-manager-5c75d7c94b-llzpm\" (UID: \"902981fa-5b52-4436-b685-18372dd43999\") " pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-llzpm" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.041877 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8805826-cc65-49b4-ace4-44f25c209b4e-cert\") pod \"infra-operator-controller-manager-769d9c7585-lx4b9\" (UID: \"e8805826-cc65-49b4-ace4-44f25c209b4e\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-lx4b9" Nov 21 10:01:02 crc kubenswrapper[4972]: E1121 10:01:02.042054 4972 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 21 10:01:02 crc kubenswrapper[4972]: E1121 10:01:02.042104 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8805826-cc65-49b4-ace4-44f25c209b4e-cert podName:e8805826-cc65-49b4-ace4-44f25c209b4e nodeName:}" failed. No retries permitted until 2025-11-21 10:01:02.542084938 +0000 UTC m=+1207.651227436 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e8805826-cc65-49b4-ace4-44f25c209b4e-cert") pod "infra-operator-controller-manager-769d9c7585-lx4b9" (UID: "e8805826-cc65-49b4-ace4-44f25c209b4e") : secret "infra-operator-webhook-server-cert" not found Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.054897 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-vhv8d" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.062710 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-hl7g2" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.066780 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-t6wbg" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.074141 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-66b7d6f598-7dh7w"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.075379 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-7dh7w" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.077519 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-ln9w5" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.083156 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnbs8\" (UniqueName: \"kubernetes.io/projected/e8805826-cc65-49b4-ace4-44f25c209b4e-kube-api-access-xnbs8\") pod \"infra-operator-controller-manager-769d9c7585-lx4b9\" (UID: \"e8805826-cc65-49b4-ace4-44f25c209b4e\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-lx4b9" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.084969 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr72d\" (UniqueName: \"kubernetes.io/projected/4d93267d-02e5-4045-aa0a-5edf3730b4cf-kube-api-access-lr72d\") pod \"heat-operator-controller-manager-bf4c6585d-jbvkg\" (UID: \"4d93267d-02e5-4045-aa0a-5edf3730b4cf\") " pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-jbvkg" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.094222 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb477\" (UniqueName: \"kubernetes.io/projected/1d892fbc-e66c-405d-9d42-e306d9b652b8-kube-api-access-xb477\") pod \"horizon-operator-controller-manager-5d86b44686-fdwgn\" (UID: \"1d892fbc-e66c-405d-9d42-e306d9b652b8\") " pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-fdwgn" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.103901 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-86d796d84d-ddgk9"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.105008 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-ddgk9" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.108963 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-vjqzp" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.112952 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdxtf\" (UniqueName: \"kubernetes.io/projected/56043c1e-adb8-4d37-9067-cb28a5103fdd-kube-api-access-zdxtf\") pod \"glance-operator-controller-manager-8667fbf6f6-nkg8q\" (UID: \"56043c1e-adb8-4d37-9067-cb28a5103fdd\") " pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-nkg8q" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.120721 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-66b7d6f598-7dh7w"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.121030 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-jbvkg" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.138380 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-fdwgn" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.141398 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-86d796d84d-ddgk9"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.142800 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhnvn\" (UniqueName: \"kubernetes.io/projected/902981fa-5b52-4436-b685-18372dd43999-kube-api-access-qhnvn\") pod \"ironic-operator-controller-manager-5c75d7c94b-llzpm\" (UID: \"902981fa-5b52-4436-b685-18372dd43999\") " pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-llzpm" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.142929 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff9l7\" (UniqueName: \"kubernetes.io/projected/2a54b288-8941-4f22-bfe1-d99802311c60-kube-api-access-ff9l7\") pod \"mariadb-operator-controller-manager-6f8c5b86cb-chz4z\" (UID: \"2a54b288-8941-4f22-bfe1-d99802311c60\") " pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-chz4z" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.142971 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrph8\" (UniqueName: \"kubernetes.io/projected/40fab427-66b5-4519-b416-0d5f253e2c10-kube-api-access-zrph8\") pod \"manila-operator-controller-manager-7bb88cb858-7cn8g\" (UID: \"40fab427-66b5-4519-b416-0d5f253e2c10\") " pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-7cn8g" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.143019 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcbg9\" (UniqueName: \"kubernetes.io/projected/0828ae93-40ce-46ac-a769-f5e4e735b186-kube-api-access-kcbg9\") pod \"keystone-operator-controller-manager-7879fb76fd-n4vrp\" (UID: \"0828ae93-40ce-46ac-a769-f5e4e735b186\") " pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-n4vrp" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.146404 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6fdc856c5d-lcbxj"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.147724 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-lcbxj" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.165238 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6fdc856c5d-lcbxj"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.170790 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-nc6xt" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.176312 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrph8\" (UniqueName: \"kubernetes.io/projected/40fab427-66b5-4519-b416-0d5f253e2c10-kube-api-access-zrph8\") pod \"manila-operator-controller-manager-7bb88cb858-7cn8g\" (UID: \"40fab427-66b5-4519-b416-0d5f253e2c10\") " pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-7cn8g" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.182250 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.183908 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.184446 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcbg9\" (UniqueName: \"kubernetes.io/projected/0828ae93-40ce-46ac-a769-f5e4e735b186-kube-api-access-kcbg9\") pod \"keystone-operator-controller-manager-7879fb76fd-n4vrp\" (UID: \"0828ae93-40ce-46ac-a769-f5e4e735b186\") " pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-n4vrp" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.190132 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-ghc47" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.190362 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.191354 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhnvn\" (UniqueName: \"kubernetes.io/projected/902981fa-5b52-4436-b685-18372dd43999-kube-api-access-qhnvn\") pod \"ironic-operator-controller-manager-5c75d7c94b-llzpm\" (UID: \"902981fa-5b52-4436-b685-18372dd43999\") " pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-llzpm" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.203350 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-llzpm" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.215920 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nsdgb"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.217383 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nsdgb" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.219351 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-lgz2n" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.222518 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-n4vrp" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.233769 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-7cn8g" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.244530 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff9l7\" (UniqueName: \"kubernetes.io/projected/2a54b288-8941-4f22-bfe1-d99802311c60-kube-api-access-ff9l7\") pod \"mariadb-operator-controller-manager-6f8c5b86cb-chz4z\" (UID: \"2a54b288-8941-4f22-bfe1-d99802311c60\") " pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-chz4z" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.244618 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9czn\" (UniqueName: \"kubernetes.io/projected/6bbfd974-9b27-4143-9c5c-031c5a4f28b2-kube-api-access-k9czn\") pod \"neutron-operator-controller-manager-66b7d6f598-7dh7w\" (UID: \"6bbfd974-9b27-4143-9c5c-031c5a4f28b2\") " pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-7dh7w" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.244655 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqj6x\" (UniqueName: \"kubernetes.io/projected/2bfc0746-c4ed-4b34-991b-3b63f96614e6-kube-api-access-fqj6x\") pod \"nova-operator-controller-manager-86d796d84d-ddgk9\" (UID: \"2bfc0746-c4ed-4b34-991b-3b63f96614e6\") " pod="openstack-operators/nova-operator-controller-manager-86d796d84d-ddgk9" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.244689 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ngzs\" (UniqueName: \"kubernetes.io/projected/237d24ba-9f0a-4d48-b416-7a7aa7692bbf-kube-api-access-6ngzs\") pod \"octavia-operator-controller-manager-6fdc856c5d-lcbxj\" (UID: \"237d24ba-9f0a-4d48-b416-7a7aa7692bbf\") " pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-lcbxj" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.275061 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nsdgb"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.279491 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff9l7\" (UniqueName: \"kubernetes.io/projected/2a54b288-8941-4f22-bfe1-d99802311c60-kube-api-access-ff9l7\") pod \"mariadb-operator-controller-manager-6f8c5b86cb-chz4z\" (UID: \"2a54b288-8941-4f22-bfe1-d99802311c60\") " pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-chz4z" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.290471 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-6dc664666c-bxcdb"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.291942 4972 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-bxcdb" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.307041 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-v4xmt" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.314918 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.343599 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-6dc664666c-bxcdb"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.346079 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ngzs\" (UniqueName: \"kubernetes.io/projected/237d24ba-9f0a-4d48-b416-7a7aa7692bbf-kube-api-access-6ngzs\") pod \"octavia-operator-controller-manager-6fdc856c5d-lcbxj\" (UID: \"237d24ba-9f0a-4d48-b416-7a7aa7692bbf\") " pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-lcbxj" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.346166 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-955p5\" (UniqueName: \"kubernetes.io/projected/a354b71d-0aa7-4cac-8022-90de180af97d-kube-api-access-955p5\") pod \"ovn-operator-controller-manager-5bdf4f7f7f-nsdgb\" (UID: \"a354b71d-0aa7-4cac-8022-90de180af97d\") " pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nsdgb" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.346210 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvw4g\" (UniqueName: \"kubernetes.io/projected/65d1f303-dffc-4de3-8192-4c74f4c33750-kube-api-access-wvw4g\") pod \"openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk\" (UID: \"65d1f303-dffc-4de3-8192-4c74f4c33750\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.346232 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9czn\" (UniqueName: \"kubernetes.io/projected/6bbfd974-9b27-4143-9c5c-031c5a4f28b2-kube-api-access-k9czn\") pod \"neutron-operator-controller-manager-66b7d6f598-7dh7w\" (UID: \"6bbfd974-9b27-4143-9c5c-031c5a4f28b2\") " pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-7dh7w" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.346254 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65d1f303-dffc-4de3-8192-4c74f4c33750-cert\") pod \"openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk\" (UID: \"65d1f303-dffc-4de3-8192-4c74f4c33750\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.346276 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqj6x\" (UniqueName: \"kubernetes.io/projected/2bfc0746-c4ed-4b34-991b-3b63f96614e6-kube-api-access-fqj6x\") pod \"nova-operator-controller-manager-86d796d84d-ddgk9\" (UID: \"2bfc0746-c4ed-4b34-991b-3b63f96614e6\") " 
pod="openstack-operators/nova-operator-controller-manager-86d796d84d-ddgk9" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.360263 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-799cb6ffd6-92vm8"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.367236 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-92vm8" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.377181 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-x6r2h" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.397134 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-nkg8q" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.404997 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ngzs\" (UniqueName: \"kubernetes.io/projected/237d24ba-9f0a-4d48-b416-7a7aa7692bbf-kube-api-access-6ngzs\") pod \"octavia-operator-controller-manager-6fdc856c5d-lcbxj\" (UID: \"237d24ba-9f0a-4d48-b416-7a7aa7692bbf\") " pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-lcbxj" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.414883 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-799cb6ffd6-92vm8"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.415682 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqj6x\" (UniqueName: \"kubernetes.io/projected/2bfc0746-c4ed-4b34-991b-3b63f96614e6-kube-api-access-fqj6x\") pod \"nova-operator-controller-manager-86d796d84d-ddgk9\" (UID: \"2bfc0746-c4ed-4b34-991b-3b63f96614e6\") " pod="openstack-operators/nova-operator-controller-manager-86d796d84d-ddgk9" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.437346 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7798859c74-b2644"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.439599 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9czn\" (UniqueName: \"kubernetes.io/projected/6bbfd974-9b27-4143-9c5c-031c5a4f28b2-kube-api-access-k9czn\") pod \"neutron-operator-controller-manager-66b7d6f598-7dh7w\" (UID: \"6bbfd974-9b27-4143-9c5c-031c5a4f28b2\") " pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-7dh7w" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.447299 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7798859c74-b2644"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.447500 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-b2644" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.452312 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-8464cf66df-66m2r"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.453097 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvw4g\" (UniqueName: \"kubernetes.io/projected/65d1f303-dffc-4de3-8192-4c74f4c33750-kube-api-access-wvw4g\") pod \"openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk\" (UID: \"65d1f303-dffc-4de3-8192-4c74f4c33750\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.453132 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65d1f303-dffc-4de3-8192-4c74f4c33750-cert\") pod \"openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk\" (UID: \"65d1f303-dffc-4de3-8192-4c74f4c33750\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.453235 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-955p5\" (UniqueName: \"kubernetes.io/projected/a354b71d-0aa7-4cac-8022-90de180af97d-kube-api-access-955p5\") pod \"ovn-operator-controller-manager-5bdf4f7f7f-nsdgb\" (UID: \"a354b71d-0aa7-4cac-8022-90de180af97d\") " pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nsdgb" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.453272 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj5xx\" (UniqueName: \"kubernetes.io/projected/7a574f0d-6d00-487d-af74-19d886ccc174-kube-api-access-sj5xx\") pod \"placement-operator-controller-manager-6dc664666c-bxcdb\" (UID: \"7a574f0d-6d00-487d-af74-19d886ccc174\") " pod="openstack-operators/placement-operator-controller-manager-6dc664666c-bxcdb" Nov 21 10:01:02 crc kubenswrapper[4972]: E1121 10:01:02.453726 4972 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 21 10:01:02 crc kubenswrapper[4972]: E1121 10:01:02.453783 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65d1f303-dffc-4de3-8192-4c74f4c33750-cert podName:65d1f303-dffc-4de3-8192-4c74f4c33750 nodeName:}" failed. No retries permitted until 2025-11-21 10:01:02.953755131 +0000 UTC m=+1208.062897629 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/65d1f303-dffc-4de3-8192-4c74f4c33750-cert") pod "openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk" (UID: "65d1f303-dffc-4de3-8192-4c74f4c33750") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.454363 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-d99kk" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.455068 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8464cf66df-66m2r" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.460702 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-g5jlf" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.484291 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-chz4z" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.501917 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8464cf66df-66m2r"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.504709 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvw4g\" (UniqueName: \"kubernetes.io/projected/65d1f303-dffc-4de3-8192-4c74f4c33750-kube-api-access-wvw4g\") pod \"openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk\" (UID: \"65d1f303-dffc-4de3-8192-4c74f4c33750\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.506145 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-7dh7w" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.529649 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-955p5\" (UniqueName: \"kubernetes.io/projected/a354b71d-0aa7-4cac-8022-90de180af97d-kube-api-access-955p5\") pod \"ovn-operator-controller-manager-5bdf4f7f7f-nsdgb\" (UID: \"a354b71d-0aa7-4cac-8022-90de180af97d\") " pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nsdgb" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.543125 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-ddgk9" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.563699 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-lcbxj" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.565875 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbpfl\" (UniqueName: \"kubernetes.io/projected/220363ab-e1b7-44ee-962d-e2de79b22ab6-kube-api-access-hbpfl\") pod \"telemetry-operator-controller-manager-7798859c74-b2644\" (UID: \"220363ab-e1b7-44ee-962d-e2de79b22ab6\") " pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-b2644" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.566011 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj5xx\" (UniqueName: \"kubernetes.io/projected/7a574f0d-6d00-487d-af74-19d886ccc174-kube-api-access-sj5xx\") pod \"placement-operator-controller-manager-6dc664666c-bxcdb\" (UID: \"7a574f0d-6d00-487d-af74-19d886ccc174\") " pod="openstack-operators/placement-operator-controller-manager-6dc664666c-bxcdb" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.566048 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qbs4\" (UniqueName: \"kubernetes.io/projected/12710259-8a41-4aa6-8842-54b6ac9aad22-kube-api-access-5qbs4\") pod \"test-operator-controller-manager-8464cf66df-66m2r\" (UID: \"12710259-8a41-4aa6-8842-54b6ac9aad22\") " pod="openstack-operators/test-operator-controller-manager-8464cf66df-66m2r" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.566082 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jjbc\" (UniqueName: \"kubernetes.io/projected/43051777-1676-4b54-8ddb-ba534e3d1b51-kube-api-access-7jjbc\") pod \"swift-operator-controller-manager-799cb6ffd6-92vm8\" (UID: \"43051777-1676-4b54-8ddb-ba534e3d1b51\") " pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-92vm8" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.566152 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8805826-cc65-49b4-ace4-44f25c209b4e-cert\") pod \"infra-operator-controller-manager-769d9c7585-lx4b9\" (UID: \"e8805826-cc65-49b4-ace4-44f25c209b4e\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-lx4b9" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.615006 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8805826-cc65-49b4-ace4-44f25c209b4e-cert\") pod \"infra-operator-controller-manager-769d9c7585-lx4b9\" (UID: \"e8805826-cc65-49b4-ace4-44f25c209b4e\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-lx4b9" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.620947 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2kq6z"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.622154 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2kq6z" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.624886 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-tmhgt" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.627953 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj5xx\" (UniqueName: \"kubernetes.io/projected/7a574f0d-6d00-487d-af74-19d886ccc174-kube-api-access-sj5xx\") pod \"placement-operator-controller-manager-6dc664666c-bxcdb\" (UID: \"7a574f0d-6d00-487d-af74-19d886ccc174\") " pod="openstack-operators/placement-operator-controller-manager-6dc664666c-bxcdb" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.633169 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nsdgb" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.688692 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2kq6z"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.693441 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-bxcdb" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.694042 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qbs4\" (UniqueName: \"kubernetes.io/projected/12710259-8a41-4aa6-8842-54b6ac9aad22-kube-api-access-5qbs4\") pod \"test-operator-controller-manager-8464cf66df-66m2r\" (UID: \"12710259-8a41-4aa6-8842-54b6ac9aad22\") " pod="openstack-operators/test-operator-controller-manager-8464cf66df-66m2r" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.694104 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jjbc\" (UniqueName: \"kubernetes.io/projected/43051777-1676-4b54-8ddb-ba534e3d1b51-kube-api-access-7jjbc\") pod \"swift-operator-controller-manager-799cb6ffd6-92vm8\" (UID: \"43051777-1676-4b54-8ddb-ba534e3d1b51\") " pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-92vm8" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.694240 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbpfl\" (UniqueName: \"kubernetes.io/projected/220363ab-e1b7-44ee-962d-e2de79b22ab6-kube-api-access-hbpfl\") pod \"telemetry-operator-controller-manager-7798859c74-b2644\" (UID: \"220363ab-e1b7-44ee-962d-e2de79b22ab6\") " pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-b2644" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.763749 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qbs4\" (UniqueName: \"kubernetes.io/projected/12710259-8a41-4aa6-8842-54b6ac9aad22-kube-api-access-5qbs4\") pod \"test-operator-controller-manager-8464cf66df-66m2r\" (UID: \"12710259-8a41-4aa6-8842-54b6ac9aad22\") " pod="openstack-operators/test-operator-controller-manager-8464cf66df-66m2r" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.777914 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-lx4b9" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.811957 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jjbc\" (UniqueName: \"kubernetes.io/projected/43051777-1676-4b54-8ddb-ba534e3d1b51-kube-api-access-7jjbc\") pod \"swift-operator-controller-manager-799cb6ffd6-92vm8\" (UID: \"43051777-1676-4b54-8ddb-ba534e3d1b51\") " pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-92vm8" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.814024 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbpfl\" (UniqueName: \"kubernetes.io/projected/220363ab-e1b7-44ee-962d-e2de79b22ab6-kube-api-access-hbpfl\") pod \"telemetry-operator-controller-manager-7798859c74-b2644\" (UID: \"220363ab-e1b7-44ee-962d-e2de79b22ab6\") " pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-b2644" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.819243 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krnr6\" (UniqueName: \"kubernetes.io/projected/3efec5f8-69ac-4971-a9e5-2c53352cabed-kube-api-access-krnr6\") pod \"watcher-operator-controller-manager-7cd4fb6f79-2kq6z\" (UID: \"3efec5f8-69ac-4971-a9e5-2c53352cabed\") " pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2kq6z" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.832503 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-b2644" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.833183 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8464cf66df-66m2r" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.849692 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7755d5f8cc-z86dh"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.851638 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7755d5f8cc-z86dh" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.858978 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7755d5f8cc-z86dh"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.859164 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.859471 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-547r4" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.873333 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xsp8k"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.875025 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xsp8k" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.878016 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-csxsb" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.885145 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xsp8k"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.905115 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-56dfb6b67f-vhv8d"] Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.920299 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krnr6\" (UniqueName: \"kubernetes.io/projected/3efec5f8-69ac-4971-a9e5-2c53352cabed-kube-api-access-krnr6\") pod \"watcher-operator-controller-manager-7cd4fb6f79-2kq6z\" (UID: \"3efec5f8-69ac-4971-a9e5-2c53352cabed\") " pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2kq6z" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.920645 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdzwx\" (UniqueName: \"kubernetes.io/projected/06f6d748-6a75-4e4a-b642-790efd655fac-kube-api-access-vdzwx\") pod \"openstack-operator-controller-manager-7755d5f8cc-z86dh\" (UID: \"06f6d748-6a75-4e4a-b642-790efd655fac\") " pod="openstack-operators/openstack-operator-controller-manager-7755d5f8cc-z86dh" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.920771 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06f6d748-6a75-4e4a-b642-790efd655fac-cert\") pod \"openstack-operator-controller-manager-7755d5f8cc-z86dh\" (UID: \"06f6d748-6a75-4e4a-b642-790efd655fac\") " pod="openstack-operators/openstack-operator-controller-manager-7755d5f8cc-z86dh" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.920860 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rqqq\" (UniqueName: \"kubernetes.io/projected/a4700bb1-902c-40bc-b02f-c8efe4893180-kube-api-access-7rqqq\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-xsp8k\" (UID: \"a4700bb1-902c-40bc-b02f-c8efe4893180\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xsp8k" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.940472 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krnr6\" (UniqueName: \"kubernetes.io/projected/3efec5f8-69ac-4971-a9e5-2c53352cabed-kube-api-access-krnr6\") pod \"watcher-operator-controller-manager-7cd4fb6f79-2kq6z\" (UID: \"3efec5f8-69ac-4971-a9e5-2c53352cabed\") " pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2kq6z" Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.960406 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 10:01:02 crc kubenswrapper[4972]: I1121 10:01:02.964360 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2kq6z" Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.022025 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdzwx\" (UniqueName: \"kubernetes.io/projected/06f6d748-6a75-4e4a-b642-790efd655fac-kube-api-access-vdzwx\") pod \"openstack-operator-controller-manager-7755d5f8cc-z86dh\" (UID: \"06f6d748-6a75-4e4a-b642-790efd655fac\") " pod="openstack-operators/openstack-operator-controller-manager-7755d5f8cc-z86dh" Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.022080 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06f6d748-6a75-4e4a-b642-790efd655fac-cert\") pod \"openstack-operator-controller-manager-7755d5f8cc-z86dh\" (UID: \"06f6d748-6a75-4e4a-b642-790efd655fac\") " pod="openstack-operators/openstack-operator-controller-manager-7755d5f8cc-z86dh" Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.022115 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65d1f303-dffc-4de3-8192-4c74f4c33750-cert\") pod \"openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk\" (UID: \"65d1f303-dffc-4de3-8192-4c74f4c33750\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk" Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.022140 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rqqq\" (UniqueName: \"kubernetes.io/projected/a4700bb1-902c-40bc-b02f-c8efe4893180-kube-api-access-7rqqq\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-xsp8k\" (UID: \"a4700bb1-902c-40bc-b02f-c8efe4893180\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xsp8k" Nov 21 10:01:03 crc kubenswrapper[4972]: E1121 10:01:03.022300 4972 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 21 10:01:03 crc kubenswrapper[4972]: E1121 10:01:03.022346 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06f6d748-6a75-4e4a-b642-790efd655fac-cert podName:06f6d748-6a75-4e4a-b642-790efd655fac nodeName:}" failed. No retries permitted until 2025-11-21 10:01:03.522331589 +0000 UTC m=+1208.631474077 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/06f6d748-6a75-4e4a-b642-790efd655fac-cert") pod "openstack-operator-controller-manager-7755d5f8cc-z86dh" (UID: "06f6d748-6a75-4e4a-b642-790efd655fac") : secret "webhook-server-cert" not found Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.027184 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65d1f303-dffc-4de3-8192-4c74f4c33750-cert\") pod \"openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk\" (UID: \"65d1f303-dffc-4de3-8192-4c74f4c33750\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk" Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.036638 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-92vm8" Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.042514 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rqqq\" (UniqueName: \"kubernetes.io/projected/a4700bb1-902c-40bc-b02f-c8efe4893180-kube-api-access-7rqqq\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-xsp8k\" (UID: \"a4700bb1-902c-40bc-b02f-c8efe4893180\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xsp8k" Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.047611 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdzwx\" (UniqueName: \"kubernetes.io/projected/06f6d748-6a75-4e4a-b642-790efd655fac-kube-api-access-vdzwx\") pod \"openstack-operator-controller-manager-7755d5f8cc-z86dh\" (UID: \"06f6d748-6a75-4e4a-b642-790efd655fac\") " pod="openstack-operators/openstack-operator-controller-manager-7755d5f8cc-z86dh" Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.175393 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk" Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.254987 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xsp8k" Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.419867 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-vhv8d" event={"ID":"647ff257-bc23-44e5-9397-2696f04520d4","Type":"ContainerStarted","Data":"f5cca619efa763e2bcf75a2ca7d2649aecd4d1e03086a60fd1add8feca232394"} Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.528916 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06f6d748-6a75-4e4a-b642-790efd655fac-cert\") pod \"openstack-operator-controller-manager-7755d5f8cc-z86dh\" (UID: \"06f6d748-6a75-4e4a-b642-790efd655fac\") " pod="openstack-operators/openstack-operator-controller-manager-7755d5f8cc-z86dh" Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.533157 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06f6d748-6a75-4e4a-b642-790efd655fac-cert\") pod \"openstack-operator-controller-manager-7755d5f8cc-z86dh\" (UID: \"06f6d748-6a75-4e4a-b642-790efd655fac\") " pod="openstack-operators/openstack-operator-controller-manager-7755d5f8cc-z86dh" Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.809521 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7755d5f8cc-z86dh" Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.850912 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7879fb76fd-n4vrp"] Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.858681 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7768f8c84f-t6wbg"] Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.881167 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-bf4c6585d-jbvkg"] Nov 21 10:01:03 crc kubenswrapper[4972]: W1121 10:01:03.884079 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0828ae93_40ce_46ac_a769_f5e4e735b186.slice/crio-0acfbe57cd9471a011eaee19f27c986b6e41624b336be457c5b9ae45d223b92d WatchSource:0}: Error finding container 0acfbe57cd9471a011eaee19f27c986b6e41624b336be457c5b9ae45d223b92d: Status 404 returned error can't find the container with id 0acfbe57cd9471a011eaee19f27c986b6e41624b336be457c5b9ae45d223b92d Nov 21 10:01:03 crc kubenswrapper[4972]: W1121 10:01:03.916982 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d892fbc_e66c_405d_9d42_e306d9b652b8.slice/crio-4e67763f40479ddbb9edf23402cce589be74775d26a9cc4083b130a49f4b6e6c WatchSource:0}: Error finding container 4e67763f40479ddbb9edf23402cce589be74775d26a9cc4083b130a49f4b6e6c: Status 404 returned error can't find the container with id 4e67763f40479ddbb9edf23402cce589be74775d26a9cc4083b130a49f4b6e6c Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.917919 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8667fbf6f6-nkg8q"] Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.931942 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d86b44686-fdwgn"] Nov 21 10:01:03 crc kubenswrapper[4972]: W1121 10:01:03.936147 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6aea9dc6_2e85_470c_9897_064111a4661e.slice/crio-8ccd3ccbe56095ea916360b98f105906eb5d0278245ae54724a5dc76df9b8b22 WatchSource:0}: Error finding container 8ccd3ccbe56095ea916360b98f105906eb5d0278245ae54724a5dc76df9b8b22: Status 404 returned error can't find the container with id 8ccd3ccbe56095ea916360b98f105906eb5d0278245ae54724a5dc76df9b8b22 Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.940800 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-66b7d6f598-7dh7w"] Nov 21 10:01:03 crc kubenswrapper[4972]: W1121 10:01:03.943859 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bbfd974_9b27_4143_9c5c_031c5a4f28b2.slice/crio-42c35bec5f920e9a70f04161d042036d2ed38c71c6ba0c78343d487e7fb8304c WatchSource:0}: Error finding container 42c35bec5f920e9a70f04161d042036d2ed38c71c6ba0c78343d487e7fb8304c: Status 404 returned error can't find the container with id 42c35bec5f920e9a70f04161d042036d2ed38c71c6ba0c78343d487e7fb8304c Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.949285 4972 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-g4s6z"] Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.955042 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6fdc856c5d-lcbxj"] Nov 21 10:01:03 crc kubenswrapper[4972]: W1121 10:01:03.955672 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8805826_cc65_49b4_ace4_44f25c209b4e.slice/crio-477f841e6036a515545209d622da8c4bdc6920242aee6860398794090d42965b WatchSource:0}: Error finding container 477f841e6036a515545209d622da8c4bdc6920242aee6860398794090d42965b: Status 404 returned error can't find the container with id 477f841e6036a515545209d622da8c4bdc6920242aee6860398794090d42965b Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.960637 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-769d9c7585-lx4b9"] Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.966061 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-6dc664666c-bxcdb"] Nov 21 10:01:03 crc kubenswrapper[4972]: E1121 10:01:03.971207 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-krnr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
watcher-operator-controller-manager-7cd4fb6f79-2kq6z_openstack-operators(3efec5f8-69ac-4971-a9e5-2c53352cabed): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.971525 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5c75d7c94b-llzpm"] Nov 21 10:01:03 crc kubenswrapper[4972]: I1121 10:01:03.975360 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7bb88cb858-7cn8g"] Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.001464 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zrph8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7bb88cb858-7cn8g_openstack-operators(40fab427-66b5-4519-b416-0d5f253e2c10): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.001617 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ff9l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6f8c5b86cb-chz4z_openstack-operators(2a54b288-8941-4f22-bfe1-d99802311c60): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.002412 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qhnvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5c75d7c94b-llzpm_openstack-operators(902981fa-5b52-4436-b685-18372dd43999): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.040319 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-chz4z"] Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.048926 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2kq6z"] Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.116907 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-799cb6ffd6-92vm8"] Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.134286 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk"] Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.140463 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8464cf66df-66m2r"] Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.155346 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-86d796d84d-ddgk9"] Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.161025 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xsp8k"] Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.167864 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nsdgb"] Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.171716 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7798859c74-b2644"] Nov 21 10:01:04 crc kubenswrapper[4972]: W1121 10:01:04.182008 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43051777_1676_4b54_8ddb_ba534e3d1b51.slice/crio-eaff8f1d1a6b7a8d604544d6d1b892d7595d9f74c1057cdb268a0bbfdfb86b69 WatchSource:0}: Error finding container eaff8f1d1a6b7a8d604544d6d1b892d7595d9f74c1057cdb268a0bbfdfb86b69: Status 404 returned error can't find the container with id 
eaff8f1d1a6b7a8d604544d6d1b892d7595d9f74c1057cdb268a0bbfdfb86b69 Nov 21 10:01:04 crc kubenswrapper[4972]: W1121 10:01:04.191809 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65d1f303_dffc_4de3_8192_4c74f4c33750.slice/crio-f0fd6377b020c9021e3110e70805a52a43beccbf5c4ce7c933c49001eddd6d5c WatchSource:0}: Error finding container f0fd6377b020c9021e3110e70805a52a43beccbf5c4ce7c933c49001eddd6d5c: Status 404 returned error can't find the container with id f0fd6377b020c9021e3110e70805a52a43beccbf5c4ce7c933c49001eddd6d5c Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.194055 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent@sha256:7dbadf7b98f2f305f9f1382f55a084c8ca404f4263f76b28e56bd0dc437e2192,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner@sha256:0473ff9eec0da231e2d0a10bf1abbe1dfa1a0f95b8f619e3a07605386951449a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api@sha256:0b01822660d01c348c5dfb2514e36d18ee8e62af93b3701022d7a1da1292f523,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator@sha256:6abb4f939a264a8d5dae90605b987d623e84aef9dfe8656534314747d4aba5bc,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener@sha256:4af3a62ffa4399d44c9bb3e8a92df4e926b45a471afbd6603d3246ed28ae72da,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier@sha256:bd19a2bf617202a6cec41dc2ad770731503c9586b1bd80baf05a40ffbc4bc80f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24@sha256:8536169e5537fe6c330eba814248abdcf39cdd8f7e7336034d74e6fda9544050,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:09513b0e7384092548dd654fa2356d64e243315cf59fa8857bd6c4a3ae4037c4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener@sha256:45be724bad0edfc2b4ee1b1e73b36ae03e43058f3efd984ac5baba5f94c71416,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker@sha256:8072bd781b1bebf89ae6b24b81175c11090d1996aab86767ffd6b25bae57df99,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:7b50597c6dbca671784fa98d7317a18980f1a1ac729254460e6d769d70dfefc
5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:31b63255661d23eb7478c8a22f357e1da5a25a62d92ec47f9adab2d0ac52fac9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi@sha256:c0a2e0a3f7f2bdf7eec1c7c3e73f874c159a9857e25861f575c0ad1ddf5d8823,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter@sha256:7211a617ec657701ca819aa0ba28e1d5750f5bf2c1391b755cc4a48cc360b0fa,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification@sha256:cb7b59ee26e63eba841543d716424cd69b3ba4f0fc27b208d3ab3d315e4feac5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core@sha256:09b5017c95d7697e66b9c64846bc48ef5826a009cba89b956ec54561e5f4a2d1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:25d0e71e0464df9502ad8bd3af0f73caeaca1bae11d89b4b5992b4fe712eda3a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:0e8588aca751bde5059e3840ffbdf64ce88f4b039896b269ab43984df33813dd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:9e907091034575b4133799eff52b662f28e9ef2a0dd0dbabb45f1ae37d5d53a0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:33dfd28630aefc6c4445c4b88545604d693e42948077a1d82d25ad0681945f5b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api@sha256:de480b9a8fc7987d8b604bd1ea6ec0dc067f29b9adf4a059c3de5d05c5723e6c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor@sha256:5ec051551190526b71903c6f8d3350bf7b3534f8de3a5d3e14526c9a13d2d31d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api@sha256:d8a3a65a802c32ab522af42eb014d769aa1a8fcfaae2a290a4b2e7273f854a3b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9@sha256:4eb6ce54f9ac730022d656df8811d5f3b378492b7bd834ce0c3b87d8298196e2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central@sha256:edc1e595520c8343ca4f7dc0d82989e9c71c40d30ffb3b2ea9b55dc6bc3b58c9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns@sha256:b885e03262ed683f50147a1d608e00490b456fdc9549f98ff7ab2fb1ad6f3ba2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer@sha256:e15a8afdcde5ea2a062367d8050379614380becdb0aa12259a6b40a35d3f549e,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-c
entos9/openstack-unbound@sha256:ae8454dd413dad333e842c0d0c65cd002e626fc0b83d2bd73b499277b6f7b2ca,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker@sha256:e2f9bdb51e014e7539d4e256a18f6cb0057dedcf58618b7f15eadd10aa89d333,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr@sha256:b6a3181c148a034c61a9cfa66350f2099f3a3059b6efcc294cad3f5baf3ca7f4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid@sha256:2ca2f0d178f7e6b6fbdb87f728faf6a2e41a3104a5631dd38ffe7712da6734c8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler@sha256:581b65b646301e0fcb07582150ba63438f1353a85bf9acf1eb2acb4ce71c58bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron@sha256:280e747514c209448fea075c0dc18176ae7f616e11a9241c0eb5fa75af318cf2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd@sha256:70a380cf725ccf6987fd10b44dd8d8807e200b16b8aac2a842f5b370cdd58df7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent@sha256:f3d5ecc50e8bc5739fbe7d38d56eb2a0cf3581eef3fa5c8dcd698edb1b73121d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:6eebf6cf424c5301641247c897052de8ef7493ba2f01e275090f8d9e6313b9d3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent@sha256:2cb4f9d22ddf7067a8d840c9b2c34000e8b0a148a20731ea41ea039e18d8caf0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent@sha256:80a14ed2cb8189891feda53680b05c64aea7725416b7271317652de1470e3f10,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent@sha256:4ec37c32887a63b65e29eea74af1d12a563307e13c889983e1242f681832c108,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:8c7ecaaf282fb3dd419c02a3e017d5f190e1e0831965f1ce366b9763700b4e4a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api@sha256:14a9575be32d7d8c7bf233a8d7cdf5e0067b2a7944c46b97c09c1f835a73902a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn@sha256:8cdabbe1caabd5eee32a6b0ca37bc80f0cbdabe7621031d9e60193e4ca0e9a05,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-cento
s9/openstack-heat-engine@sha256:c7e57e8bde805b8943714bfa0c98b2fdf77ecd02311dd7d3be8edeee11311cd6,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon@sha256:52247334a2f9162f9b7a78302447e80deb93fa7c4025375012f59fa49bb767a6,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached@sha256:0b7651290cfea3b21b9232a1343849a0fbfe605e1aa85e22c5090a38139d6015,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis@sha256:7ee23e0d3c8de6e852cc4ec33e4ed7946cea345e8fc68dfc9d17ba2c3ee55fd0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:88d396276e4eee9b965504b60aa0d192aa5b72012f438dfc057ff4ac9d2b42f4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:ab6ca66e487c2473042dd263a2deecad6520dde8c2f592ca7e0d74754155935b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:2fe6e08337666e2d35ac5b5cac35b0bab0eee274105b1febba4de4441ae30883,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:15057dd1b30deb9fe992e6df65b3a10e3c4f6e041582a82de40ba4fb14a6b2f4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:0e643ca2a5a61a2e8155d2e2ac5f43c8b7114c238b2f6728453b8ddd95872ea9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent@sha256:73b164a8d974d2acd052690f5ce36387b9c6603cc5531ef47edc636d250483d7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone@sha256:c54b00fd39e9221423904356e54d25937102cee6eab40ffcbe9be50c3211c998,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api@sha256:5ee35f3a2d86632c1c5faa090cc2adbbaf97a11f57f9112217f07e718f70dfa2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler@sha256:e4df658fe3853c3b3307f5b4770344234954751bff03d9b2e1972bef7f6816c2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share@sha256:27525b2fd15a6454106116fa33af54a8f81dea3af61ad3305f22d4b2ebc9f768,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:3db27fbfc3fabe2f62c68ab1b9f24383a73554f2d6d1f178147088832619013a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils@sha256:0ae27dc0d5ffd5933936f4f42d332bc7f4b870ebd03fec10f6d03fae88650a30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:a0e60
62f505fbc848d62675995abc3806bc5c12530d3d41ed16066e07f71b2d3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api@sha256:c204f18c5ff6657cd13b59be40476c4b6791c74592aafbc480c974e9a6de9c5f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5cdaeb65c87876031aef31dd91bea0d05ff3c6d00085809f94cce6a7469a6934,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:fed8bb29eb85284462a99cb1013100f375ae83e1d68ef0954dbc3c2d7074f911,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:a6fb0cb1412380c623aaeafa5cb1d016b831fc9d68f21f187b3728f1c40b02ed,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:d316f6e58667b3a319be82ed594bffdd76df221be07ce686a634383a68d8fbb7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api@sha256:ab7f58d75373e26c020d12fc845c30b64ba0f236a163efd1880c523a3321054c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager@sha256:37c702f09d13273b414b7dd87f9ea08bc78dbc4c8bda9a83597d68d41276be68,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping@sha256:f1bbe5a8199a7c1839b00b475f562506797b980a019649b116eea4e071751ce7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog@sha256:f2928dddda751a404f5c5b7c73f04c0229a293773ef20165d9861cf02d54fc0c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker@sha256:865faa8b740748e2675dd7efeaa31b210e681fa04cc0a7abb69256746019dca3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:fed8e34e2c12b6bae77e207c0620bf237aee86c22f0e5e8dc5a39d1920d422cb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather@sha256:0a98e8f5c83522ca6c8e40c5e9561f6628d2d5e69f0e8a64279c541c989d3d8b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi@sha256:48e0946c24a3f577a2f4e894e10ebba7301e2adc3c8a6139e6ba52d2b37f13e1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:88aa29318415550987bdf3847f0d60e53f13b34fa029fd2b1f429622b5fa5528,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:ff4e61688bf3998015f81f1b91e53dfd48de83bb3192d6563689eb32e088c2eb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-c
entos9/openstack-ovn-nb-db-server@sha256:df1a26ffbe6152e142cfeed2500ca85782319d0cb4ec058b5b4e2f3145492e77,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:b50791356d119186382dcc1928e509b4e306ad808a66ff6a694e89fc04f18bcd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:e2de6853508fca6f1e6442757834a66baec36a9cf5f92450dd293fb507aef0c5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:ce8908c54afc567e5791f2940656436928afa6f135ebae31274e5283cf13f448,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:8f8ade19bc904e0b06eaa2d55539cdebb1df40512845962a7aa672223332df90,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account@sha256:b2edf73a93d944944a1b4cfea9a707f786255f279f0fe618462184f45a1c07be,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container@sha256:ac05f3bb94183e19aff696b4108e5302d45de1e84c1eea736c14bbdffb66467b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object@sha256:2736a99c1da01339d4da3a3697fa1e845651dd1c246b75cf6e0de62e6763fb23,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:5d3268a6af8e505ab901049cb8d643a6f3de7e4d7b1cb4820255c11eff9a7bd0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all@sha256:e256f776486dd11e186b847ee547d7fc0ac452b72aeb5d00bc0073e70b929e03,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api@sha256:e697d287395c2f4261f846c6818c1a2c999ab84640baa57c695bc17ee332cead,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier@sha256:3b2f9a900e35cb2ae7a56dd52c0cc44866bb0083bc8bc984fb61ed2440145e3f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine@sha256:3deb7d37852add64cf3bb636ea77265cb58242fde18650d30b5a6755ae259ee8,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wvw4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk_openstack-operators(65d1f303-dffc-4de3-8192-4c74f4c33750): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 21 10:01:04 crc kubenswrapper[4972]: W1121 10:01:04.199107 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12710259_8a41_4aa6_8842_54b6ac9aad22.slice/crio-03b2e72c3be91ac588bae4c0a9dfe348cbd0cac0638bb92c91afb3a202fde597 WatchSource:0}: Error finding container 03b2e72c3be91ac588bae4c0a9dfe348cbd0cac0638bb92c91afb3a202fde597: Status 404 returned error can't find the container with id 03b2e72c3be91ac588bae4c0a9dfe348cbd0cac0638bb92c91afb3a202fde597 Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.206442 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5qbs4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-8464cf66df-66m2r_openstack-operators(12710259-8a41-4aa6-8842-54b6ac9aad22): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.206809 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fqj6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-86d796d84d-ddgk9_openstack-operators(2bfc0746-c4ed-4b34-991b-3b63f96614e6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.220164 4972 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-955p5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-5bdf4f7f7f-nsdgb_openstack-operators(a354b71d-0aa7-4cac-8022-90de180af97d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.228513 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hbpfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7798859c74-b2644_openstack-operators(220363ab-e1b7-44ee-962d-e2de79b22ab6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 21 10:01:04 crc kubenswrapper[4972]: W1121 10:01:04.233823 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4700bb1_902c_40bc_b02f_c8efe4893180.slice/crio-e0cdfb5510217882897256762cdc85f3751c009679b9933a96a14b0816e2c369 WatchSource:0}: Error finding container e0cdfb5510217882897256762cdc85f3751c009679b9933a96a14b0816e2c369: Status 404 returned error can't find the container with id e0cdfb5510217882897256762cdc85f3751c009679b9933a96a14b0816e2c369 Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.238601 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7rqqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-xsp8k_openstack-operators(a4700bb1-902c-40bc-b02f-c8efe4893180): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.239707 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xsp8k" podUID="a4700bb1-902c-40bc-b02f-c8efe4893180" Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.271321 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2kq6z" podUID="3efec5f8-69ac-4971-a9e5-2c53352cabed" Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.359187 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-llzpm" podUID="902981fa-5b52-4436-b685-18372dd43999" Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.369309 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-chz4z" podUID="2a54b288-8941-4f22-bfe1-d99802311c60" Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.376350 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-7cn8g" podUID="40fab427-66b5-4519-b416-0d5f253e2c10" Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.441611 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nsdgb" event={"ID":"a354b71d-0aa7-4cac-8022-90de180af97d","Type":"ContainerStarted","Data":"6cd572ecf7a0a72bef7672a841168cb0d5855d12581d2b0f240ad0a5f8cf0717"} Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.447199 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-7cn8g" event={"ID":"40fab427-66b5-4519-b416-0d5f253e2c10","Type":"ContainerStarted","Data":"1472fc8ff84e8819e257bf1a8354888518358292e2c420ea083f4f327b63ceda"} Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.447256 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
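The repeated ErrImagePull: "pull QPS exceeded" failures recorded above are not registry or network errors: the kubelet rate-limits image pulls with a token-bucket limiter (the KubeletConfiguration fields registryPullQPS and registryBurst, which commonly default to 5 and 10), and with roughly thirty operator pods scheduled onto this single CRC node at the same moment, pulls beyond the burst are rejected immediately and left for the sync loop to retry. A minimal sketch of that mechanism, assuming the default-like values qps=5 and burst=10 (illustration only, not kubelet source):

```go
// Illustration only (assumed values, not kubelet source): how a token-bucket
// limiter such as the kubelet's image-pull limiter rejects part of a burst.
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// Assumed defaults: registryPullQPS=5, registryBurst=10.
	limiter := rate.NewLimiter(rate.Limit(5), 10)

	accepted, rejected := 0, 0
	for i := 0; i < 30; i++ { // ~30 operator pods pulling at the same instant
		if limiter.Allow() {
			accepted++ // pull would proceed
		} else {
			rejected++ // kubelet reports ErrImagePull: "pull QPS exceeded"
		}
	}
	fmt.Printf("accepted=%d rejected=%d\n", accepted, rejected) // accepted=10 rejected=20
}
```

Raising registryPullQPS/registryBurst (or setting registryPullQPS to 0, which disables the limit) in the node's kubelet configuration is the usual way to avoid this on dense single-node clusters; as the later entries show, the rejected pulls here simply succeed on retry.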
pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-7cn8g" event={"ID":"40fab427-66b5-4519-b416-0d5f253e2c10","Type":"ContainerStarted","Data":"3925a327c65837f85e18aa11b43e334ba08481472fb99596a3f9f03742a4c6d0"} Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.448553 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-7dh7w" event={"ID":"6bbfd974-9b27-4143-9c5c-031c5a4f28b2","Type":"ContainerStarted","Data":"42c35bec5f920e9a70f04161d042036d2ed38c71c6ba0c78343d487e7fb8304c"} Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.449226 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-7cn8g" podUID="40fab427-66b5-4519-b416-0d5f253e2c10" Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.450227 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-b2644" event={"ID":"220363ab-e1b7-44ee-962d-e2de79b22ab6","Type":"ContainerStarted","Data":"74603b16e2b257a74a23564706789deaad7e6a32d547af1e9b696bd8e7c91927"} Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.452389 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2kq6z" event={"ID":"3efec5f8-69ac-4971-a9e5-2c53352cabed","Type":"ContainerStarted","Data":"3dd9ba78f77b69e4eca4975be29f4dbb1dda8e7fdd087790e2b6ce367ea6cf17"} Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.452424 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2kq6z" event={"ID":"3efec5f8-69ac-4971-a9e5-2c53352cabed","Type":"ContainerStarted","Data":"31c0f5fee252ec81aac417ac4c80f8d95d9d5a04f600c2fcbc95597f180da458"} Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.456786 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2kq6z" podUID="3efec5f8-69ac-4971-a9e5-2c53352cabed" Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.469315 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-fdwgn" event={"ID":"1d892fbc-e66c-405d-9d42-e306d9b652b8","Type":"ContainerStarted","Data":"4e67763f40479ddbb9edf23402cce589be74775d26a9cc4083b130a49f4b6e6c"} Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.475360 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nsdgb" podUID="a354b71d-0aa7-4cac-8022-90de180af97d" Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.477207 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk" podUID="65d1f303-dffc-4de3-8192-4c74f4c33750" Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.482007 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-jbvkg" event={"ID":"4d93267d-02e5-4045-aa0a-5edf3730b4cf","Type":"ContainerStarted","Data":"57f4862fb90d88e96d23873f765eb03c561fce72480296e0d9120bf085fe86de"} Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.496785 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-n4vrp" event={"ID":"0828ae93-40ce-46ac-a769-f5e4e735b186","Type":"ContainerStarted","Data":"0acfbe57cd9471a011eaee19f27c986b6e41624b336be457c5b9ae45d223b92d"} Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.508574 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-bxcdb" event={"ID":"7a574f0d-6d00-487d-af74-19d886ccc174","Type":"ContainerStarted","Data":"21b79ae977455f8f8edbac8247883e77d0ce4add0855d44e2c04ac2eaa410c19"} Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.516983 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-lx4b9" event={"ID":"e8805826-cc65-49b4-ace4-44f25c209b4e","Type":"ContainerStarted","Data":"477f841e6036a515545209d622da8c4bdc6920242aee6860398794090d42965b"} Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.518275 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xsp8k" event={"ID":"a4700bb1-902c-40bc-b02f-c8efe4893180","Type":"ContainerStarted","Data":"e0cdfb5510217882897256762cdc85f3751c009679b9933a96a14b0816e2c369"} Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.521453 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7755d5f8cc-z86dh"] Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.523079 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xsp8k" podUID="a4700bb1-902c-40bc-b02f-c8efe4893180" Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.524250 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-b2644" podUID="220363ab-e1b7-44ee-962d-e2de79b22ab6" Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.554116 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-8464cf66df-66m2r" podUID="12710259-8a41-4aa6-8842-54b6ac9aad22" Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.555908 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk" 
event={"ID":"65d1f303-dffc-4de3-8192-4c74f4c33750","Type":"ContainerStarted","Data":"f0fd6377b020c9021e3110e70805a52a43beccbf5c4ce7c933c49001eddd6d5c"} Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.558880 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk" podUID="65d1f303-dffc-4de3-8192-4c74f4c33750" Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.614947 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-llzpm" event={"ID":"902981fa-5b52-4436-b685-18372dd43999","Type":"ContainerStarted","Data":"1882f50fedcbd876341c4354902333a1b997943d1862577cf859faed79b1004d"} Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.614990 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-llzpm" event={"ID":"902981fa-5b52-4436-b685-18372dd43999","Type":"ContainerStarted","Data":"69e470df2d6f15073df8f8fb365ff1b4099ea78a20707670b87897734ad75d7d"} Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.622067 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-llzpm" podUID="902981fa-5b52-4436-b685-18372dd43999" Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.625272 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-ddgk9" event={"ID":"2bfc0746-c4ed-4b34-991b-3b63f96614e6","Type":"ContainerStarted","Data":"52f3987b68e5eef6ca86be1b593c25043cf2c9a5ce531e56c068d908e0ec1b14"} Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.626853 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-ddgk9" podUID="2bfc0746-c4ed-4b34-991b-3b63f96614e6" Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.631621 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-nkg8q" event={"ID":"56043c1e-adb8-4d37-9067-cb28a5103fdd","Type":"ContainerStarted","Data":"ce28545d2eea7381c72810a29cb87203bd0004062dea6acb9ec3bc9e4e78b8df"} Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.648692 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-lcbxj" event={"ID":"237d24ba-9f0a-4d48-b416-7a7aa7692bbf","Type":"ContainerStarted","Data":"69e3f63bf078a82cdb14e6fb8582686a722bcf410d4bc672471585c9b6d0777e"} Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.655420 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-g4s6z" event={"ID":"6aea9dc6-2e85-470c-9897-064111a4661e","Type":"ContainerStarted","Data":"8ccd3ccbe56095ea916360b98f105906eb5d0278245ae54724a5dc76df9b8b22"} Nov 21 
10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.657629 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-t6wbg" event={"ID":"9304bb08-b481-42e5-89ed-f215f0102662","Type":"ContainerStarted","Data":"282ebbc2ba79998ee5a86410035d8dc9a85ac30eead28fe74721c6acd4313b64"} Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.663562 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-92vm8" event={"ID":"43051777-1676-4b54-8ddb-ba534e3d1b51","Type":"ContainerStarted","Data":"eaff8f1d1a6b7a8d604544d6d1b892d7595d9f74c1057cdb268a0bbfdfb86b69"} Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.669326 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-chz4z" event={"ID":"2a54b288-8941-4f22-bfe1-d99802311c60","Type":"ContainerStarted","Data":"e92f34aa064a7f92b99f838ddfbf4f0ac426eb8eebed46d26279438f836be7da"} Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.669369 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-chz4z" event={"ID":"2a54b288-8941-4f22-bfe1-d99802311c60","Type":"ContainerStarted","Data":"1513e1892442ee7a9fd05861353cda1891162cd237d89b5f98f8a3ff21bab83f"} Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.674498 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-chz4z" podUID="2a54b288-8941-4f22-bfe1-d99802311c60" Nov 21 10:01:04 crc kubenswrapper[4972]: I1121 10:01:04.679801 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8464cf66df-66m2r" event={"ID":"12710259-8a41-4aa6-8842-54b6ac9aad22","Type":"ContainerStarted","Data":"03b2e72c3be91ac588bae4c0a9dfe348cbd0cac0638bb92c91afb3a202fde597"} Nov 21 10:01:04 crc kubenswrapper[4972]: E1121 10:01:04.682467 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-8464cf66df-66m2r" podUID="12710259-8a41-4aa6-8842-54b6ac9aad22" Nov 21 10:01:05 crc kubenswrapper[4972]: I1121 10:01:05.700374 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-b2644" event={"ID":"220363ab-e1b7-44ee-962d-e2de79b22ab6","Type":"ContainerStarted","Data":"592f1e9b2ff40ad079f1bf7ff646a191da01012c570e23b6a0d294f3168edc0b"} Nov 21 10:01:05 crc kubenswrapper[4972]: E1121 10:01:05.703524 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-b2644" podUID="220363ab-e1b7-44ee-962d-e2de79b22ab6" Nov 21 10:01:05 crc kubenswrapper[4972]: I1121 
10:01:05.704205 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-ddgk9" event={"ID":"2bfc0746-c4ed-4b34-991b-3b63f96614e6","Type":"ContainerStarted","Data":"813e2f5c3d105831937276ab354c63c32dc072c5d30ad6330641f68f4f2fb2e0"} Nov 21 10:01:05 crc kubenswrapper[4972]: I1121 10:01:05.706321 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nsdgb" event={"ID":"a354b71d-0aa7-4cac-8022-90de180af97d","Type":"ContainerStarted","Data":"edbeea80c36239826e4cfa4cd776adcfb7ec7e2e4aa33a96bde2838e3ac47930"} Nov 21 10:01:05 crc kubenswrapper[4972]: I1121 10:01:05.715291 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8464cf66df-66m2r" event={"ID":"12710259-8a41-4aa6-8842-54b6ac9aad22","Type":"ContainerStarted","Data":"a690fe866f51e11290e86fb6f1e9f2093482e0bad578087dcfdcdaf2e1d8c045"} Nov 21 10:01:05 crc kubenswrapper[4972]: E1121 10:01:05.718053 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\"" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-ddgk9" podUID="2bfc0746-c4ed-4b34-991b-3b63f96614e6" Nov 21 10:01:05 crc kubenswrapper[4972]: I1121 10:01:05.719145 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk" event={"ID":"65d1f303-dffc-4de3-8192-4c74f4c33750","Type":"ContainerStarted","Data":"810c0462913a96869fc9ad8403b6201909b69079d2b4323eb080fbff28efb9df"} Nov 21 10:01:05 crc kubenswrapper[4972]: E1121 10:01:05.720293 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nsdgb" podUID="a354b71d-0aa7-4cac-8022-90de180af97d" Nov 21 10:01:05 crc kubenswrapper[4972]: E1121 10:01:05.720348 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-8464cf66df-66m2r" podUID="12710259-8a41-4aa6-8842-54b6ac9aad22" Nov 21 10:01:05 crc kubenswrapper[4972]: E1121 10:01:05.730811 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk" podUID="65d1f303-dffc-4de3-8192-4c74f4c33750" Nov 21 10:01:05 crc kubenswrapper[4972]: I1121 10:01:05.733194 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7755d5f8cc-z86dh" 
event={"ID":"06f6d748-6a75-4e4a-b642-790efd655fac","Type":"ContainerStarted","Data":"a20a287bf4b0c599d59be32d617d1c81213d7bf58d890d695a00790ab80e5743"} Nov 21 10:01:05 crc kubenswrapper[4972]: I1121 10:01:05.733260 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7755d5f8cc-z86dh" event={"ID":"06f6d748-6a75-4e4a-b642-790efd655fac","Type":"ContainerStarted","Data":"e03ad4d95db0940bbcb20c63c32168978c3472c757f0b53c8694b0feae7b35ce"} Nov 21 10:01:05 crc kubenswrapper[4972]: I1121 10:01:05.733274 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7755d5f8cc-z86dh" event={"ID":"06f6d748-6a75-4e4a-b642-790efd655fac","Type":"ContainerStarted","Data":"760e97c6299e7868666c1025a90f35b579c79e10b80285f97ef95b899caaa730"} Nov 21 10:01:05 crc kubenswrapper[4972]: E1121 10:01:05.735576 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2kq6z" podUID="3efec5f8-69ac-4971-a9e5-2c53352cabed" Nov 21 10:01:05 crc kubenswrapper[4972]: E1121 10:01:05.735654 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xsp8k" podUID="a4700bb1-902c-40bc-b02f-c8efe4893180" Nov 21 10:01:05 crc kubenswrapper[4972]: E1121 10:01:05.735696 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-7cn8g" podUID="40fab427-66b5-4519-b416-0d5f253e2c10" Nov 21 10:01:05 crc kubenswrapper[4972]: E1121 10:01:05.735740 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-chz4z" podUID="2a54b288-8941-4f22-bfe1-d99802311c60" Nov 21 10:01:05 crc kubenswrapper[4972]: E1121 10:01:05.744563 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-llzpm" podUID="902981fa-5b52-4436-b685-18372dd43999" Nov 21 10:01:05 crc kubenswrapper[4972]: I1121 10:01:05.967696 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7755d5f8cc-z86dh" podStartSLOduration=3.967658743 podStartE2EDuration="3.967658743s" podCreationTimestamp="2025-11-21 10:01:02 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:01:05.964630833 +0000 UTC m=+1211.073773341" watchObservedRunningTime="2025-11-21 10:01:05.967658743 +0000 UTC m=+1211.076801261" Nov 21 10:01:06 crc kubenswrapper[4972]: I1121 10:01:06.739360 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7755d5f8cc-z86dh" Nov 21 10:01:06 crc kubenswrapper[4972]: E1121 10:01:06.741717 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nsdgb" podUID="a354b71d-0aa7-4cac-8022-90de180af97d" Nov 21 10:01:06 crc kubenswrapper[4972]: E1121 10:01:06.741889 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk" podUID="65d1f303-dffc-4de3-8192-4c74f4c33750" Nov 21 10:01:06 crc kubenswrapper[4972]: E1121 10:01:06.742074 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-8464cf66df-66m2r" podUID="12710259-8a41-4aa6-8842-54b6ac9aad22" Nov 21 10:01:06 crc kubenswrapper[4972]: E1121 10:01:06.742106 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\"" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-ddgk9" podUID="2bfc0746-c4ed-4b34-991b-3b63f96614e6" Nov 21 10:01:06 crc kubenswrapper[4972]: E1121 10:01:06.742433 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-b2644" podUID="220363ab-e1b7-44ee-962d-e2de79b22ab6" Nov 21 10:01:13 crc kubenswrapper[4972]: I1121 10:01:13.821388 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7755d5f8cc-z86dh" Nov 21 10:01:15 crc kubenswrapper[4972]: I1121 10:01:15.825681 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-fdwgn" event={"ID":"1d892fbc-e66c-405d-9d42-e306d9b652b8","Type":"ContainerStarted","Data":"d96ead5a0c3301d20660017da7dd5a280526e3d3d4f93677a2871275dd28332d"} Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.865207 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-92vm8" event={"ID":"43051777-1676-4b54-8ddb-ba534e3d1b51","Type":"ContainerStarted","Data":"4c0fe7d77fe717fddc170ad2aad4d664dd3262061ceaeb9ca732fd6b9f36a984"} Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.887420 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-lx4b9" event={"ID":"e8805826-cc65-49b4-ace4-44f25c209b4e","Type":"ContainerStarted","Data":"ab00cc126690fa1e966ae23f57e97043997cae693eb63f48e9e7bae7aaeb62d3"} Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.887461 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-lx4b9" event={"ID":"e8805826-cc65-49b4-ace4-44f25c209b4e","Type":"ContainerStarted","Data":"7f61c3fe49f403d2eb42bf1bb37e9a963a88fc0e05b7486b83fe7c1a89291405"} Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.888059 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-lx4b9" Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.896555 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-vhv8d" event={"ID":"647ff257-bc23-44e5-9397-2696f04520d4","Type":"ContainerStarted","Data":"a59b643bb211264eea9c692e5ad0c27f923a32bfff6167d90291a64c3d3539e1"} Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.896605 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-vhv8d" event={"ID":"647ff257-bc23-44e5-9397-2696f04520d4","Type":"ContainerStarted","Data":"a2434a9d5a39bfb5fae6de61071ecfc1566fb81d166fbb527caa11a994adcf27"} Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.897388 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-vhv8d" Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.912653 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-lx4b9" podStartSLOduration=4.416964241 podStartE2EDuration="15.91263975s" podCreationTimestamp="2025-11-21 10:01:01 +0000 UTC" firstStartedPulling="2025-11-21 10:01:03.957795098 +0000 UTC m=+1209.066937596" lastFinishedPulling="2025-11-21 10:01:15.453470617 +0000 UTC m=+1220.562613105" observedRunningTime="2025-11-21 10:01:16.912150967 +0000 UTC m=+1222.021293475" watchObservedRunningTime="2025-11-21 10:01:16.91263975 +0000 UTC m=+1222.021782248" Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.913659 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-lcbxj" event={"ID":"237d24ba-9f0a-4d48-b416-7a7aa7692bbf","Type":"ContainerStarted","Data":"2286f2a551606421cacb148633936186133b6c9d76fa701f2573fd6da3dd6968"} Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.913710 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-lcbxj" event={"ID":"237d24ba-9f0a-4d48-b416-7a7aa7692bbf","Type":"ContainerStarted","Data":"533e3f09c54964302adea75c93832e19dcd642229798e9c0d273a02de45074a8"} Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.913959 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-lcbxj" Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.916958 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-jbvkg" event={"ID":"4d93267d-02e5-4045-aa0a-5edf3730b4cf","Type":"ContainerStarted","Data":"74c92d2c9acd08c58cb73eebfa96d8d42e18945d256b59282942eb26b5523c3d"} Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.916997 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-jbvkg" event={"ID":"4d93267d-02e5-4045-aa0a-5edf3730b4cf","Type":"ContainerStarted","Data":"d980b691326280d5a92751f1d888e7b62194f1955cfffad02922248e2b1e91cb"} Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.917455 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-jbvkg" Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.919439 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-7dh7w" event={"ID":"6bbfd974-9b27-4143-9c5c-031c5a4f28b2","Type":"ContainerStarted","Data":"475857350ecfc4b4a405e334c1a884b49a5eb66c046b214be0674ba2170ab02d"} Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.919805 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-7dh7w" Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.920942 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-bxcdb" event={"ID":"7a574f0d-6d00-487d-af74-19d886ccc174","Type":"ContainerStarted","Data":"b5cbd021c6308f2371a974a79b1ea24d874d8e2d9eef3a32ea732a5e58ceaefd"} Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.920964 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-bxcdb" event={"ID":"7a574f0d-6d00-487d-af74-19d886ccc174","Type":"ContainerStarted","Data":"1d10caab9aa7e5768afad22dacfdead957feaf96e3ed44ea1cf3087bedf0b919"} Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.921329 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-bxcdb" Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.925337 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-t6wbg" event={"ID":"9304bb08-b481-42e5-89ed-f215f0102662","Type":"ContainerStarted","Data":"eff5c7435a9d8230ad8687d15e330a6c6cbee2fdb53254cbae581eb1ee2ad216"} Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.925364 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-t6wbg" event={"ID":"9304bb08-b481-42e5-89ed-f215f0102662","Type":"ContainerStarted","Data":"bcb4542dce2b53528329da25f9f4561d79e4fdfaf6b69178049ff0258c1d2e1a"} Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.925729 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-t6wbg" Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.926920 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-nkg8q" event={"ID":"56043c1e-adb8-4d37-9067-cb28a5103fdd","Type":"ContainerStarted","Data":"1f3e8464d7d0af668bb575888e8f8f49ea4df308c0134cd4c7fa8eb6843105c1"} Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.926944 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-nkg8q" event={"ID":"56043c1e-adb8-4d37-9067-cb28a5103fdd","Type":"ContainerStarted","Data":"73cdbf3bd21d15b30de1fd9f41179a3eaf4c6ac692de82203e35851e3f2e826e"} Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.927299 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-nkg8q" Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.928337 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-fdwgn" event={"ID":"1d892fbc-e66c-405d-9d42-e306d9b652b8","Type":"ContainerStarted","Data":"6f89d3e1d1091430c981c3ff3e48d89b96bd8867f901b4ef66a303b9073f340e"} Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.928679 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-fdwgn" Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.929720 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-g4s6z" event={"ID":"6aea9dc6-2e85-470c-9897-064111a4661e","Type":"ContainerStarted","Data":"c07703597c503ba85facd1a33641515e017270240c54e5cba4539b6322f1e110"} Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.929740 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-g4s6z" event={"ID":"6aea9dc6-2e85-470c-9897-064111a4661e","Type":"ContainerStarted","Data":"0425e6eb5082e118f750e34ca097c2bf66c31c13d3ffca3698c6637dbd305f5f"} Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.930155 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-g4s6z" Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.932990 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-n4vrp" event={"ID":"0828ae93-40ce-46ac-a769-f5e4e735b186","Type":"ContainerStarted","Data":"707d69aa2bffa1db87f4e66e2533d3d275b72a823021ec5501d7eb89de1141dd"} Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.945072 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-n4vrp" Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.950524 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-vhv8d" podStartSLOduration=3.447549138 podStartE2EDuration="15.950490997s" podCreationTimestamp="2025-11-21 10:01:01 +0000 UTC" firstStartedPulling="2025-11-21 10:01:02.960176485 +0000 UTC m=+1208.069318983" lastFinishedPulling="2025-11-21 10:01:15.463118344 +0000 UTC m=+1220.572260842" observedRunningTime="2025-11-21 10:01:16.945152845 +0000 UTC m=+1222.054295353" watchObservedRunningTime="2025-11-21 10:01:16.950490997 +0000 UTC m=+1222.059633495" Nov 21 10:01:16 crc kubenswrapper[4972]: I1121 10:01:16.993553 4972 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-g4s6z" podStartSLOduration=4.416856388 podStartE2EDuration="15.993530872s" podCreationTimestamp="2025-11-21 10:01:01 +0000 UTC" firstStartedPulling="2025-11-21 10:01:03.944787032 +0000 UTC m=+1209.053929530" lastFinishedPulling="2025-11-21 10:01:15.521461516 +0000 UTC m=+1220.630604014" observedRunningTime="2025-11-21 10:01:16.982031606 +0000 UTC m=+1222.091174114" watchObservedRunningTime="2025-11-21 10:01:16.993530872 +0000 UTC m=+1222.102673370" Nov 21 10:01:17 crc kubenswrapper[4972]: I1121 10:01:17.018806 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-nkg8q" podStartSLOduration=4.478264672 podStartE2EDuration="16.018788154s" podCreationTimestamp="2025-11-21 10:01:01 +0000 UTC" firstStartedPulling="2025-11-21 10:01:03.915470842 +0000 UTC m=+1209.024613330" lastFinishedPulling="2025-11-21 10:01:15.455994314 +0000 UTC m=+1220.565136812" observedRunningTime="2025-11-21 10:01:17.016126733 +0000 UTC m=+1222.125269261" watchObservedRunningTime="2025-11-21 10:01:17.018788154 +0000 UTC m=+1222.127930652" Nov 21 10:01:17 crc kubenswrapper[4972]: I1121 10:01:17.045423 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-bxcdb" podStartSLOduration=3.539024409 podStartE2EDuration="15.045401822s" podCreationTimestamp="2025-11-21 10:01:02 +0000 UTC" firstStartedPulling="2025-11-21 10:01:03.949385345 +0000 UTC m=+1209.058527843" lastFinishedPulling="2025-11-21 10:01:15.455762758 +0000 UTC m=+1220.564905256" observedRunningTime="2025-11-21 10:01:17.040140362 +0000 UTC m=+1222.149282860" watchObservedRunningTime="2025-11-21 10:01:17.045401822 +0000 UTC m=+1222.154544330" Nov 21 10:01:17 crc kubenswrapper[4972]: I1121 10:01:17.068851 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-7dh7w" podStartSLOduration=4.501050648 podStartE2EDuration="16.068823125s" podCreationTimestamp="2025-11-21 10:01:01 +0000 UTC" firstStartedPulling="2025-11-21 10:01:03.952857697 +0000 UTC m=+1209.062000195" lastFinishedPulling="2025-11-21 10:01:15.520630174 +0000 UTC m=+1220.629772672" observedRunningTime="2025-11-21 10:01:17.066431472 +0000 UTC m=+1222.175573990" watchObservedRunningTime="2025-11-21 10:01:17.068823125 +0000 UTC m=+1222.177965623" Nov 21 10:01:17 crc kubenswrapper[4972]: I1121 10:01:17.083924 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-jbvkg" podStartSLOduration=4.489408989 podStartE2EDuration="16.083908797s" podCreationTimestamp="2025-11-21 10:01:01 +0000 UTC" firstStartedPulling="2025-11-21 10:01:03.912929535 +0000 UTC m=+1209.022072033" lastFinishedPulling="2025-11-21 10:01:15.507429343 +0000 UTC m=+1220.616571841" observedRunningTime="2025-11-21 10:01:17.079666934 +0000 UTC m=+1222.188809442" watchObservedRunningTime="2025-11-21 10:01:17.083908797 +0000 UTC m=+1222.193051295" Nov 21 10:01:17 crc kubenswrapper[4972]: I1121 10:01:17.113952 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-n4vrp" podStartSLOduration=4.563967543 podStartE2EDuration="16.113909805s" podCreationTimestamp="2025-11-21 
10:01:01 +0000 UTC" firstStartedPulling="2025-11-21 10:01:03.913171391 +0000 UTC m=+1209.022313889" lastFinishedPulling="2025-11-21 10:01:15.463113633 +0000 UTC m=+1220.572256151" observedRunningTime="2025-11-21 10:01:17.108398038 +0000 UTC m=+1222.217540546" watchObservedRunningTime="2025-11-21 10:01:17.113909805 +0000 UTC m=+1222.223052303" Nov 21 10:01:17 crc kubenswrapper[4972]: I1121 10:01:17.139447 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-fdwgn" podStartSLOduration=4.6204535060000005 podStartE2EDuration="16.139427924s" podCreationTimestamp="2025-11-21 10:01:01 +0000 UTC" firstStartedPulling="2025-11-21 10:01:03.937217771 +0000 UTC m=+1209.046360269" lastFinishedPulling="2025-11-21 10:01:15.456192179 +0000 UTC m=+1220.565334687" observedRunningTime="2025-11-21 10:01:17.137007139 +0000 UTC m=+1222.246149657" watchObservedRunningTime="2025-11-21 10:01:17.139427924 +0000 UTC m=+1222.248570422" Nov 21 10:01:17 crc kubenswrapper[4972]: I1121 10:01:17.169966 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-t6wbg" podStartSLOduration=4.523931908 podStartE2EDuration="16.169946396s" podCreationTimestamp="2025-11-21 10:01:01 +0000 UTC" firstStartedPulling="2025-11-21 10:01:03.887636082 +0000 UTC m=+1208.996778580" lastFinishedPulling="2025-11-21 10:01:15.53365057 +0000 UTC m=+1220.642793068" observedRunningTime="2025-11-21 10:01:17.164710526 +0000 UTC m=+1222.273853034" watchObservedRunningTime="2025-11-21 10:01:17.169946396 +0000 UTC m=+1222.279088894" Nov 21 10:01:17 crc kubenswrapper[4972]: I1121 10:01:17.191445 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-lcbxj" podStartSLOduration=3.653642328 podStartE2EDuration="15.191426567s" podCreationTimestamp="2025-11-21 10:01:02 +0000 UTC" firstStartedPulling="2025-11-21 10:01:03.970438825 +0000 UTC m=+1209.079581323" lastFinishedPulling="2025-11-21 10:01:15.508223064 +0000 UTC m=+1220.617365562" observedRunningTime="2025-11-21 10:01:17.191034827 +0000 UTC m=+1222.300177345" watchObservedRunningTime="2025-11-21 10:01:17.191426567 +0000 UTC m=+1222.300569065" Nov 21 10:01:17 crc kubenswrapper[4972]: I1121 10:01:17.944038 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-n4vrp" event={"ID":"0828ae93-40ce-46ac-a769-f5e4e735b186","Type":"ContainerStarted","Data":"cd40394dfe79f5b5401cf9956996d90f65f3b9586c37e96ce6b7bc13f08e220d"} Nov 21 10:01:17 crc kubenswrapper[4972]: I1121 10:01:17.946461 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-92vm8" event={"ID":"43051777-1676-4b54-8ddb-ba534e3d1b51","Type":"ContainerStarted","Data":"c0b48a8c9ee82d6029d65812f88a3c601a3fbcb155e5ff7ddca306b4c54bce8c"} Nov 21 10:01:17 crc kubenswrapper[4972]: I1121 10:01:17.947082 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-92vm8" Nov 21 10:01:17 crc kubenswrapper[4972]: I1121 10:01:17.948793 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-7dh7w" 
event={"ID":"6bbfd974-9b27-4143-9c5c-031c5a4f28b2","Type":"ContainerStarted","Data":"0b382ac884eb1c73254058e546659e063fcf41411e7121aec4ced15fe802c69a"} Nov 21 10:01:17 crc kubenswrapper[4972]: I1121 10:01:17.968877 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-92vm8" podStartSLOduration=4.645037676 podStartE2EDuration="15.968856612s" podCreationTimestamp="2025-11-21 10:01:02 +0000 UTC" firstStartedPulling="2025-11-21 10:01:04.18375359 +0000 UTC m=+1209.292896088" lastFinishedPulling="2025-11-21 10:01:15.507572526 +0000 UTC m=+1220.616715024" observedRunningTime="2025-11-21 10:01:17.961232079 +0000 UTC m=+1223.070374577" watchObservedRunningTime="2025-11-21 10:01:17.968856612 +0000 UTC m=+1223.077999110" Nov 21 10:01:18 crc kubenswrapper[4972]: I1121 10:01:18.959644 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-chz4z" event={"ID":"2a54b288-8941-4f22-bfe1-d99802311c60","Type":"ContainerStarted","Data":"0585c33a1758261647ed2a5d87d5bb51f623cfe759af3af0036ebf0123a69417"} Nov 21 10:01:18 crc kubenswrapper[4972]: I1121 10:01:18.981389 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-chz4z" podStartSLOduration=3.6028002900000002 podStartE2EDuration="17.981371422s" podCreationTimestamp="2025-11-21 10:01:01 +0000 UTC" firstStartedPulling="2025-11-21 10:01:04.001483881 +0000 UTC m=+1209.110626379" lastFinishedPulling="2025-11-21 10:01:18.380055013 +0000 UTC m=+1223.489197511" observedRunningTime="2025-11-21 10:01:18.97415688 +0000 UTC m=+1224.083299378" watchObservedRunningTime="2025-11-21 10:01:18.981371422 +0000 UTC m=+1224.090513920" Nov 21 10:01:20 crc kubenswrapper[4972]: I1121 10:01:20.979490 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-ddgk9" event={"ID":"2bfc0746-c4ed-4b34-991b-3b63f96614e6","Type":"ContainerStarted","Data":"aec54a4893b7c5808f7f2e4bce158c4d756279d3bda7249408b8a8b1fd47b80c"} Nov 21 10:01:20 crc kubenswrapper[4972]: I1121 10:01:20.980873 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-ddgk9" Nov 21 10:01:20 crc kubenswrapper[4972]: I1121 10:01:20.983752 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8464cf66df-66m2r" event={"ID":"12710259-8a41-4aa6-8842-54b6ac9aad22","Type":"ContainerStarted","Data":"6685de9c627e4b54f81abd214aa5ee7dd4e2ab98113e0b75889fb9942aac3f94"} Nov 21 10:01:20 crc kubenswrapper[4972]: I1121 10:01:20.984228 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-8464cf66df-66m2r" Nov 21 10:01:21 crc kubenswrapper[4972]: I1121 10:01:21.000086 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-ddgk9" podStartSLOduration=4.105367462 podStartE2EDuration="20.000070802s" podCreationTimestamp="2025-11-21 10:01:01 +0000 UTC" firstStartedPulling="2025-11-21 10:01:04.206680461 +0000 UTC m=+1209.315822959" lastFinishedPulling="2025-11-21 10:01:20.101383801 +0000 UTC m=+1225.210526299" observedRunningTime="2025-11-21 10:01:20.994155994 +0000 UTC m=+1226.103298512" watchObservedRunningTime="2025-11-21 
10:01:21.000070802 +0000 UTC m=+1226.109213310" Nov 21 10:01:21 crc kubenswrapper[4972]: I1121 10:01:21.023411 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-8464cf66df-66m2r" podStartSLOduration=3.098241122 podStartE2EDuration="19.023394902s" podCreationTimestamp="2025-11-21 10:01:02 +0000 UTC" firstStartedPulling="2025-11-21 10:01:04.206323881 +0000 UTC m=+1209.315466379" lastFinishedPulling="2025-11-21 10:01:20.131477671 +0000 UTC m=+1225.240620159" observedRunningTime="2025-11-21 10:01:21.019428207 +0000 UTC m=+1226.128570715" watchObservedRunningTime="2025-11-21 10:01:21.023394902 +0000 UTC m=+1226.132537400" Nov 21 10:01:22 crc kubenswrapper[4972]: I1121 10:01:22.044755 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-g4s6z" Nov 21 10:01:22 crc kubenswrapper[4972]: I1121 10:01:22.063799 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-vhv8d" Nov 21 10:01:22 crc kubenswrapper[4972]: I1121 10:01:22.072564 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-t6wbg" Nov 21 10:01:22 crc kubenswrapper[4972]: I1121 10:01:22.125928 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-jbvkg" Nov 21 10:01:22 crc kubenswrapper[4972]: I1121 10:01:22.141886 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-fdwgn" Nov 21 10:01:22 crc kubenswrapper[4972]: I1121 10:01:22.226254 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-n4vrp" Nov 21 10:01:22 crc kubenswrapper[4972]: I1121 10:01:22.405405 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-nkg8q" Nov 21 10:01:22 crc kubenswrapper[4972]: I1121 10:01:22.487980 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-chz4z" Nov 21 10:01:22 crc kubenswrapper[4972]: I1121 10:01:22.509479 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-7dh7w" Nov 21 10:01:22 crc kubenswrapper[4972]: I1121 10:01:22.569740 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-lcbxj" Nov 21 10:01:22 crc kubenswrapper[4972]: I1121 10:01:22.696435 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-bxcdb" Nov 21 10:01:22 crc kubenswrapper[4972]: I1121 10:01:22.784187 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-lx4b9" Nov 21 10:01:23 crc kubenswrapper[4972]: I1121 10:01:23.039645 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-92vm8" Nov 21 10:01:29 crc kubenswrapper[4972]: I1121 
10:01:29.043344 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nsdgb" event={"ID":"a354b71d-0aa7-4cac-8022-90de180af97d","Type":"ContainerStarted","Data":"5bf8dad9446e4b5b2518892b0b81454910e163cd001acda46e6ca24d8306b659"} Nov 21 10:01:29 crc kubenswrapper[4972]: I1121 10:01:29.045520 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk" event={"ID":"65d1f303-dffc-4de3-8192-4c74f4c33750","Type":"ContainerStarted","Data":"416ed7634ee16ab357a1f4c3686433854f368d3828021a61036d0c94f1d382a5"} Nov 21 10:01:29 crc kubenswrapper[4972]: I1121 10:01:29.047233 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-b2644" event={"ID":"220363ab-e1b7-44ee-962d-e2de79b22ab6","Type":"ContainerStarted","Data":"137c61da791dd426ae8875d5d9a951c284e05242002b5efc9b635e4aaed64dce"} Nov 21 10:01:29 crc kubenswrapper[4972]: I1121 10:01:29.048680 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xsp8k" event={"ID":"a4700bb1-902c-40bc-b02f-c8efe4893180","Type":"ContainerStarted","Data":"bc78cf0b49eb04e7ebeb26053f4ded9a4f9698474a55916fb6c2fd3962c62bb9"} Nov 21 10:01:29 crc kubenswrapper[4972]: I1121 10:01:29.050474 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-llzpm" event={"ID":"902981fa-5b52-4436-b685-18372dd43999","Type":"ContainerStarted","Data":"8d9828b7082c7ea0fc36c8a5a4bb9b2ec4ae3be2a069b67f9159d63bdd7c8720"} Nov 21 10:01:29 crc kubenswrapper[4972]: I1121 10:01:29.052187 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2kq6z" event={"ID":"3efec5f8-69ac-4971-a9e5-2c53352cabed","Type":"ContainerStarted","Data":"1669382f71bfbf08c84ee419be91992a663e7221f362ec40ecaccac74b0edd73"} Nov 21 10:01:30 crc kubenswrapper[4972]: I1121 10:01:30.059864 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-7cn8g" event={"ID":"40fab427-66b5-4519-b416-0d5f253e2c10","Type":"ContainerStarted","Data":"b39eae126847b2ebbe304800e9ff4bae0fc357b3c6c06dbe88ac4af3da1fb3d0"} Nov 21 10:01:30 crc kubenswrapper[4972]: I1121 10:01:30.059990 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-b2644" Nov 21 10:01:30 crc kubenswrapper[4972]: I1121 10:01:30.078884 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-b2644" podStartSLOduration=4.8910027419999995 podStartE2EDuration="28.078862515s" podCreationTimestamp="2025-11-21 10:01:02 +0000 UTC" firstStartedPulling="2025-11-21 10:01:04.228330217 +0000 UTC m=+1209.337472715" lastFinishedPulling="2025-11-21 10:01:27.41618998 +0000 UTC m=+1232.525332488" observedRunningTime="2025-11-21 10:01:30.072773062 +0000 UTC m=+1235.181915580" watchObservedRunningTime="2025-11-21 10:01:30.078862515 +0000 UTC m=+1235.188005013" Nov 21 10:01:32 crc kubenswrapper[4972]: I1121 10:01:32.490272 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-chz4z" Nov 21 10:01:32 crc kubenswrapper[4972]: I1121 
10:01:32.546293 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-ddgk9" Nov 21 10:01:32 crc kubenswrapper[4972]: I1121 10:01:32.837364 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-8464cf66df-66m2r" Nov 21 10:01:34 crc kubenswrapper[4972]: I1121 10:01:34.087663 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk" Nov 21 10:01:34 crc kubenswrapper[4972]: I1121 10:01:34.088098 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2kq6z" Nov 21 10:01:34 crc kubenswrapper[4972]: I1121 10:01:34.090895 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2kq6z" Nov 21 10:01:34 crc kubenswrapper[4972]: I1121 10:01:34.093761 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk" Nov 21 10:01:34 crc kubenswrapper[4972]: I1121 10:01:34.108776 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2kq6z" podStartSLOduration=8.490095123 podStartE2EDuration="32.10875923s" podCreationTimestamp="2025-11-21 10:01:02 +0000 UTC" firstStartedPulling="2025-11-21 10:01:03.97099525 +0000 UTC m=+1209.080137748" lastFinishedPulling="2025-11-21 10:01:27.589659357 +0000 UTC m=+1232.698801855" observedRunningTime="2025-11-21 10:01:34.103124795 +0000 UTC m=+1239.212267293" watchObservedRunningTime="2025-11-21 10:01:34.10875923 +0000 UTC m=+1239.217901728" Nov 21 10:01:34 crc kubenswrapper[4972]: I1121 10:01:34.130785 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk" podStartSLOduration=8.746853756 podStartE2EDuration="32.130767906s" podCreationTimestamp="2025-11-21 10:01:02 +0000 UTC" firstStartedPulling="2025-11-21 10:01:04.19348996 +0000 UTC m=+1209.302632458" lastFinishedPulling="2025-11-21 10:01:27.57740407 +0000 UTC m=+1232.686546608" observedRunningTime="2025-11-21 10:01:34.124350198 +0000 UTC m=+1239.233492716" watchObservedRunningTime="2025-11-21 10:01:34.130767906 +0000 UTC m=+1239.239910404" Nov 21 10:01:35 crc kubenswrapper[4972]: I1121 10:01:35.094751 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nsdgb" Nov 21 10:01:35 crc kubenswrapper[4972]: I1121 10:01:35.101077 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nsdgb" Nov 21 10:01:35 crc kubenswrapper[4972]: I1121 10:01:35.117078 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xsp8k" podStartSLOduration=9.81099964 podStartE2EDuration="33.117059946s" podCreationTimestamp="2025-11-21 10:01:02 +0000 UTC" firstStartedPulling="2025-11-21 10:01:04.238467876 +0000 UTC m=+1209.347610374" lastFinishedPulling="2025-11-21 10:01:27.544528172 +0000 UTC m=+1232.653670680" observedRunningTime="2025-11-21 10:01:35.113224895 +0000 
UTC m=+1240.222367393" watchObservedRunningTime="2025-11-21 10:01:35.117059946 +0000 UTC m=+1240.226202444" Nov 21 10:01:35 crc kubenswrapper[4972]: I1121 10:01:35.145495 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nsdgb" podStartSLOduration=9.787444864 podStartE2EDuration="33.145462071s" podCreationTimestamp="2025-11-21 10:01:02 +0000 UTC" firstStartedPulling="2025-11-21 10:01:04.220018415 +0000 UTC m=+1209.329160923" lastFinishedPulling="2025-11-21 10:01:27.578035592 +0000 UTC m=+1232.687178130" observedRunningTime="2025-11-21 10:01:35.133675003 +0000 UTC m=+1240.242817521" watchObservedRunningTime="2025-11-21 10:01:35.145462071 +0000 UTC m=+1240.254604609" Nov 21 10:01:35 crc kubenswrapper[4972]: I1121 10:01:35.160293 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-7cn8g" podStartSLOduration=10.7448056 podStartE2EDuration="34.160266415s" podCreationTimestamp="2025-11-21 10:01:01 +0000 UTC" firstStartedPulling="2025-11-21 10:01:04.001294916 +0000 UTC m=+1209.110437414" lastFinishedPulling="2025-11-21 10:01:27.416755731 +0000 UTC m=+1232.525898229" observedRunningTime="2025-11-21 10:01:35.153847976 +0000 UTC m=+1240.262990514" watchObservedRunningTime="2025-11-21 10:01:35.160266415 +0000 UTC m=+1240.269408943" Nov 21 10:01:35 crc kubenswrapper[4972]: I1121 10:01:35.214960 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-llzpm" podStartSLOduration=10.67047705 podStartE2EDuration="34.214937975s" podCreationTimestamp="2025-11-21 10:01:01 +0000 UTC" firstStartedPulling="2025-11-21 10:01:04.000534676 +0000 UTC m=+1209.109677184" lastFinishedPulling="2025-11-21 10:01:27.544995571 +0000 UTC m=+1232.654138109" observedRunningTime="2025-11-21 10:01:35.202015857 +0000 UTC m=+1240.311158375" watchObservedRunningTime="2025-11-21 10:01:35.214937975 +0000 UTC m=+1240.324080493" Nov 21 10:01:42 crc kubenswrapper[4972]: I1121 10:01:42.204876 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-llzpm" Nov 21 10:01:42 crc kubenswrapper[4972]: I1121 10:01:42.207320 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-llzpm" Nov 21 10:01:42 crc kubenswrapper[4972]: I1121 10:01:42.234010 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-7cn8g" Nov 21 10:01:42 crc kubenswrapper[4972]: I1121 10:01:42.236891 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-7cn8g" Nov 21 10:01:42 crc kubenswrapper[4972]: I1121 10:01:42.836950 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-b2644" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.168247 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d58585b49-w5l94"] Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.169887 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d58585b49-w5l94" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.173533 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-gqbgz" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.173693 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.173939 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.174055 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.194196 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d58585b49-w5l94"] Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.218540 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54911f74-850d-4a58-8bc2-56381021ce79-config\") pod \"dnsmasq-dns-5d58585b49-w5l94\" (UID: \"54911f74-850d-4a58-8bc2-56381021ce79\") " pod="openstack/dnsmasq-dns-5d58585b49-w5l94" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.218652 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4shc\" (UniqueName: \"kubernetes.io/projected/54911f74-850d-4a58-8bc2-56381021ce79-kube-api-access-z4shc\") pod \"dnsmasq-dns-5d58585b49-w5l94\" (UID: \"54911f74-850d-4a58-8bc2-56381021ce79\") " pod="openstack/dnsmasq-dns-5d58585b49-w5l94" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.263814 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77966f9df5-gnnwr"] Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.265401 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77966f9df5-gnnwr" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.268269 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.274070 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77966f9df5-gnnwr"] Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.319786 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t2q8\" (UniqueName: \"kubernetes.io/projected/9efc9658-6b41-4b20-9c12-375ca7133c85-kube-api-access-4t2q8\") pod \"dnsmasq-dns-77966f9df5-gnnwr\" (UID: \"9efc9658-6b41-4b20-9c12-375ca7133c85\") " pod="openstack/dnsmasq-dns-77966f9df5-gnnwr" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.319879 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efc9658-6b41-4b20-9c12-375ca7133c85-config\") pod \"dnsmasq-dns-77966f9df5-gnnwr\" (UID: \"9efc9658-6b41-4b20-9c12-375ca7133c85\") " pod="openstack/dnsmasq-dns-77966f9df5-gnnwr" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.319930 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4shc\" (UniqueName: \"kubernetes.io/projected/54911f74-850d-4a58-8bc2-56381021ce79-kube-api-access-z4shc\") pod \"dnsmasq-dns-5d58585b49-w5l94\" (UID: \"54911f74-850d-4a58-8bc2-56381021ce79\") " pod="openstack/dnsmasq-dns-5d58585b49-w5l94" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.320222 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9efc9658-6b41-4b20-9c12-375ca7133c85-dns-svc\") pod \"dnsmasq-dns-77966f9df5-gnnwr\" (UID: \"9efc9658-6b41-4b20-9c12-375ca7133c85\") " pod="openstack/dnsmasq-dns-77966f9df5-gnnwr" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.320289 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54911f74-850d-4a58-8bc2-56381021ce79-config\") pod \"dnsmasq-dns-5d58585b49-w5l94\" (UID: \"54911f74-850d-4a58-8bc2-56381021ce79\") " pod="openstack/dnsmasq-dns-5d58585b49-w5l94" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.321303 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54911f74-850d-4a58-8bc2-56381021ce79-config\") pod \"dnsmasq-dns-5d58585b49-w5l94\" (UID: \"54911f74-850d-4a58-8bc2-56381021ce79\") " pod="openstack/dnsmasq-dns-5d58585b49-w5l94" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.347558 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4shc\" (UniqueName: \"kubernetes.io/projected/54911f74-850d-4a58-8bc2-56381021ce79-kube-api-access-z4shc\") pod \"dnsmasq-dns-5d58585b49-w5l94\" (UID: \"54911f74-850d-4a58-8bc2-56381021ce79\") " pod="openstack/dnsmasq-dns-5d58585b49-w5l94" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.421386 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t2q8\" (UniqueName: \"kubernetes.io/projected/9efc9658-6b41-4b20-9c12-375ca7133c85-kube-api-access-4t2q8\") pod \"dnsmasq-dns-77966f9df5-gnnwr\" (UID: \"9efc9658-6b41-4b20-9c12-375ca7133c85\") " pod="openstack/dnsmasq-dns-77966f9df5-gnnwr" Nov 21 10:01:59 
crc kubenswrapper[4972]: I1121 10:01:59.421451 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efc9658-6b41-4b20-9c12-375ca7133c85-config\") pod \"dnsmasq-dns-77966f9df5-gnnwr\" (UID: \"9efc9658-6b41-4b20-9c12-375ca7133c85\") " pod="openstack/dnsmasq-dns-77966f9df5-gnnwr" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.421582 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9efc9658-6b41-4b20-9c12-375ca7133c85-dns-svc\") pod \"dnsmasq-dns-77966f9df5-gnnwr\" (UID: \"9efc9658-6b41-4b20-9c12-375ca7133c85\") " pod="openstack/dnsmasq-dns-77966f9df5-gnnwr" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.422426 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efc9658-6b41-4b20-9c12-375ca7133c85-config\") pod \"dnsmasq-dns-77966f9df5-gnnwr\" (UID: \"9efc9658-6b41-4b20-9c12-375ca7133c85\") " pod="openstack/dnsmasq-dns-77966f9df5-gnnwr" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.422514 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9efc9658-6b41-4b20-9c12-375ca7133c85-dns-svc\") pod \"dnsmasq-dns-77966f9df5-gnnwr\" (UID: \"9efc9658-6b41-4b20-9c12-375ca7133c85\") " pod="openstack/dnsmasq-dns-77966f9df5-gnnwr" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.436433 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t2q8\" (UniqueName: \"kubernetes.io/projected/9efc9658-6b41-4b20-9c12-375ca7133c85-kube-api-access-4t2q8\") pod \"dnsmasq-dns-77966f9df5-gnnwr\" (UID: \"9efc9658-6b41-4b20-9c12-375ca7133c85\") " pod="openstack/dnsmasq-dns-77966f9df5-gnnwr" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.498163 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d58585b49-w5l94" Nov 21 10:01:59 crc kubenswrapper[4972]: I1121 10:01:59.597745 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77966f9df5-gnnwr" Nov 21 10:02:00 crc kubenswrapper[4972]: I1121 10:02:00.003786 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d58585b49-w5l94"] Nov 21 10:02:00 crc kubenswrapper[4972]: I1121 10:02:00.074055 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77966f9df5-gnnwr"] Nov 21 10:02:00 crc kubenswrapper[4972]: W1121 10:02:00.077712 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9efc9658_6b41_4b20_9c12_375ca7133c85.slice/crio-ba43c26335bc4f999ff089101996892eca1f10d8dd029cc4762bdbc705b8387c WatchSource:0}: Error finding container ba43c26335bc4f999ff089101996892eca1f10d8dd029cc4762bdbc705b8387c: Status 404 returned error can't find the container with id ba43c26335bc4f999ff089101996892eca1f10d8dd029cc4762bdbc705b8387c Nov 21 10:02:00 crc kubenswrapper[4972]: I1121 10:02:00.327724 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77966f9df5-gnnwr" event={"ID":"9efc9658-6b41-4b20-9c12-375ca7133c85","Type":"ContainerStarted","Data":"ba43c26335bc4f999ff089101996892eca1f10d8dd029cc4762bdbc705b8387c"} Nov 21 10:02:00 crc kubenswrapper[4972]: I1121 10:02:00.329684 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d58585b49-w5l94" event={"ID":"54911f74-850d-4a58-8bc2-56381021ce79","Type":"ContainerStarted","Data":"b4ef97df95de67af1ebed4bb7c33ecd3a1fa1d2266b897f440b8fc8b0593304c"} Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.032335 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d58585b49-w5l94"] Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.059579 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c84d8598c-pw9pv"] Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.061282 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c84d8598c-pw9pv" Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.077249 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c84d8598c-pw9pv"] Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.258975 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa2af7d6-efa4-459b-9a15-bdb10778978b-config\") pod \"dnsmasq-dns-7c84d8598c-pw9pv\" (UID: \"fa2af7d6-efa4-459b-9a15-bdb10778978b\") " pod="openstack/dnsmasq-dns-7c84d8598c-pw9pv" Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.259044 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa2af7d6-efa4-459b-9a15-bdb10778978b-dns-svc\") pod \"dnsmasq-dns-7c84d8598c-pw9pv\" (UID: \"fa2af7d6-efa4-459b-9a15-bdb10778978b\") " pod="openstack/dnsmasq-dns-7c84d8598c-pw9pv" Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.259116 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxthm\" (UniqueName: \"kubernetes.io/projected/fa2af7d6-efa4-459b-9a15-bdb10778978b-kube-api-access-kxthm\") pod \"dnsmasq-dns-7c84d8598c-pw9pv\" (UID: \"fa2af7d6-efa4-459b-9a15-bdb10778978b\") " pod="openstack/dnsmasq-dns-7c84d8598c-pw9pv" Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.312755 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77966f9df5-gnnwr"] Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.335743 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85965d46c9-fbww9"] Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.337120 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85965d46c9-fbww9" Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.347357 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85965d46c9-fbww9"] Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.360244 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa2af7d6-efa4-459b-9a15-bdb10778978b-config\") pod \"dnsmasq-dns-7c84d8598c-pw9pv\" (UID: \"fa2af7d6-efa4-459b-9a15-bdb10778978b\") " pod="openstack/dnsmasq-dns-7c84d8598c-pw9pv" Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.360306 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa2af7d6-efa4-459b-9a15-bdb10778978b-dns-svc\") pod \"dnsmasq-dns-7c84d8598c-pw9pv\" (UID: \"fa2af7d6-efa4-459b-9a15-bdb10778978b\") " pod="openstack/dnsmasq-dns-7c84d8598c-pw9pv" Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.360377 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ca8eae-ab19-4dbc-80b2-036cc49daf01-config\") pod \"dnsmasq-dns-85965d46c9-fbww9\" (UID: \"43ca8eae-ab19-4dbc-80b2-036cc49daf01\") " pod="openstack/dnsmasq-dns-85965d46c9-fbww9" Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.360418 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxthm\" (UniqueName: \"kubernetes.io/projected/fa2af7d6-efa4-459b-9a15-bdb10778978b-kube-api-access-kxthm\") pod \"dnsmasq-dns-7c84d8598c-pw9pv\" (UID: \"fa2af7d6-efa4-459b-9a15-bdb10778978b\") " pod="openstack/dnsmasq-dns-7c84d8598c-pw9pv" Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.360466 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43ca8eae-ab19-4dbc-80b2-036cc49daf01-dns-svc\") pod \"dnsmasq-dns-85965d46c9-fbww9\" (UID: \"43ca8eae-ab19-4dbc-80b2-036cc49daf01\") " pod="openstack/dnsmasq-dns-85965d46c9-fbww9" Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.360515 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztcb8\" (UniqueName: \"kubernetes.io/projected/43ca8eae-ab19-4dbc-80b2-036cc49daf01-kube-api-access-ztcb8\") pod \"dnsmasq-dns-85965d46c9-fbww9\" (UID: \"43ca8eae-ab19-4dbc-80b2-036cc49daf01\") " pod="openstack/dnsmasq-dns-85965d46c9-fbww9" Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.361363 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa2af7d6-efa4-459b-9a15-bdb10778978b-dns-svc\") pod \"dnsmasq-dns-7c84d8598c-pw9pv\" (UID: \"fa2af7d6-efa4-459b-9a15-bdb10778978b\") " pod="openstack/dnsmasq-dns-7c84d8598c-pw9pv" Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.362330 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa2af7d6-efa4-459b-9a15-bdb10778978b-config\") pod \"dnsmasq-dns-7c84d8598c-pw9pv\" (UID: \"fa2af7d6-efa4-459b-9a15-bdb10778978b\") " pod="openstack/dnsmasq-dns-7c84d8598c-pw9pv" Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.394452 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxthm\" (UniqueName: 
\"kubernetes.io/projected/fa2af7d6-efa4-459b-9a15-bdb10778978b-kube-api-access-kxthm\") pod \"dnsmasq-dns-7c84d8598c-pw9pv\" (UID: \"fa2af7d6-efa4-459b-9a15-bdb10778978b\") " pod="openstack/dnsmasq-dns-7c84d8598c-pw9pv" Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.461330 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ca8eae-ab19-4dbc-80b2-036cc49daf01-config\") pod \"dnsmasq-dns-85965d46c9-fbww9\" (UID: \"43ca8eae-ab19-4dbc-80b2-036cc49daf01\") " pod="openstack/dnsmasq-dns-85965d46c9-fbww9" Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.461390 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43ca8eae-ab19-4dbc-80b2-036cc49daf01-dns-svc\") pod \"dnsmasq-dns-85965d46c9-fbww9\" (UID: \"43ca8eae-ab19-4dbc-80b2-036cc49daf01\") " pod="openstack/dnsmasq-dns-85965d46c9-fbww9" Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.461418 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztcb8\" (UniqueName: \"kubernetes.io/projected/43ca8eae-ab19-4dbc-80b2-036cc49daf01-kube-api-access-ztcb8\") pod \"dnsmasq-dns-85965d46c9-fbww9\" (UID: \"43ca8eae-ab19-4dbc-80b2-036cc49daf01\") " pod="openstack/dnsmasq-dns-85965d46c9-fbww9" Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.462329 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43ca8eae-ab19-4dbc-80b2-036cc49daf01-dns-svc\") pod \"dnsmasq-dns-85965d46c9-fbww9\" (UID: \"43ca8eae-ab19-4dbc-80b2-036cc49daf01\") " pod="openstack/dnsmasq-dns-85965d46c9-fbww9" Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.462407 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ca8eae-ab19-4dbc-80b2-036cc49daf01-config\") pod \"dnsmasq-dns-85965d46c9-fbww9\" (UID: \"43ca8eae-ab19-4dbc-80b2-036cc49daf01\") " pod="openstack/dnsmasq-dns-85965d46c9-fbww9" Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.488394 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztcb8\" (UniqueName: \"kubernetes.io/projected/43ca8eae-ab19-4dbc-80b2-036cc49daf01-kube-api-access-ztcb8\") pod \"dnsmasq-dns-85965d46c9-fbww9\" (UID: \"43ca8eae-ab19-4dbc-80b2-036cc49daf01\") " pod="openstack/dnsmasq-dns-85965d46c9-fbww9" Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.668190 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85965d46c9-fbww9" Nov 21 10:02:02 crc kubenswrapper[4972]: I1121 10:02:02.679696 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c84d8598c-pw9pv" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.203405 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.212115 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.212599 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.216099 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.216348 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.216562 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-2hhwf" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.216716 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.216760 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.216923 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.216971 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.229528 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c84d8598c-pw9pv"] Nov 21 10:02:03 crc kubenswrapper[4972]: W1121 10:02:03.242914 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa2af7d6_efa4_459b_9a15_bdb10778978b.slice/crio-1561b8e84722b64467618073d6dea9971f1ad79f4d5a92a642e1d4326557b07e WatchSource:0}: Error finding container 1561b8e84722b64467618073d6dea9971f1ad79f4d5a92a642e1d4326557b07e: Status 404 returned error can't find the container with id 1561b8e84722b64467618073d6dea9971f1ad79f4d5a92a642e1d4326557b07e Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.314333 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85965d46c9-fbww9"] Nov 21 10:02:03 crc kubenswrapper[4972]: W1121 10:02:03.318479 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ca8eae_ab19_4dbc_80b2_036cc49daf01.slice/crio-15d506f6db2de65bc0d1304310975dd86825b2bc9701fe7969aefaf3267f0fd8 WatchSource:0}: Error finding container 15d506f6db2de65bc0d1304310975dd86825b2bc9701fe7969aefaf3267f0fd8: Status 404 returned error can't find the container with id 15d506f6db2de65bc0d1304310975dd86825b2bc9701fe7969aefaf3267f0fd8 Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.362923 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c84d8598c-pw9pv" event={"ID":"fa2af7d6-efa4-459b-9a15-bdb10778978b","Type":"ContainerStarted","Data":"1561b8e84722b64467618073d6dea9971f1ad79f4d5a92a642e1d4326557b07e"} Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.368588 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85965d46c9-fbww9" event={"ID":"43ca8eae-ab19-4dbc-80b2-036cc49daf01","Type":"ContainerStarted","Data":"15d506f6db2de65bc0d1304310975dd86825b2bc9701fe7969aefaf3267f0fd8"} Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.377129 4972 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-pod-info\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.377199 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.377253 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-server-conf\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.377276 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.377305 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.377334 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-config-data\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.377376 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.377460 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7jvj\" (UniqueName: \"kubernetes.io/projected/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-kube-api-access-c7jvj\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.377496 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.377517 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.377538 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.483719 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7jvj\" (UniqueName: \"kubernetes.io/projected/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-kube-api-access-c7jvj\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.484030 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.484051 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.484072 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.484127 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-pod-info\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.484148 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.484173 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-server-conf\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.484190 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc 
kubenswrapper[4972]: I1121 10:02:03.484205 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.484225 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-config-data\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.484252 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.524244 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.526859 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.527491 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-config-data\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.529393 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-pod-info\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.530417 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.530669 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.530751 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.530814 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-server-conf\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.531861 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.532738 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.534561 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.535142 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.535304 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.535696 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.536222 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-nmp8h" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.536377 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.536694 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.537107 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.539258 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.554488 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7jvj\" (UniqueName: \"kubernetes.io/projected/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-kube-api-access-c7jvj\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.560117 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.562445 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " 
pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.691416 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.691482 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.691555 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.691586 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2bc44abc-7710-432b-b503-fd54e3afeede-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.691620 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.691660 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.691728 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.691801 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltknk\" (UniqueName: \"kubernetes.io/projected/2bc44abc-7710-432b-b503-fd54e3afeede-kube-api-access-ltknk\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.691877 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.692231 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.692306 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2bc44abc-7710-432b-b503-fd54e3afeede-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.796052 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2bc44abc-7710-432b-b503-fd54e3afeede-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.796126 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.796181 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.796214 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.796262 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2bc44abc-7710-432b-b503-fd54e3afeede-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.796286 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.796301 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 
10:02:03.796347 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.796375 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltknk\" (UniqueName: \"kubernetes.io/projected/2bc44abc-7710-432b-b503-fd54e3afeede-kube-api-access-ltknk\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.796391 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.796428 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.797796 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.798783 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.798953 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.799644 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.800429 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.800486 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.801666 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.812321 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2bc44abc-7710-432b-b503-fd54e3afeede-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.812995 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.816768 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2bc44abc-7710-432b-b503-fd54e3afeede-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.818182 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltknk\" (UniqueName: \"kubernetes.io/projected/2bc44abc-7710-432b-b503-fd54e3afeede-kube-api-access-ltknk\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.829166 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.833964 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 10:02:03 crc kubenswrapper[4972]: I1121 10:02:03.919083 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:02:04 crc kubenswrapper[4972]: I1121 10:02:04.316710 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 10:02:04 crc kubenswrapper[4972]: I1121 10:02:04.383029 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"392b5094-f8ef-47b8-8dc5-9e1d2dbef612","Type":"ContainerStarted","Data":"af52000d929d09038b9c9513fb117f07e78c19b6a2715e2afbbe9ecf4b69b07f"} Nov 21 10:02:04 crc kubenswrapper[4972]: I1121 10:02:04.572328 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 10:02:04 crc kubenswrapper[4972]: W1121 10:02:04.582090 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bc44abc_7710_432b_b503_fd54e3afeede.slice/crio-a3ce3ee1c820c06cf672fdfc25dc02c7cd4b5101b0db3597d6e585aad5886b89 WatchSource:0}: Error finding container a3ce3ee1c820c06cf672fdfc25dc02c7cd4b5101b0db3597d6e585aad5886b89: Status 404 returned error can't find the container with id a3ce3ee1c820c06cf672fdfc25dc02c7cd4b5101b0db3597d6e585aad5886b89 Nov 21 10:02:04 crc kubenswrapper[4972]: I1121 10:02:04.814507 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 21 10:02:04 crc kubenswrapper[4972]: I1121 10:02:04.816249 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 21 10:02:04 crc kubenswrapper[4972]: I1121 10:02:04.820134 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 21 10:02:04 crc kubenswrapper[4972]: I1121 10:02:04.820251 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 21 10:02:04 crc kubenswrapper[4972]: I1121 10:02:04.820386 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-7mcmr" Nov 21 10:02:04 crc kubenswrapper[4972]: I1121 10:02:04.820573 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 21 10:02:04 crc kubenswrapper[4972]: I1121 10:02:04.827811 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 21 10:02:04 crc kubenswrapper[4972]: I1121 10:02:04.842499 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 21 10:02:04 crc kubenswrapper[4972]: I1121 10:02:04.919369 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:04 crc kubenswrapper[4972]: I1121 10:02:04.919946 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:04 crc kubenswrapper[4972]: I1121 10:02:04.920267 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:04 crc kubenswrapper[4972]: I1121 10:02:04.920554 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:04 crc kubenswrapper[4972]: I1121 10:02:04.920747 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:04 crc kubenswrapper[4972]: I1121 10:02:04.921042 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9558f\" (UniqueName: \"kubernetes.io/projected/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-kube-api-access-9558f\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:04 crc kubenswrapper[4972]: I1121 10:02:04.924575 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-kolla-config\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:04 crc kubenswrapper[4972]: I1121 10:02:04.924912 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-config-data-default\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:05 crc kubenswrapper[4972]: I1121 10:02:05.027992 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:05 crc kubenswrapper[4972]: I1121 10:02:05.028034 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:05 crc kubenswrapper[4972]: I1121 10:02:05.028056 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:05 crc kubenswrapper[4972]: I1121 10:02:05.028081 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: 
\"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:05 crc kubenswrapper[4972]: I1121 10:02:05.028114 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9558f\" (UniqueName: \"kubernetes.io/projected/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-kube-api-access-9558f\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:05 crc kubenswrapper[4972]: I1121 10:02:05.028145 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-kolla-config\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:05 crc kubenswrapper[4972]: I1121 10:02:05.028172 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-config-data-default\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:05 crc kubenswrapper[4972]: I1121 10:02:05.028214 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:05 crc kubenswrapper[4972]: I1121 10:02:05.028692 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-galera-0" Nov 21 10:02:05 crc kubenswrapper[4972]: I1121 10:02:05.029416 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-kolla-config\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:05 crc kubenswrapper[4972]: I1121 10:02:05.030061 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-config-data-default\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:05 crc kubenswrapper[4972]: I1121 10:02:05.030714 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:05 crc kubenswrapper[4972]: I1121 10:02:05.030989 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:05 crc kubenswrapper[4972]: I1121 10:02:05.039019 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:05 crc kubenswrapper[4972]: I1121 10:02:05.050617 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:05 crc kubenswrapper[4972]: I1121 10:02:05.065890 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:05 crc kubenswrapper[4972]: I1121 10:02:05.075791 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9558f\" (UniqueName: \"kubernetes.io/projected/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-kube-api-access-9558f\") pod \"openstack-galera-0\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " pod="openstack/openstack-galera-0" Nov 21 10:02:05 crc kubenswrapper[4972]: I1121 10:02:05.158700 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 21 10:02:05 crc kubenswrapper[4972]: I1121 10:02:05.421558 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2bc44abc-7710-432b-b503-fd54e3afeede","Type":"ContainerStarted","Data":"a3ce3ee1c820c06cf672fdfc25dc02c7cd4b5101b0db3597d6e585aad5886b89"} Nov 21 10:02:05 crc kubenswrapper[4972]: I1121 10:02:05.835247 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.079640 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.081749 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.086494 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.086799 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-rt8p9" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.086975 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.087134 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.111245 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.153179 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8ed54a06-08b9-41a2-92d9-a745631e053c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.153251 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed54a06-08b9-41a2-92d9-a745631e053c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.153288 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed54a06-08b9-41a2-92d9-a745631e053c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.153366 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvt2p\" (UniqueName: \"kubernetes.io/projected/8ed54a06-08b9-41a2-92d9-a745631e053c-kube-api-access-zvt2p\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.153426 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8ed54a06-08b9-41a2-92d9-a745631e053c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.153478 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8ed54a06-08b9-41a2-92d9-a745631e053c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.153507 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.153545 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ed54a06-08b9-41a2-92d9-a745631e053c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.267511 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8ed54a06-08b9-41a2-92d9-a745631e053c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.267590 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8ed54a06-08b9-41a2-92d9-a745631e053c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.267614 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.267634 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ed54a06-08b9-41a2-92d9-a745631e053c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.267710 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8ed54a06-08b9-41a2-92d9-a745631e053c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.267761 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed54a06-08b9-41a2-92d9-a745631e053c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.267800 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed54a06-08b9-41a2-92d9-a745631e053c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.267896 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvt2p\" (UniqueName: 
\"kubernetes.io/projected/8ed54a06-08b9-41a2-92d9-a745631e053c-kube-api-access-zvt2p\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.268554 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8ed54a06-08b9-41a2-92d9-a745631e053c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.269248 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8ed54a06-08b9-41a2-92d9-a745631e053c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.269516 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.274546 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8ed54a06-08b9-41a2-92d9-a745631e053c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.276193 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.276960 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ed54a06-08b9-41a2-92d9-a745631e053c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.281356 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.284485 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.284742 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.285007 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed54a06-08b9-41a2-92d9-a745631e053c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.285120 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-gqm6t" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.287080 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed54a06-08b9-41a2-92d9-a745631e053c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.296291 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.310768 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvt2p\" (UniqueName: \"kubernetes.io/projected/8ed54a06-08b9-41a2-92d9-a745631e053c-kube-api-access-zvt2p\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.319937 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.372803 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz4dm\" (UniqueName: \"kubernetes.io/projected/481cc370-a05a-4516-99f2-f94a0056a70e-kube-api-access-kz4dm\") pod \"memcached-0\" (UID: \"481cc370-a05a-4516-99f2-f94a0056a70e\") " pod="openstack/memcached-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.373058 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/481cc370-a05a-4516-99f2-f94a0056a70e-config-data\") pod \"memcached-0\" (UID: \"481cc370-a05a-4516-99f2-f94a0056a70e\") " pod="openstack/memcached-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.373174 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/481cc370-a05a-4516-99f2-f94a0056a70e-kolla-config\") pod \"memcached-0\" (UID: \"481cc370-a05a-4516-99f2-f94a0056a70e\") " pod="openstack/memcached-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.373246 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/481cc370-a05a-4516-99f2-f94a0056a70e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"481cc370-a05a-4516-99f2-f94a0056a70e\") " pod="openstack/memcached-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.373341 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481cc370-a05a-4516-99f2-f94a0056a70e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"481cc370-a05a-4516-99f2-f94a0056a70e\") " pod="openstack/memcached-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.406724 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.437053 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8027f46e-1fe2-46ad-9226-11b2cc3f8da6","Type":"ContainerStarted","Data":"7d69305495d9d3b2d9a52587a6ae09d35762d7a68f360075751ea5685f4b08ca"} Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.482324 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz4dm\" (UniqueName: \"kubernetes.io/projected/481cc370-a05a-4516-99f2-f94a0056a70e-kube-api-access-kz4dm\") pod \"memcached-0\" (UID: \"481cc370-a05a-4516-99f2-f94a0056a70e\") " pod="openstack/memcached-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.482427 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/481cc370-a05a-4516-99f2-f94a0056a70e-config-data\") pod \"memcached-0\" (UID: \"481cc370-a05a-4516-99f2-f94a0056a70e\") " pod="openstack/memcached-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.482455 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/481cc370-a05a-4516-99f2-f94a0056a70e-kolla-config\") pod \"memcached-0\" (UID: \"481cc370-a05a-4516-99f2-f94a0056a70e\") " pod="openstack/memcached-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.482492 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/481cc370-a05a-4516-99f2-f94a0056a70e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"481cc370-a05a-4516-99f2-f94a0056a70e\") " pod="openstack/memcached-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.482532 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481cc370-a05a-4516-99f2-f94a0056a70e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"481cc370-a05a-4516-99f2-f94a0056a70e\") " pod="openstack/memcached-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.483803 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/481cc370-a05a-4516-99f2-f94a0056a70e-kolla-config\") pod \"memcached-0\" (UID: \"481cc370-a05a-4516-99f2-f94a0056a70e\") " pod="openstack/memcached-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.486570 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481cc370-a05a-4516-99f2-f94a0056a70e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"481cc370-a05a-4516-99f2-f94a0056a70e\") " pod="openstack/memcached-0" Nov 21 10:02:06 crc 
kubenswrapper[4972]: I1121 10:02:06.487128 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/481cc370-a05a-4516-99f2-f94a0056a70e-config-data\") pod \"memcached-0\" (UID: \"481cc370-a05a-4516-99f2-f94a0056a70e\") " pod="openstack/memcached-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.490210 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/481cc370-a05a-4516-99f2-f94a0056a70e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"481cc370-a05a-4516-99f2-f94a0056a70e\") " pod="openstack/memcached-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.541126 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz4dm\" (UniqueName: \"kubernetes.io/projected/481cc370-a05a-4516-99f2-f94a0056a70e-kube-api-access-kz4dm\") pod \"memcached-0\" (UID: \"481cc370-a05a-4516-99f2-f94a0056a70e\") " pod="openstack/memcached-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.705225 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 21 10:02:06 crc kubenswrapper[4972]: I1121 10:02:06.976313 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 21 10:02:08 crc kubenswrapper[4972]: I1121 10:02:08.070973 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 10:02:08 crc kubenswrapper[4972]: I1121 10:02:08.072187 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 10:02:08 crc kubenswrapper[4972]: I1121 10:02:08.081147 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-87ftf" Nov 21 10:02:08 crc kubenswrapper[4972]: I1121 10:02:08.087113 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 10:02:08 crc kubenswrapper[4972]: I1121 10:02:08.215044 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67ns5\" (UniqueName: \"kubernetes.io/projected/73b2b355-c8e4-496c-8d3c-2927280fed38-kube-api-access-67ns5\") pod \"kube-state-metrics-0\" (UID: \"73b2b355-c8e4-496c-8d3c-2927280fed38\") " pod="openstack/kube-state-metrics-0" Nov 21 10:02:08 crc kubenswrapper[4972]: I1121 10:02:08.316936 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67ns5\" (UniqueName: \"kubernetes.io/projected/73b2b355-c8e4-496c-8d3c-2927280fed38-kube-api-access-67ns5\") pod \"kube-state-metrics-0\" (UID: \"73b2b355-c8e4-496c-8d3c-2927280fed38\") " pod="openstack/kube-state-metrics-0" Nov 21 10:02:08 crc kubenswrapper[4972]: I1121 10:02:08.334722 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67ns5\" (UniqueName: \"kubernetes.io/projected/73b2b355-c8e4-496c-8d3c-2927280fed38-kube-api-access-67ns5\") pod \"kube-state-metrics-0\" (UID: \"73b2b355-c8e4-496c-8d3c-2927280fed38\") " pod="openstack/kube-state-metrics-0" Nov 21 10:02:08 crc kubenswrapper[4972]: I1121 10:02:08.414748 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 10:02:11 crc kubenswrapper[4972]: I1121 10:02:11.508366 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8ed54a06-08b9-41a2-92d9-a745631e053c","Type":"ContainerStarted","Data":"96a0bf3e8d149c7ac801bfa3b8f34dcfb968f9f80764a5fead1c97548120d9d6"} Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.034441 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5q7hj"] Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.057447 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-4z7b5"] Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.061137 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.062199 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.068528 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.068770 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-chdf7" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.069040 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.077326 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5q7hj"] Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.097608 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4z7b5"] Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.191419 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ea385c8-0af5-4759-acf1-ee6dee48e488-scripts\") pod \"ovn-controller-ovs-4z7b5\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.191465 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ab92bde-9b45-49ca-a6e9-43c8921b3002-combined-ca-bundle\") pod \"ovn-controller-5q7hj\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.191493 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-var-lib\") pod \"ovn-controller-ovs-4z7b5\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.191517 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9ab92bde-9b45-49ca-a6e9-43c8921b3002-var-log-ovn\") pod \"ovn-controller-5q7hj\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.191551 4972 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ab92bde-9b45-49ca-a6e9-43c8921b3002-scripts\") pod \"ovn-controller-5q7hj\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.191587 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-var-run\") pod \"ovn-controller-ovs-4z7b5\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.191637 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4h5g\" (UniqueName: \"kubernetes.io/projected/5ea385c8-0af5-4759-acf1-ee6dee48e488-kube-api-access-b4h5g\") pod \"ovn-controller-ovs-4z7b5\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.191688 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-etc-ovs\") pod \"ovn-controller-ovs-4z7b5\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.191705 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ab92bde-9b45-49ca-a6e9-43c8921b3002-var-run-ovn\") pod \"ovn-controller-5q7hj\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.191725 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-var-log\") pod \"ovn-controller-ovs-4z7b5\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.191750 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9ab92bde-9b45-49ca-a6e9-43c8921b3002-var-run\") pod \"ovn-controller-5q7hj\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.191773 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab92bde-9b45-49ca-a6e9-43c8921b3002-ovn-controller-tls-certs\") pod \"ovn-controller-5q7hj\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.191790 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdqnl\" (UniqueName: \"kubernetes.io/projected/9ab92bde-9b45-49ca-a6e9-43c8921b3002-kube-api-access-qdqnl\") pod \"ovn-controller-5q7hj\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.292953 4972 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab92bde-9b45-49ca-a6e9-43c8921b3002-ovn-controller-tls-certs\") pod \"ovn-controller-5q7hj\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.293010 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdqnl\" (UniqueName: \"kubernetes.io/projected/9ab92bde-9b45-49ca-a6e9-43c8921b3002-kube-api-access-qdqnl\") pod \"ovn-controller-5q7hj\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.293056 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ea385c8-0af5-4759-acf1-ee6dee48e488-scripts\") pod \"ovn-controller-ovs-4z7b5\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.293082 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ab92bde-9b45-49ca-a6e9-43c8921b3002-combined-ca-bundle\") pod \"ovn-controller-5q7hj\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.293108 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-var-lib\") pod \"ovn-controller-ovs-4z7b5\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.293127 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9ab92bde-9b45-49ca-a6e9-43c8921b3002-var-log-ovn\") pod \"ovn-controller-5q7hj\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.293160 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-var-run\") pod \"ovn-controller-ovs-4z7b5\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.293181 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ab92bde-9b45-49ca-a6e9-43c8921b3002-scripts\") pod \"ovn-controller-5q7hj\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.293220 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4h5g\" (UniqueName: \"kubernetes.io/projected/5ea385c8-0af5-4759-acf1-ee6dee48e488-kube-api-access-b4h5g\") pod \"ovn-controller-ovs-4z7b5\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.293276 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-etc-ovs\") pod 
\"ovn-controller-ovs-4z7b5\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.293298 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ab92bde-9b45-49ca-a6e9-43c8921b3002-var-run-ovn\") pod \"ovn-controller-5q7hj\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.293321 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-var-log\") pod \"ovn-controller-ovs-4z7b5\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.293348 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9ab92bde-9b45-49ca-a6e9-43c8921b3002-var-run\") pod \"ovn-controller-5q7hj\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.293848 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9ab92bde-9b45-49ca-a6e9-43c8921b3002-var-run\") pod \"ovn-controller-5q7hj\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.293922 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-var-run\") pod \"ovn-controller-ovs-4z7b5\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.294237 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-etc-ovs\") pod \"ovn-controller-ovs-4z7b5\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.294255 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ab92bde-9b45-49ca-a6e9-43c8921b3002-var-run-ovn\") pod \"ovn-controller-5q7hj\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.294237 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-var-log\") pod \"ovn-controller-ovs-4z7b5\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.294304 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9ab92bde-9b45-49ca-a6e9-43c8921b3002-var-log-ovn\") pod \"ovn-controller-5q7hj\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.294329 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-var-lib\") pod \"ovn-controller-ovs-4z7b5\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.296063 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ea385c8-0af5-4759-acf1-ee6dee48e488-scripts\") pod \"ovn-controller-ovs-4z7b5\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.297573 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ab92bde-9b45-49ca-a6e9-43c8921b3002-scripts\") pod \"ovn-controller-5q7hj\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.299142 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab92bde-9b45-49ca-a6e9-43c8921b3002-ovn-controller-tls-certs\") pod \"ovn-controller-5q7hj\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.306524 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ab92bde-9b45-49ca-a6e9-43c8921b3002-combined-ca-bundle\") pod \"ovn-controller-5q7hj\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.316375 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4h5g\" (UniqueName: \"kubernetes.io/projected/5ea385c8-0af5-4759-acf1-ee6dee48e488-kube-api-access-b4h5g\") pod \"ovn-controller-ovs-4z7b5\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.316904 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdqnl\" (UniqueName: \"kubernetes.io/projected/9ab92bde-9b45-49ca-a6e9-43c8921b3002-kube-api-access-qdqnl\") pod \"ovn-controller-5q7hj\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.408351 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:12 crc kubenswrapper[4972]: I1121 10:02:12.422791 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.236766 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.239796 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.245148 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.245317 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-67ftb" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.245568 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.246004 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.246305 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.254175 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.306381 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c438ca-0f93-434d-81ea-29ae82b217bf-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.306657 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.306886 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7zzq\" (UniqueName: \"kubernetes.io/projected/c9c438ca-0f93-434d-81ea-29ae82b217bf-kube-api-access-n7zzq\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.306966 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9c438ca-0f93-434d-81ea-29ae82b217bf-config\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.307026 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9c438ca-0f93-434d-81ea-29ae82b217bf-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.307079 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9c438ca-0f93-434d-81ea-29ae82b217bf-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.307276 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c9c438ca-0f93-434d-81ea-29ae82b217bf-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.307418 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c438ca-0f93-434d-81ea-29ae82b217bf-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.408730 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.408792 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7zzq\" (UniqueName: \"kubernetes.io/projected/c9c438ca-0f93-434d-81ea-29ae82b217bf-kube-api-access-n7zzq\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.408816 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9c438ca-0f93-434d-81ea-29ae82b217bf-config\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.408854 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9c438ca-0f93-434d-81ea-29ae82b217bf-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.408876 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9c438ca-0f93-434d-81ea-29ae82b217bf-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.408904 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c438ca-0f93-434d-81ea-29ae82b217bf-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.408926 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c438ca-0f93-434d-81ea-29ae82b217bf-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.408977 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c438ca-0f93-434d-81ea-29ae82b217bf-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 
10:02:13.409200 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.409408 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9c438ca-0f93-434d-81ea-29ae82b217bf-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.410299 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9c438ca-0f93-434d-81ea-29ae82b217bf-config\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.413229 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9c438ca-0f93-434d-81ea-29ae82b217bf-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.414544 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c438ca-0f93-434d-81ea-29ae82b217bf-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.414597 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c438ca-0f93-434d-81ea-29ae82b217bf-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.415909 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c438ca-0f93-434d-81ea-29ae82b217bf-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.431212 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7zzq\" (UniqueName: \"kubernetes.io/projected/c9c438ca-0f93-434d-81ea-29ae82b217bf-kube-api-access-n7zzq\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.433289 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:13 crc kubenswrapper[4972]: I1121 10:02:13.578928 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.163199 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.165215 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.169488 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.170099 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-6pxz6" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.170133 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.172550 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.176703 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.248457 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44805331-e34b-4455-a744-4c8fe27a1b9e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.248764 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/44805331-e34b-4455-a744-4c8fe27a1b9e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.248905 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44805331-e34b-4455-a744-4c8fe27a1b9e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.249014 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.249108 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44805331-e34b-4455-a744-4c8fe27a1b9e-config\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.249223 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg45j\" (UniqueName: \"kubernetes.io/projected/44805331-e34b-4455-a744-4c8fe27a1b9e-kube-api-access-tg45j\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 
10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.249339 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44805331-e34b-4455-a744-4c8fe27a1b9e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.249518 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/44805331-e34b-4455-a744-4c8fe27a1b9e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.350938 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/44805331-e34b-4455-a744-4c8fe27a1b9e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.351000 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44805331-e34b-4455-a744-4c8fe27a1b9e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.351038 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/44805331-e34b-4455-a744-4c8fe27a1b9e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.351073 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44805331-e34b-4455-a744-4c8fe27a1b9e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.351113 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.351139 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44805331-e34b-4455-a744-4c8fe27a1b9e-config\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.351175 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg45j\" (UniqueName: \"kubernetes.io/projected/44805331-e34b-4455-a744-4c8fe27a1b9e-kube-api-access-tg45j\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.351200 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44805331-e34b-4455-a744-4c8fe27a1b9e-scripts\") pod 
\"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.351505 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/44805331-e34b-4455-a744-4c8fe27a1b9e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.351919 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.352461 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44805331-e34b-4455-a744-4c8fe27a1b9e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.353671 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44805331-e34b-4455-a744-4c8fe27a1b9e-config\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.356714 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/44805331-e34b-4455-a744-4c8fe27a1b9e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.356730 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44805331-e34b-4455-a744-4c8fe27a1b9e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.358344 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44805331-e34b-4455-a744-4c8fe27a1b9e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.370210 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.379059 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg45j\" (UniqueName: \"kubernetes.io/projected/44805331-e34b-4455-a744-4c8fe27a1b9e-kube-api-access-tg45j\") pod \"ovsdbserver-sb-0\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:15 crc kubenswrapper[4972]: I1121 10:02:15.531869 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:21 crc kubenswrapper[4972]: E1121 10:02:21.343155 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:8f8ade19bc904e0b06eaa2d55539cdebb1df40512845962a7aa672223332df90" Nov 21 10:02:21 crc kubenswrapper[4972]: E1121 10:02:21.343903 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:8f8ade19bc904e0b06eaa2d55539cdebb1df40512845962a7aa672223332df90,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ltknk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(2bc44abc-7710-432b-b503-fd54e3afeede): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 10:02:21 crc kubenswrapper[4972]: E1121 10:02:21.345175 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="2bc44abc-7710-432b-b503-fd54e3afeede" Nov 21 10:02:26 crc kubenswrapper[4972]: E1121 10:02:26.415386 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:8f8ade19bc904e0b06eaa2d55539cdebb1df40512845962a7aa672223332df90\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="2bc44abc-7710-432b-b503-fd54e3afeede" Nov 21 10:02:26 crc kubenswrapper[4972]: I1121 10:02:26.868109 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 10:02:34 crc kubenswrapper[4972]: E1121 10:02:34.036742 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:3db27fbfc3fabe2f62c68ab1b9f24383a73554f2d6d1f178147088832619013a" Nov 21 10:02:34 crc kubenswrapper[4972]: E1121 10:02:34.037492 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:3db27fbfc3fabe2f62c68ab1b9f24383a73554f2d6d1f178147088832619013a,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9558f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(8027f46e-1fe2-46ad-9226-11b2cc3f8da6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 10:02:34 crc kubenswrapper[4972]: 
E1121 10:02:34.038631 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="8027f46e-1fe2-46ad-9226-11b2cc3f8da6" Nov 21 10:02:34 crc kubenswrapper[4972]: W1121 10:02:34.056165 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73b2b355_c8e4_496c_8d3c_2927280fed38.slice/crio-b68e18b7f59ea9379482a4ca6efa44b540a09742eaab0c1ed3e6b87508930f65 WatchSource:0}: Error finding container b68e18b7f59ea9379482a4ca6efa44b540a09742eaab0c1ed3e6b87508930f65: Status 404 returned error can't find the container with id b68e18b7f59ea9379482a4ca6efa44b540a09742eaab0c1ed3e6b87508930f65 Nov 21 10:02:34 crc kubenswrapper[4972]: I1121 10:02:34.499545 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 21 10:02:34 crc kubenswrapper[4972]: I1121 10:02:34.683781 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"73b2b355-c8e4-496c-8d3c-2927280fed38","Type":"ContainerStarted","Data":"b68e18b7f59ea9379482a4ca6efa44b540a09742eaab0c1ed3e6b87508930f65"} Nov 21 10:02:34 crc kubenswrapper[4972]: E1121 10:02:34.686949 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:3db27fbfc3fabe2f62c68ab1b9f24383a73554f2d6d1f178147088832619013a\\\"\"" pod="openstack/openstack-galera-0" podUID="8027f46e-1fe2-46ad-9226-11b2cc3f8da6" Nov 21 10:02:34 crc kubenswrapper[4972]: W1121 10:02:34.966032 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod481cc370_a05a_4516_99f2_f94a0056a70e.slice/crio-ec318e98104291b3eeb9a0c19c54c14718665dbf6f3b432f62e9ebbde1eaead3 WatchSource:0}: Error finding container ec318e98104291b3eeb9a0c19c54c14718665dbf6f3b432f62e9ebbde1eaead3: Status 404 returned error can't find the container with id ec318e98104291b3eeb9a0c19c54c14718665dbf6f3b432f62e9ebbde1eaead3 Nov 21 10:02:34 crc kubenswrapper[4972]: E1121 10:02:34.980157 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:a0e6062f505fbc848d62675995abc3806bc5c12530d3d41ed16066e07f71b2d3" Nov 21 10:02:34 crc kubenswrapper[4972]: E1121 10:02:34.980394 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:a0e6062f505fbc848d62675995abc3806bc5c12530d3d41ed16066e07f71b2d3,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kxthm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7c84d8598c-pw9pv_openstack(fa2af7d6-efa4-459b-9a15-bdb10778978b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 10:02:34 crc kubenswrapper[4972]: E1121 10:02:34.981891 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7c84d8598c-pw9pv" podUID="fa2af7d6-efa4-459b-9a15-bdb10778978b" Nov 21 10:02:35 crc kubenswrapper[4972]: E1121 10:02:35.001414 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:a0e6062f505fbc848d62675995abc3806bc5c12530d3d41ed16066e07f71b2d3" Nov 21 10:02:35 crc kubenswrapper[4972]: E1121 10:02:35.001577 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:a0e6062f505fbc848d62675995abc3806bc5c12530d3d41ed16066e07f71b2d3,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztcb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-85965d46c9-fbww9_openstack(43ca8eae-ab19-4dbc-80b2-036cc49daf01): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 10:02:35 crc kubenswrapper[4972]: E1121 10:02:35.002852 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-85965d46c9-fbww9" podUID="43ca8eae-ab19-4dbc-80b2-036cc49daf01" Nov 21 10:02:35 crc kubenswrapper[4972]: E1121 10:02:35.062904 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:a0e6062f505fbc848d62675995abc3806bc5c12530d3d41ed16066e07f71b2d3" Nov 21 10:02:35 crc kubenswrapper[4972]: E1121 10:02:35.063075 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:a0e6062f505fbc848d62675995abc3806bc5c12530d3d41ed16066e07f71b2d3,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4t2q8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-77966f9df5-gnnwr_openstack(9efc9658-6b41-4b20-9c12-375ca7133c85): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 10:02:35 crc kubenswrapper[4972]: E1121 10:02:35.064844 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-77966f9df5-gnnwr" podUID="9efc9658-6b41-4b20-9c12-375ca7133c85" Nov 21 10:02:35 crc kubenswrapper[4972]: E1121 10:02:35.077985 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:a0e6062f505fbc848d62675995abc3806bc5c12530d3d41ed16066e07f71b2d3" Nov 21 10:02:35 crc kubenswrapper[4972]: E1121 10:02:35.078123 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:a0e6062f505fbc848d62675995abc3806bc5c12530d3d41ed16066e07f71b2d3,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4shc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5d58585b49-w5l94_openstack(54911f74-850d-4a58-8bc2-56381021ce79): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 10:02:35 crc kubenswrapper[4972]: E1121 10:02:35.079524 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5d58585b49-w5l94" podUID="54911f74-850d-4a58-8bc2-56381021ce79" Nov 21 10:02:35 crc kubenswrapper[4972]: W1121 10:02:35.466760 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ab92bde_9b45_49ca_a6e9_43c8921b3002.slice/crio-2ed5f7e094f6a525bdafa2a2d6503702b9e17e50fc2f306eae145566d1142eea WatchSource:0}: Error finding container 2ed5f7e094f6a525bdafa2a2d6503702b9e17e50fc2f306eae145566d1142eea: Status 404 returned error can't find the container with id 2ed5f7e094f6a525bdafa2a2d6503702b9e17e50fc2f306eae145566d1142eea Nov 21 10:02:35 crc kubenswrapper[4972]: I1121 10:02:35.472186 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5q7hj"] Nov 21 10:02:35 crc kubenswrapper[4972]: I1121 10:02:35.684262 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4z7b5"] Nov 21 10:02:35 crc kubenswrapper[4972]: I1121 10:02:35.693719 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"481cc370-a05a-4516-99f2-f94a0056a70e","Type":"ContainerStarted","Data":"ec318e98104291b3eeb9a0c19c54c14718665dbf6f3b432f62e9ebbde1eaead3"} Nov 21 10:02:35 crc kubenswrapper[4972]: I1121 10:02:35.695845 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5q7hj" event={"ID":"9ab92bde-9b45-49ca-a6e9-43c8921b3002","Type":"ContainerStarted","Data":"2ed5f7e094f6a525bdafa2a2d6503702b9e17e50fc2f306eae145566d1142eea"} 
Nov 21 10:02:35 crc kubenswrapper[4972]: I1121 10:02:35.700159 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8ed54a06-08b9-41a2-92d9-a745631e053c","Type":"ContainerStarted","Data":"7d666958e7088543ec0d51c957e3e53a16be809f029d516ce5c2316c2c498ab9"} Nov 21 10:02:35 crc kubenswrapper[4972]: E1121 10:02:35.701953 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:a0e6062f505fbc848d62675995abc3806bc5c12530d3d41ed16066e07f71b2d3\\\"\"" pod="openstack/dnsmasq-dns-7c84d8598c-pw9pv" podUID="fa2af7d6-efa4-459b-9a15-bdb10778978b" Nov 21 10:02:35 crc kubenswrapper[4972]: E1121 10:02:35.702399 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:a0e6062f505fbc848d62675995abc3806bc5c12530d3d41ed16066e07f71b2d3\\\"\"" pod="openstack/dnsmasq-dns-85965d46c9-fbww9" podUID="43ca8eae-ab19-4dbc-80b2-036cc49daf01" Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.171087 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77966f9df5-gnnwr" Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.177759 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d58585b49-w5l94" Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.230028 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t2q8\" (UniqueName: \"kubernetes.io/projected/9efc9658-6b41-4b20-9c12-375ca7133c85-kube-api-access-4t2q8\") pod \"9efc9658-6b41-4b20-9c12-375ca7133c85\" (UID: \"9efc9658-6b41-4b20-9c12-375ca7133c85\") " Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.230080 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efc9658-6b41-4b20-9c12-375ca7133c85-config\") pod \"9efc9658-6b41-4b20-9c12-375ca7133c85\" (UID: \"9efc9658-6b41-4b20-9c12-375ca7133c85\") " Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.230291 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54911f74-850d-4a58-8bc2-56381021ce79-config\") pod \"54911f74-850d-4a58-8bc2-56381021ce79\" (UID: \"54911f74-850d-4a58-8bc2-56381021ce79\") " Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.230337 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9efc9658-6b41-4b20-9c12-375ca7133c85-dns-svc\") pod \"9efc9658-6b41-4b20-9c12-375ca7133c85\" (UID: \"9efc9658-6b41-4b20-9c12-375ca7133c85\") " Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.230364 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4shc\" (UniqueName: \"kubernetes.io/projected/54911f74-850d-4a58-8bc2-56381021ce79-kube-api-access-z4shc\") pod \"54911f74-850d-4a58-8bc2-56381021ce79\" (UID: \"54911f74-850d-4a58-8bc2-56381021ce79\") " Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.230768 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9efc9658-6b41-4b20-9c12-375ca7133c85-config" 
(OuterVolumeSpecName: "config") pod "9efc9658-6b41-4b20-9c12-375ca7133c85" (UID: "9efc9658-6b41-4b20-9c12-375ca7133c85"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.230995 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efc9658-6b41-4b20-9c12-375ca7133c85-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.232021 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54911f74-850d-4a58-8bc2-56381021ce79-config" (OuterVolumeSpecName: "config") pod "54911f74-850d-4a58-8bc2-56381021ce79" (UID: "54911f74-850d-4a58-8bc2-56381021ce79"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.232127 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9efc9658-6b41-4b20-9c12-375ca7133c85-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9efc9658-6b41-4b20-9c12-375ca7133c85" (UID: "9efc9658-6b41-4b20-9c12-375ca7133c85"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.236276 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9efc9658-6b41-4b20-9c12-375ca7133c85-kube-api-access-4t2q8" (OuterVolumeSpecName: "kube-api-access-4t2q8") pod "9efc9658-6b41-4b20-9c12-375ca7133c85" (UID: "9efc9658-6b41-4b20-9c12-375ca7133c85"). InnerVolumeSpecName "kube-api-access-4t2q8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.236469 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54911f74-850d-4a58-8bc2-56381021ce79-kube-api-access-z4shc" (OuterVolumeSpecName: "kube-api-access-z4shc") pod "54911f74-850d-4a58-8bc2-56381021ce79" (UID: "54911f74-850d-4a58-8bc2-56381021ce79"). InnerVolumeSpecName "kube-api-access-z4shc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.332420 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9efc9658-6b41-4b20-9c12-375ca7133c85-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.332450 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4shc\" (UniqueName: \"kubernetes.io/projected/54911f74-850d-4a58-8bc2-56381021ce79-kube-api-access-z4shc\") on node \"crc\" DevicePath \"\"" Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.332460 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t2q8\" (UniqueName: \"kubernetes.io/projected/9efc9658-6b41-4b20-9c12-375ca7133c85-kube-api-access-4t2q8\") on node \"crc\" DevicePath \"\"" Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.332471 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54911f74-850d-4a58-8bc2-56381021ce79-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.548085 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.659686 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.707121 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77966f9df5-gnnwr" event={"ID":"9efc9658-6b41-4b20-9c12-375ca7133c85","Type":"ContainerDied","Data":"ba43c26335bc4f999ff089101996892eca1f10d8dd029cc4762bdbc705b8387c"} Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.707145 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77966f9df5-gnnwr" Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.708262 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4z7b5" event={"ID":"5ea385c8-0af5-4759-acf1-ee6dee48e488","Type":"ContainerStarted","Data":"e2ddc3bc1d6938f973cb6fdd78406930d38a0abdf2eb4cb8cfe33cb6537c9980"} Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.709780 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"392b5094-f8ef-47b8-8dc5-9e1d2dbef612","Type":"ContainerStarted","Data":"a2591e9b6da9f52ba55bc3c5cc658736bb1d86090db5b0174bf09055e27e205d"} Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.710799 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d58585b49-w5l94" event={"ID":"54911f74-850d-4a58-8bc2-56381021ce79","Type":"ContainerDied","Data":"b4ef97df95de67af1ebed4bb7c33ecd3a1fa1d2266b897f440b8fc8b0593304c"} Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.710860 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d58585b49-w5l94" Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.814547 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d58585b49-w5l94"] Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.829952 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d58585b49-w5l94"] Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.846382 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77966f9df5-gnnwr"] Nov 21 10:02:36 crc kubenswrapper[4972]: I1121 10:02:36.853755 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77966f9df5-gnnwr"] Nov 21 10:02:37 crc kubenswrapper[4972]: I1121 10:02:37.724053 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"44805331-e34b-4455-a744-4c8fe27a1b9e","Type":"ContainerStarted","Data":"51437a607ea84e00f81befc0dea14277711d78f788df5d6084c2799f8be0ded0"} Nov 21 10:02:37 crc kubenswrapper[4972]: I1121 10:02:37.725899 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c9c438ca-0f93-434d-81ea-29ae82b217bf","Type":"ContainerStarted","Data":"d5a4b940543951812eed51459974e2b8ecc44e5e525e6a28b1a85924cc617f0f"} Nov 21 10:02:37 crc kubenswrapper[4972]: I1121 10:02:37.772540 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54911f74-850d-4a58-8bc2-56381021ce79" path="/var/lib/kubelet/pods/54911f74-850d-4a58-8bc2-56381021ce79/volumes" Nov 21 10:02:37 crc kubenswrapper[4972]: I1121 10:02:37.773265 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9efc9658-6b41-4b20-9c12-375ca7133c85" path="/var/lib/kubelet/pods/9efc9658-6b41-4b20-9c12-375ca7133c85/volumes" Nov 21 10:02:40 crc kubenswrapper[4972]: I1121 10:02:40.751507 4972 generic.go:334] "Generic (PLEG): container finished" podID="8ed54a06-08b9-41a2-92d9-a745631e053c" containerID="7d666958e7088543ec0d51c957e3e53a16be809f029d516ce5c2316c2c498ab9" exitCode=0 Nov 21 10:02:40 crc kubenswrapper[4972]: I1121 10:02:40.751596 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8ed54a06-08b9-41a2-92d9-a745631e053c","Type":"ContainerDied","Data":"7d666958e7088543ec0d51c957e3e53a16be809f029d516ce5c2316c2c498ab9"} Nov 21 10:02:41 crc kubenswrapper[4972]: I1121 10:02:41.769643 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8ed54a06-08b9-41a2-92d9-a745631e053c","Type":"ContainerStarted","Data":"a55fa1434e8f1c900b3bebdfafac5e43b0fb9083af7325dfb76ac2940d2d38b2"} Nov 21 10:02:41 crc kubenswrapper[4972]: I1121 10:02:41.794785 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=12.771930949 podStartE2EDuration="36.794765687s" podCreationTimestamp="2025-11-21 10:02:05 +0000 UTC" firstStartedPulling="2025-11-21 10:02:11.003387281 +0000 UTC m=+1276.112529779" lastFinishedPulling="2025-11-21 10:02:35.026222019 +0000 UTC m=+1300.135364517" observedRunningTime="2025-11-21 10:02:41.790229936 +0000 UTC m=+1306.899372434" watchObservedRunningTime="2025-11-21 10:02:41.794765687 +0000 UTC m=+1306.903908195" Nov 21 10:02:42 crc kubenswrapper[4972]: I1121 10:02:42.774257 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"73b2b355-c8e4-496c-8d3c-2927280fed38","Type":"ContainerStarted","Data":"7abb8b4c502b2cb32d88ebecf840b58358ed86ae8173e0a5c658fa64af90dfec"} Nov 21 10:02:42 crc kubenswrapper[4972]: I1121 10:02:42.774682 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 21 10:02:42 crc kubenswrapper[4972]: I1121 10:02:42.780275 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"44805331-e34b-4455-a744-4c8fe27a1b9e","Type":"ContainerStarted","Data":"7a52d5a35cd478c028a14322544e0cedd59d5fc637ca22a8848442a143badc31"} Nov 21 10:02:42 crc kubenswrapper[4972]: I1121 10:02:42.781916 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5q7hj" event={"ID":"9ab92bde-9b45-49ca-a6e9-43c8921b3002","Type":"ContainerStarted","Data":"6c8e922d3ed20c26120dd95de293105d1546c257b28a0dadd79eaa8178afa207"} Nov 21 10:02:42 crc kubenswrapper[4972]: I1121 10:02:42.782564 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-5q7hj" Nov 21 10:02:42 crc kubenswrapper[4972]: I1121 10:02:42.784546 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c9c438ca-0f93-434d-81ea-29ae82b217bf","Type":"ContainerStarted","Data":"b39e49481b4242d63f67036e50dc39fabe6cc04941ad1ad33655c4f1ec8f7121"} Nov 21 10:02:42 crc kubenswrapper[4972]: I1121 10:02:42.789437 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2bc44abc-7710-432b-b503-fd54e3afeede","Type":"ContainerStarted","Data":"695ff7e74d4466cf78e5259f9386de929cd91903ca07545dbfd50157060920ad"} Nov 21 10:02:42 crc kubenswrapper[4972]: I1121 10:02:42.797238 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"481cc370-a05a-4516-99f2-f94a0056a70e","Type":"ContainerStarted","Data":"b9e626a1ff970124d27149621ed868093f660ff094797fea99807c66272dc9d2"} Nov 21 10:02:42 crc kubenswrapper[4972]: I1121 10:02:42.798088 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 21 10:02:42 crc kubenswrapper[4972]: I1121 10:02:42.800105 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=27.750877606 podStartE2EDuration="34.800086095s" podCreationTimestamp="2025-11-21 10:02:08 +0000 UTC" firstStartedPulling="2025-11-21 10:02:34.061001832 +0000 UTC m=+1299.170144330" lastFinishedPulling="2025-11-21 10:02:41.110210311 +0000 UTC m=+1306.219352819" observedRunningTime="2025-11-21 10:02:42.790718395 +0000 UTC m=+1307.899860903" watchObservedRunningTime="2025-11-21 10:02:42.800086095 +0000 UTC m=+1307.909228593" Nov 21 10:02:42 crc kubenswrapper[4972]: I1121 10:02:42.800675 4972 generic.go:334] "Generic (PLEG): container finished" podID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerID="613470ef4119922307ef6d589fc1c4ded51811bb8032294cf93c302245167b27" exitCode=0 Nov 21 10:02:42 crc kubenswrapper[4972]: I1121 10:02:42.800705 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4z7b5" event={"ID":"5ea385c8-0af5-4759-acf1-ee6dee48e488","Type":"ContainerDied","Data":"613470ef4119922307ef6d589fc1c4ded51811bb8032294cf93c302245167b27"} Nov 21 10:02:42 crc kubenswrapper[4972]: I1121 10:02:42.839478 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-5q7hj" 
podStartSLOduration=24.973664139 podStartE2EDuration="30.839458014s" podCreationTimestamp="2025-11-21 10:02:12 +0000 UTC" firstStartedPulling="2025-11-21 10:02:35.468783007 +0000 UTC m=+1300.577925505" lastFinishedPulling="2025-11-21 10:02:41.334576892 +0000 UTC m=+1306.443719380" observedRunningTime="2025-11-21 10:02:42.80853924 +0000 UTC m=+1307.917681758" watchObservedRunningTime="2025-11-21 10:02:42.839458014 +0000 UTC m=+1307.948600542" Nov 21 10:02:42 crc kubenswrapper[4972]: I1121 10:02:42.879140 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=30.521781083 podStartE2EDuration="36.879122591s" podCreationTimestamp="2025-11-21 10:02:06 +0000 UTC" firstStartedPulling="2025-11-21 10:02:34.975646681 +0000 UTC m=+1300.084789189" lastFinishedPulling="2025-11-21 10:02:41.332988199 +0000 UTC m=+1306.442130697" observedRunningTime="2025-11-21 10:02:42.866521396 +0000 UTC m=+1307.975663894" watchObservedRunningTime="2025-11-21 10:02:42.879122591 +0000 UTC m=+1307.988265089" Nov 21 10:02:43 crc kubenswrapper[4972]: I1121 10:02:43.816710 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4z7b5" event={"ID":"5ea385c8-0af5-4759-acf1-ee6dee48e488","Type":"ContainerStarted","Data":"00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad"} Nov 21 10:02:43 crc kubenswrapper[4972]: I1121 10:02:43.817275 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:43 crc kubenswrapper[4972]: I1121 10:02:43.817288 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4z7b5" event={"ID":"5ea385c8-0af5-4759-acf1-ee6dee48e488","Type":"ContainerStarted","Data":"463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4"} Nov 21 10:02:43 crc kubenswrapper[4972]: I1121 10:02:43.817303 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:02:43 crc kubenswrapper[4972]: I1121 10:02:43.837542 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-4z7b5" podStartSLOduration=26.195147417 podStartE2EDuration="31.837527568s" podCreationTimestamp="2025-11-21 10:02:12 +0000 UTC" firstStartedPulling="2025-11-21 10:02:35.692271473 +0000 UTC m=+1300.801413971" lastFinishedPulling="2025-11-21 10:02:41.334651624 +0000 UTC m=+1306.443794122" observedRunningTime="2025-11-21 10:02:43.837157889 +0000 UTC m=+1308.946300387" watchObservedRunningTime="2025-11-21 10:02:43.837527568 +0000 UTC m=+1308.946670056" Nov 21 10:02:46 crc kubenswrapper[4972]: I1121 10:02:46.406790 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:46 crc kubenswrapper[4972]: I1121 10:02:46.407178 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:46 crc kubenswrapper[4972]: I1121 10:02:46.706977 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 21 10:02:46 crc kubenswrapper[4972]: I1121 10:02:46.839073 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c9c438ca-0f93-434d-81ea-29ae82b217bf","Type":"ContainerStarted","Data":"02017bfaf39dc941741a5f40c1bacc3f4996ecfcae24cb31b354109768689142"} Nov 21 10:02:46 crc kubenswrapper[4972]: I1121 10:02:46.841986 4972 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"44805331-e34b-4455-a744-4c8fe27a1b9e","Type":"ContainerStarted","Data":"0eb74e778f9330e95160a9380c73ad009f10cca6eb82633cae811a9a159e0d84"} Nov 21 10:02:46 crc kubenswrapper[4972]: I1121 10:02:46.866082 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=25.260270489 podStartE2EDuration="34.866064935s" podCreationTimestamp="2025-11-21 10:02:12 +0000 UTC" firstStartedPulling="2025-11-21 10:02:36.853543658 +0000 UTC m=+1301.962686156" lastFinishedPulling="2025-11-21 10:02:46.459338104 +0000 UTC m=+1311.568480602" observedRunningTime="2025-11-21 10:02:46.861413011 +0000 UTC m=+1311.970555509" watchObservedRunningTime="2025-11-21 10:02:46.866064935 +0000 UTC m=+1311.975207433" Nov 21 10:02:46 crc kubenswrapper[4972]: I1121 10:02:46.899600 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=23.307827918 podStartE2EDuration="32.899573199s" podCreationTimestamp="2025-11-21 10:02:14 +0000 UTC" firstStartedPulling="2025-11-21 10:02:36.853805355 +0000 UTC m=+1301.962947843" lastFinishedPulling="2025-11-21 10:02:46.445550616 +0000 UTC m=+1311.554693124" observedRunningTime="2025-11-21 10:02:46.893974289 +0000 UTC m=+1312.003116807" watchObservedRunningTime="2025-11-21 10:02:46.899573199 +0000 UTC m=+1312.008715697" Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.428608 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.476552 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c84d8598c-pw9pv"] Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.519890 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-667df85987-h65qf"] Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.521520 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-667df85987-h65qf" Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.533639 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.541318 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-667df85987-h65qf"] Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.552300 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd01140d-fceb-4804-8b0b-f7e91fe8791c-config\") pod \"dnsmasq-dns-667df85987-h65qf\" (UID: \"fd01140d-fceb-4804-8b0b-f7e91fe8791c\") " pod="openstack/dnsmasq-dns-667df85987-h65qf" Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.552383 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs7jw\" (UniqueName: \"kubernetes.io/projected/fd01140d-fceb-4804-8b0b-f7e91fe8791c-kube-api-access-hs7jw\") pod \"dnsmasq-dns-667df85987-h65qf\" (UID: \"fd01140d-fceb-4804-8b0b-f7e91fe8791c\") " pod="openstack/dnsmasq-dns-667df85987-h65qf" Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.552442 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd01140d-fceb-4804-8b0b-f7e91fe8791c-dns-svc\") pod \"dnsmasq-dns-667df85987-h65qf\" (UID: \"fd01140d-fceb-4804-8b0b-f7e91fe8791c\") " pod="openstack/dnsmasq-dns-667df85987-h65qf" Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.579356 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.617672 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.655708 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd01140d-fceb-4804-8b0b-f7e91fe8791c-config\") pod \"dnsmasq-dns-667df85987-h65qf\" (UID: \"fd01140d-fceb-4804-8b0b-f7e91fe8791c\") " pod="openstack/dnsmasq-dns-667df85987-h65qf" Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.655848 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs7jw\" (UniqueName: \"kubernetes.io/projected/fd01140d-fceb-4804-8b0b-f7e91fe8791c-kube-api-access-hs7jw\") pod \"dnsmasq-dns-667df85987-h65qf\" (UID: \"fd01140d-fceb-4804-8b0b-f7e91fe8791c\") " pod="openstack/dnsmasq-dns-667df85987-h65qf" Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.655951 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd01140d-fceb-4804-8b0b-f7e91fe8791c-dns-svc\") pod \"dnsmasq-dns-667df85987-h65qf\" (UID: \"fd01140d-fceb-4804-8b0b-f7e91fe8791c\") " pod="openstack/dnsmasq-dns-667df85987-h65qf" Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.656910 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd01140d-fceb-4804-8b0b-f7e91fe8791c-dns-svc\") pod \"dnsmasq-dns-667df85987-h65qf\" (UID: \"fd01140d-fceb-4804-8b0b-f7e91fe8791c\") " pod="openstack/dnsmasq-dns-667df85987-h65qf" Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.657575 4972 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd01140d-fceb-4804-8b0b-f7e91fe8791c-config\") pod \"dnsmasq-dns-667df85987-h65qf\" (UID: \"fd01140d-fceb-4804-8b0b-f7e91fe8791c\") " pod="openstack/dnsmasq-dns-667df85987-h65qf" Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.677042 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs7jw\" (UniqueName: \"kubernetes.io/projected/fd01140d-fceb-4804-8b0b-f7e91fe8791c-kube-api-access-hs7jw\") pod \"dnsmasq-dns-667df85987-h65qf\" (UID: \"fd01140d-fceb-4804-8b0b-f7e91fe8791c\") " pod="openstack/dnsmasq-dns-667df85987-h65qf" Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.851513 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.869153 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-667df85987-h65qf" Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.877438 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c84d8598c-pw9pv" event={"ID":"fa2af7d6-efa4-459b-9a15-bdb10778978b","Type":"ContainerDied","Data":"1561b8e84722b64467618073d6dea9971f1ad79f4d5a92a642e1d4326557b07e"} Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.877492 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1561b8e84722b64467618073d6dea9971f1ad79f4d5a92a642e1d4326557b07e" Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.880111 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8027f46e-1fe2-46ad-9226-11b2cc3f8da6","Type":"ContainerStarted","Data":"711835bfec997cc5c4cd5bb8aa782593a04256cba1b1b130be09cf0a32345a38"} Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.881320 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.919854 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.942521 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 21 10:02:48 crc kubenswrapper[4972]: I1121 10:02:48.988327 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c84d8598c-pw9pv" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.071163 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxthm\" (UniqueName: \"kubernetes.io/projected/fa2af7d6-efa4-459b-9a15-bdb10778978b-kube-api-access-kxthm\") pod \"fa2af7d6-efa4-459b-9a15-bdb10778978b\" (UID: \"fa2af7d6-efa4-459b-9a15-bdb10778978b\") " Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.071248 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa2af7d6-efa4-459b-9a15-bdb10778978b-dns-svc\") pod \"fa2af7d6-efa4-459b-9a15-bdb10778978b\" (UID: \"fa2af7d6-efa4-459b-9a15-bdb10778978b\") " Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.071290 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa2af7d6-efa4-459b-9a15-bdb10778978b-config\") pod \"fa2af7d6-efa4-459b-9a15-bdb10778978b\" (UID: \"fa2af7d6-efa4-459b-9a15-bdb10778978b\") " Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.072361 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa2af7d6-efa4-459b-9a15-bdb10778978b-config" (OuterVolumeSpecName: "config") pod "fa2af7d6-efa4-459b-9a15-bdb10778978b" (UID: "fa2af7d6-efa4-459b-9a15-bdb10778978b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.074705 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa2af7d6-efa4-459b-9a15-bdb10778978b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fa2af7d6-efa4-459b-9a15-bdb10778978b" (UID: "fa2af7d6-efa4-459b-9a15-bdb10778978b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.105597 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa2af7d6-efa4-459b-9a15-bdb10778978b-kube-api-access-kxthm" (OuterVolumeSpecName: "kube-api-access-kxthm") pod "fa2af7d6-efa4-459b-9a15-bdb10778978b" (UID: "fa2af7d6-efa4-459b-9a15-bdb10778978b"). InnerVolumeSpecName "kube-api-access-kxthm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.138347 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85965d46c9-fbww9"] Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.172727 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxthm\" (UniqueName: \"kubernetes.io/projected/fa2af7d6-efa4-459b-9a15-bdb10778978b-kube-api-access-kxthm\") on node \"crc\" DevicePath \"\"" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.172781 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa2af7d6-efa4-459b-9a15-bdb10778978b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.172794 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa2af7d6-efa4-459b-9a15-bdb10778978b-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.197161 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f55886f45-n6vkh"] Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.198806 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.202258 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.213924 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f55886f45-n6vkh"] Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.251923 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-psvpd"] Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.254516 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.256791 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.265536 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-psvpd"] Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.378870 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe4c23d4-4b37-45b9-873f-d3f153b492fd-config\") pod \"dnsmasq-dns-7f55886f45-n6vkh\" (UID: \"fe4c23d4-4b37-45b9-873f-d3f153b492fd\") " pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.378975 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bb7ffc3-501c-420f-834c-0509b4a509eb-config\") pod \"ovn-controller-metrics-psvpd\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.378996 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe4c23d4-4b37-45b9-873f-d3f153b492fd-dns-svc\") pod \"dnsmasq-dns-7f55886f45-n6vkh\" (UID: \"fe4c23d4-4b37-45b9-873f-d3f153b492fd\") " pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.379025 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49rr9\" (UniqueName: \"kubernetes.io/projected/fe4c23d4-4b37-45b9-873f-d3f153b492fd-kube-api-access-49rr9\") pod \"dnsmasq-dns-7f55886f45-n6vkh\" (UID: \"fe4c23d4-4b37-45b9-873f-d3f153b492fd\") " pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.379061 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2bb7ffc3-501c-420f-834c-0509b4a509eb-ovs-rundir\") pod \"ovn-controller-metrics-psvpd\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.379096 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb7ffc3-501c-420f-834c-0509b4a509eb-combined-ca-bundle\") pod \"ovn-controller-metrics-psvpd\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.379112 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2bb7ffc3-501c-420f-834c-0509b4a509eb-ovn-rundir\") pod \"ovn-controller-metrics-psvpd\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.379152 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe4c23d4-4b37-45b9-873f-d3f153b492fd-ovsdbserver-sb\") pod \"dnsmasq-dns-7f55886f45-n6vkh\" (UID: 
\"fe4c23d4-4b37-45b9-873f-d3f153b492fd\") " pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.379197 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bb7ffc3-501c-420f-834c-0509b4a509eb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-psvpd\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.379220 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj4pg\" (UniqueName: \"kubernetes.io/projected/2bb7ffc3-501c-420f-834c-0509b4a509eb-kube-api-access-bj4pg\") pod \"ovn-controller-metrics-psvpd\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.400617 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-667df85987-h65qf"] Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.420925 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bdfc8db59-mmcsb"] Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.433129 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.435998 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.441734 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdfc8db59-mmcsb"] Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.465922 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-667df85987-h65qf"] Nov 21 10:02:49 crc kubenswrapper[4972]: W1121 10:02:49.468116 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd01140d_fceb_4804_8b0b_f7e91fe8791c.slice/crio-5b1cbcb4851c66fb43a56a92b9890e854a763ad591597ae91b79317eea9e0eef WatchSource:0}: Error finding container 5b1cbcb4851c66fb43a56a92b9890e854a763ad591597ae91b79317eea9e0eef: Status 404 returned error can't find the container with id 5b1cbcb4851c66fb43a56a92b9890e854a763ad591597ae91b79317eea9e0eef Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.480679 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe4c23d4-4b37-45b9-873f-d3f153b492fd-dns-svc\") pod \"dnsmasq-dns-7f55886f45-n6vkh\" (UID: \"fe4c23d4-4b37-45b9-873f-d3f153b492fd\") " pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.480771 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49rr9\" (UniqueName: \"kubernetes.io/projected/fe4c23d4-4b37-45b9-873f-d3f153b492fd-kube-api-access-49rr9\") pod \"dnsmasq-dns-7f55886f45-n6vkh\" (UID: \"fe4c23d4-4b37-45b9-873f-d3f153b492fd\") " pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.480847 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2bb7ffc3-501c-420f-834c-0509b4a509eb-ovs-rundir\") pod \"ovn-controller-metrics-psvpd\" (UID: 
\"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.480921 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb7ffc3-501c-420f-834c-0509b4a509eb-combined-ca-bundle\") pod \"ovn-controller-metrics-psvpd\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.480947 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2bb7ffc3-501c-420f-834c-0509b4a509eb-ovn-rundir\") pod \"ovn-controller-metrics-psvpd\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.481009 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe4c23d4-4b37-45b9-873f-d3f153b492fd-ovsdbserver-sb\") pod \"dnsmasq-dns-7f55886f45-n6vkh\" (UID: \"fe4c23d4-4b37-45b9-873f-d3f153b492fd\") " pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.481082 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bb7ffc3-501c-420f-834c-0509b4a509eb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-psvpd\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.481111 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj4pg\" (UniqueName: \"kubernetes.io/projected/2bb7ffc3-501c-420f-834c-0509b4a509eb-kube-api-access-bj4pg\") pod \"ovn-controller-metrics-psvpd\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.481173 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe4c23d4-4b37-45b9-873f-d3f153b492fd-config\") pod \"dnsmasq-dns-7f55886f45-n6vkh\" (UID: \"fe4c23d4-4b37-45b9-873f-d3f153b492fd\") " pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.481267 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bb7ffc3-501c-420f-834c-0509b4a509eb-config\") pod \"ovn-controller-metrics-psvpd\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.481322 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2bb7ffc3-501c-420f-834c-0509b4a509eb-ovs-rundir\") pod \"ovn-controller-metrics-psvpd\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.482311 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe4c23d4-4b37-45b9-873f-d3f153b492fd-config\") pod \"dnsmasq-dns-7f55886f45-n6vkh\" (UID: \"fe4c23d4-4b37-45b9-873f-d3f153b492fd\") " pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" Nov 21 
10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.482514 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2bb7ffc3-501c-420f-834c-0509b4a509eb-ovn-rundir\") pod \"ovn-controller-metrics-psvpd\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.482741 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe4c23d4-4b37-45b9-873f-d3f153b492fd-dns-svc\") pod \"dnsmasq-dns-7f55886f45-n6vkh\" (UID: \"fe4c23d4-4b37-45b9-873f-d3f153b492fd\") " pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.486104 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bb7ffc3-501c-420f-834c-0509b4a509eb-config\") pod \"ovn-controller-metrics-psvpd\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.486253 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe4c23d4-4b37-45b9-873f-d3f153b492fd-ovsdbserver-sb\") pod \"dnsmasq-dns-7f55886f45-n6vkh\" (UID: \"fe4c23d4-4b37-45b9-873f-d3f153b492fd\") " pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.487536 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb7ffc3-501c-420f-834c-0509b4a509eb-combined-ca-bundle\") pod \"ovn-controller-metrics-psvpd\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.495517 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bb7ffc3-501c-420f-834c-0509b4a509eb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-psvpd\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.496628 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49rr9\" (UniqueName: \"kubernetes.io/projected/fe4c23d4-4b37-45b9-873f-d3f153b492fd-kube-api-access-49rr9\") pod \"dnsmasq-dns-7f55886f45-n6vkh\" (UID: \"fe4c23d4-4b37-45b9-873f-d3f153b492fd\") " pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.497340 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj4pg\" (UniqueName: \"kubernetes.io/projected/2bb7ffc3-501c-420f-834c-0509b4a509eb-kube-api-access-bj4pg\") pod \"ovn-controller-metrics-psvpd\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.521601 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.559443 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.566612 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.568770 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-pfg2m" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.569461 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.569607 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.569625 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.570684 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85965d46c9-fbww9" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.579620 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.588222 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-config\") pod \"dnsmasq-dns-7bdfc8db59-mmcsb\" (UID: \"093f696e-dee6-47dd-ba6f-07e65f594e60\") " pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.588265 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-dns-svc\") pod \"dnsmasq-dns-7bdfc8db59-mmcsb\" (UID: \"093f696e-dee6-47dd-ba6f-07e65f594e60\") " pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.588305 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdfc8db59-mmcsb\" (UID: \"093f696e-dee6-47dd-ba6f-07e65f594e60\") " pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.588376 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdfc8db59-mmcsb\" (UID: \"093f696e-dee6-47dd-ba6f-07e65f594e60\") " pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.588417 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5hdq\" (UniqueName: \"kubernetes.io/projected/093f696e-dee6-47dd-ba6f-07e65f594e60-kube-api-access-t5hdq\") pod \"dnsmasq-dns-7bdfc8db59-mmcsb\" (UID: \"093f696e-dee6-47dd-ba6f-07e65f594e60\") " pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.595999 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.601609 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.645014 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.689572 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43ca8eae-ab19-4dbc-80b2-036cc49daf01-dns-svc\") pod \"43ca8eae-ab19-4dbc-80b2-036cc49daf01\" (UID: \"43ca8eae-ab19-4dbc-80b2-036cc49daf01\") " Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.689736 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztcb8\" (UniqueName: \"kubernetes.io/projected/43ca8eae-ab19-4dbc-80b2-036cc49daf01-kube-api-access-ztcb8\") pod \"43ca8eae-ab19-4dbc-80b2-036cc49daf01\" (UID: \"43ca8eae-ab19-4dbc-80b2-036cc49daf01\") " Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.689903 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ca8eae-ab19-4dbc-80b2-036cc49daf01-config\") pod \"43ca8eae-ab19-4dbc-80b2-036cc49daf01\" (UID: \"43ca8eae-ab19-4dbc-80b2-036cc49daf01\") " Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.690111 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43ca8eae-ab19-4dbc-80b2-036cc49daf01-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "43ca8eae-ab19-4dbc-80b2-036cc49daf01" (UID: "43ca8eae-ab19-4dbc-80b2-036cc49daf01"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.690330 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43ca8eae-ab19-4dbc-80b2-036cc49daf01-config" (OuterVolumeSpecName: "config") pod "43ca8eae-ab19-4dbc-80b2-036cc49daf01" (UID: "43ca8eae-ab19-4dbc-80b2-036cc49daf01"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.690594 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5hdq\" (UniqueName: \"kubernetes.io/projected/093f696e-dee6-47dd-ba6f-07e65f594e60-kube-api-access-t5hdq\") pod \"dnsmasq-dns-7bdfc8db59-mmcsb\" (UID: \"093f696e-dee6-47dd-ba6f-07e65f594e60\") " pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.690645 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/31e140ab-a53a-4af2-864f-4c399d44f217-lock\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.690679 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhght\" (UniqueName: \"kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-kube-api-access-bhght\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.690713 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-config\") pod \"dnsmasq-dns-7bdfc8db59-mmcsb\" (UID: \"093f696e-dee6-47dd-ba6f-07e65f594e60\") " pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.690733 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-dns-svc\") pod \"dnsmasq-dns-7bdfc8db59-mmcsb\" (UID: \"093f696e-dee6-47dd-ba6f-07e65f594e60\") " pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.690761 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdfc8db59-mmcsb\" (UID: \"093f696e-dee6-47dd-ba6f-07e65f594e60\") " pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.690781 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.690798 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.690863 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/31e140ab-a53a-4af2-864f-4c399d44f217-cache\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.690904 4972 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdfc8db59-mmcsb\" (UID: \"093f696e-dee6-47dd-ba6f-07e65f594e60\") " pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.690957 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ca8eae-ab19-4dbc-80b2-036cc49daf01-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.690971 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43ca8eae-ab19-4dbc-80b2-036cc49daf01-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.692679 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-dns-svc\") pod \"dnsmasq-dns-7bdfc8db59-mmcsb\" (UID: \"093f696e-dee6-47dd-ba6f-07e65f594e60\") " pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.692751 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdfc8db59-mmcsb\" (UID: \"093f696e-dee6-47dd-ba6f-07e65f594e60\") " pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.692947 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdfc8db59-mmcsb\" (UID: \"093f696e-dee6-47dd-ba6f-07e65f594e60\") " pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.693286 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-config\") pod \"dnsmasq-dns-7bdfc8db59-mmcsb\" (UID: \"093f696e-dee6-47dd-ba6f-07e65f594e60\") " pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.706498 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43ca8eae-ab19-4dbc-80b2-036cc49daf01-kube-api-access-ztcb8" (OuterVolumeSpecName: "kube-api-access-ztcb8") pod "43ca8eae-ab19-4dbc-80b2-036cc49daf01" (UID: "43ca8eae-ab19-4dbc-80b2-036cc49daf01"). InnerVolumeSpecName "kube-api-access-ztcb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.710159 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5hdq\" (UniqueName: \"kubernetes.io/projected/093f696e-dee6-47dd-ba6f-07e65f594e60-kube-api-access-t5hdq\") pod \"dnsmasq-dns-7bdfc8db59-mmcsb\" (UID: \"093f696e-dee6-47dd-ba6f-07e65f594e60\") " pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.761344 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.792282 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.792332 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.792382 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/31e140ab-a53a-4af2-864f-4c399d44f217-cache\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.792461 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/31e140ab-a53a-4af2-864f-4c399d44f217-lock\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.792498 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhght\" (UniqueName: \"kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-kube-api-access-bhght\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:02:49 crc kubenswrapper[4972]: E1121 10:02:49.792505 4972 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 10:02:49 crc kubenswrapper[4972]: E1121 10:02:49.792527 4972 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.792553 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztcb8\" (UniqueName: \"kubernetes.io/projected/43ca8eae-ab19-4dbc-80b2-036cc49daf01-kube-api-access-ztcb8\") on node \"crc\" DevicePath \"\"" Nov 21 10:02:49 crc kubenswrapper[4972]: E1121 10:02:49.792572 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift podName:31e140ab-a53a-4af2-864f-4c399d44f217 nodeName:}" failed. No retries permitted until 2025-11-21 10:02:50.292557462 +0000 UTC m=+1315.401699950 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift") pod "swift-storage-0" (UID: "31e140ab-a53a-4af2-864f-4c399d44f217") : configmap "swift-ring-files" not found Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.792652 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.792994 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/31e140ab-a53a-4af2-864f-4c399d44f217-cache\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.793097 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/31e140ab-a53a-4af2-864f-4c399d44f217-lock\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.816984 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhght\" (UniqueName: \"kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-kube-api-access-bhght\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.824166 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.931269 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667df85987-h65qf" event={"ID":"fd01140d-fceb-4804-8b0b-f7e91fe8791c","Type":"ContainerStarted","Data":"5b1cbcb4851c66fb43a56a92b9890e854a763ad591597ae91b79317eea9e0eef"} Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.934521 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-psvpd"] Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.938495 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c84d8598c-pw9pv" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.938875 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85965d46c9-fbww9" event={"ID":"43ca8eae-ab19-4dbc-80b2-036cc49daf01","Type":"ContainerDied","Data":"15d506f6db2de65bc0d1304310975dd86825b2bc9701fe7969aefaf3267f0fd8"} Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.938932 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85965d46c9-fbww9" Nov 21 10:02:49 crc kubenswrapper[4972]: I1121 10:02:49.987394 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.050121 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f55886f45-n6vkh"] Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.117653 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85965d46c9-fbww9"] Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.133772 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85965d46c9-fbww9"] Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.150097 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c84d8598c-pw9pv"] Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.162057 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c84d8598c-pw9pv"] Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.163511 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-zr6wn"] Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.165369 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.169704 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-zr6wn"] Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.170981 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.171305 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.173034 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.292003 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdfc8db59-mmcsb"] Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.298050 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.299616 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.302274 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-wx8cd" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.302489 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.302653 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.303561 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 21 10:02:50 crc kubenswrapper[4972]: W1121 10:02:50.313445 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod093f696e_dee6_47dd_ba6f_07e65f594e60.slice/crio-dd55be8c8459db780abf6ef086a079d584b7b50491cd949a3712a28d97665de4 WatchSource:0}: Error finding container dd55be8c8459db780abf6ef086a079d584b7b50491cd949a3712a28d97665de4: Status 404 returned error can't find the container with id dd55be8c8459db780abf6ef086a079d584b7b50491cd949a3712a28d97665de4 Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.320105 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.323128 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/795431b0-73d4-4c09-95ec-59c039a001d4-dispersionconf\") pod \"swift-ring-rebalance-zr6wn\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.323193 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/795431b0-73d4-4c09-95ec-59c039a001d4-scripts\") pod \"swift-ring-rebalance-zr6wn\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.323211 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/795431b0-73d4-4c09-95ec-59c039a001d4-swiftconf\") pod \"swift-ring-rebalance-zr6wn\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.323258 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/795431b0-73d4-4c09-95ec-59c039a001d4-ring-data-devices\") pod \"swift-ring-rebalance-zr6wn\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.323301 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.323363 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/795431b0-73d4-4c09-95ec-59c039a001d4-combined-ca-bundle\") pod \"swift-ring-rebalance-zr6wn\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.323384 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzgjw\" (UniqueName: \"kubernetes.io/projected/795431b0-73d4-4c09-95ec-59c039a001d4-kube-api-access-kzgjw\") pod \"swift-ring-rebalance-zr6wn\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.323418 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/795431b0-73d4-4c09-95ec-59c039a001d4-etc-swift\") pod \"swift-ring-rebalance-zr6wn\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: E1121 10:02:50.323537 4972 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 10:02:50 crc kubenswrapper[4972]: E1121 10:02:50.323550 4972 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 10:02:50 crc kubenswrapper[4972]: E1121 10:02:50.323579 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift podName:31e140ab-a53a-4af2-864f-4c399d44f217 nodeName:}" failed. No retries permitted until 2025-11-21 10:02:51.323567207 +0000 UTC m=+1316.432709695 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift") pod "swift-storage-0" (UID: "31e140ab-a53a-4af2-864f-4c399d44f217") : configmap "swift-ring-files" not found Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.424465 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf3edebd-74ab-4b7d-8706-2eda69d91aea-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.424514 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/795431b0-73d4-4c09-95ec-59c039a001d4-scripts\") pod \"swift-ring-rebalance-zr6wn\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.424530 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/795431b0-73d4-4c09-95ec-59c039a001d4-swiftconf\") pod \"swift-ring-rebalance-zr6wn\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.424562 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/795431b0-73d4-4c09-95ec-59c039a001d4-ring-data-devices\") pod \"swift-ring-rebalance-zr6wn\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.424608 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cf3edebd-74ab-4b7d-8706-2eda69d91aea-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.424673 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/795431b0-73d4-4c09-95ec-59c039a001d4-combined-ca-bundle\") pod \"swift-ring-rebalance-zr6wn\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.424699 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzgjw\" (UniqueName: \"kubernetes.io/projected/795431b0-73d4-4c09-95ec-59c039a001d4-kube-api-access-kzgjw\") pod \"swift-ring-rebalance-zr6wn\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.424722 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf3edebd-74ab-4b7d-8706-2eda69d91aea-config\") pod \"ovn-northd-0\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.424754 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k5pg\" (UniqueName: 
\"kubernetes.io/projected/cf3edebd-74ab-4b7d-8706-2eda69d91aea-kube-api-access-7k5pg\") pod \"ovn-northd-0\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.424780 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/795431b0-73d4-4c09-95ec-59c039a001d4-etc-swift\") pod \"swift-ring-rebalance-zr6wn\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.424807 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf3edebd-74ab-4b7d-8706-2eda69d91aea-scripts\") pod \"ovn-northd-0\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.424943 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf3edebd-74ab-4b7d-8706-2eda69d91aea-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.424972 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/795431b0-73d4-4c09-95ec-59c039a001d4-dispersionconf\") pod \"swift-ring-rebalance-zr6wn\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.425008 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf3edebd-74ab-4b7d-8706-2eda69d91aea-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.425229 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/795431b0-73d4-4c09-95ec-59c039a001d4-scripts\") pod \"swift-ring-rebalance-zr6wn\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.425663 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/795431b0-73d4-4c09-95ec-59c039a001d4-ring-data-devices\") pod \"swift-ring-rebalance-zr6wn\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.426339 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/795431b0-73d4-4c09-95ec-59c039a001d4-etc-swift\") pod \"swift-ring-rebalance-zr6wn\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.429157 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/795431b0-73d4-4c09-95ec-59c039a001d4-swiftconf\") pod \"swift-ring-rebalance-zr6wn\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " pod="openstack/swift-ring-rebalance-zr6wn" Nov 
21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.431234 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/795431b0-73d4-4c09-95ec-59c039a001d4-dispersionconf\") pod \"swift-ring-rebalance-zr6wn\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.431772 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/795431b0-73d4-4c09-95ec-59c039a001d4-combined-ca-bundle\") pod \"swift-ring-rebalance-zr6wn\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.443367 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzgjw\" (UniqueName: \"kubernetes.io/projected/795431b0-73d4-4c09-95ec-59c039a001d4-kube-api-access-kzgjw\") pod \"swift-ring-rebalance-zr6wn\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.493813 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.526108 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf3edebd-74ab-4b7d-8706-2eda69d91aea-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.526181 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cf3edebd-74ab-4b7d-8706-2eda69d91aea-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.526272 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf3edebd-74ab-4b7d-8706-2eda69d91aea-config\") pod \"ovn-northd-0\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.526299 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k5pg\" (UniqueName: \"kubernetes.io/projected/cf3edebd-74ab-4b7d-8706-2eda69d91aea-kube-api-access-7k5pg\") pod \"ovn-northd-0\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.526325 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf3edebd-74ab-4b7d-8706-2eda69d91aea-scripts\") pod \"ovn-northd-0\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.526345 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf3edebd-74ab-4b7d-8706-2eda69d91aea-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.526368 4972 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf3edebd-74ab-4b7d-8706-2eda69d91aea-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.527621 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf3edebd-74ab-4b7d-8706-2eda69d91aea-config\") pod \"ovn-northd-0\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.527725 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cf3edebd-74ab-4b7d-8706-2eda69d91aea-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.528168 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf3edebd-74ab-4b7d-8706-2eda69d91aea-scripts\") pod \"ovn-northd-0\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.536620 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf3edebd-74ab-4b7d-8706-2eda69d91aea-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.536927 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf3edebd-74ab-4b7d-8706-2eda69d91aea-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.537633 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf3edebd-74ab-4b7d-8706-2eda69d91aea-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.545377 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k5pg\" (UniqueName: \"kubernetes.io/projected/cf3edebd-74ab-4b7d-8706-2eda69d91aea-kube-api-access-7k5pg\") pod \"ovn-northd-0\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.647009 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.939312 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-zr6wn"] Nov 21 10:02:50 crc kubenswrapper[4972]: W1121 10:02:50.948624 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod795431b0_73d4_4c09_95ec_59c039a001d4.slice/crio-babee6a6f39febcd37d1afc162e951e9cd44a4409126c0931c62dcab5344f933 WatchSource:0}: Error finding container babee6a6f39febcd37d1afc162e951e9cd44a4409126c0931c62dcab5344f933: Status 404 returned error can't find the container with id babee6a6f39febcd37d1afc162e951e9cd44a4409126c0931c62dcab5344f933 Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.951581 4972 generic.go:334] "Generic (PLEG): container finished" podID="fd01140d-fceb-4804-8b0b-f7e91fe8791c" containerID="22fa1942a125ddc597b536b9c82be5fe0f710dc21f8a6b111bbc7bc51708c4fc" exitCode=0 Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.951652 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667df85987-h65qf" event={"ID":"fd01140d-fceb-4804-8b0b-f7e91fe8791c","Type":"ContainerDied","Data":"22fa1942a125ddc597b536b9c82be5fe0f710dc21f8a6b111bbc7bc51708c4fc"} Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.975610 4972 generic.go:334] "Generic (PLEG): container finished" podID="fe4c23d4-4b37-45b9-873f-d3f153b492fd" containerID="94e74984013c059b986a855521c25f5d946c709ddaf0c92ab76ba93775a1bb1f" exitCode=0 Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.975696 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" event={"ID":"fe4c23d4-4b37-45b9-873f-d3f153b492fd","Type":"ContainerDied","Data":"94e74984013c059b986a855521c25f5d946c709ddaf0c92ab76ba93775a1bb1f"} Nov 21 10:02:50 crc kubenswrapper[4972]: I1121 10:02:50.975727 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" event={"ID":"fe4c23d4-4b37-45b9-873f-d3f153b492fd","Type":"ContainerStarted","Data":"3f418c5d7438c783c3e28ef6c519cb2cfa16977842a8562aebb2ba59abe8d9c2"} Nov 21 10:02:51 crc kubenswrapper[4972]: I1121 10:02:51.004190 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-psvpd" event={"ID":"2bb7ffc3-501c-420f-834c-0509b4a509eb","Type":"ContainerStarted","Data":"4789958ba38111e46a66c03e780a501dd267a5f7418fac71da4781d35d79c30d"} Nov 21 10:02:51 crc kubenswrapper[4972]: I1121 10:02:51.004261 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-psvpd" event={"ID":"2bb7ffc3-501c-420f-834c-0509b4a509eb","Type":"ContainerStarted","Data":"cb48925dc46f82ba1a9ca834aa51628abaf1d740a523d607f82307edbfb32b1e"} Nov 21 10:02:51 crc kubenswrapper[4972]: I1121 10:02:51.027679 4972 generic.go:334] "Generic (PLEG): container finished" podID="093f696e-dee6-47dd-ba6f-07e65f594e60" containerID="2987bfc8c3433f1d4cc0836276916780ba23045d1c20f275a6b22907e06e8fe3" exitCode=0 Nov 21 10:02:51 crc kubenswrapper[4972]: I1121 10:02:51.028944 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" event={"ID":"093f696e-dee6-47dd-ba6f-07e65f594e60","Type":"ContainerDied","Data":"2987bfc8c3433f1d4cc0836276916780ba23045d1c20f275a6b22907e06e8fe3"} Nov 21 10:02:51 crc kubenswrapper[4972]: I1121 10:02:51.029014 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" event={"ID":"093f696e-dee6-47dd-ba6f-07e65f594e60","Type":"ContainerStarted","Data":"dd55be8c8459db780abf6ef086a079d584b7b50491cd949a3712a28d97665de4"} Nov 21 10:02:51 crc kubenswrapper[4972]: I1121 10:02:51.127706 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-psvpd" podStartSLOduration=2.127681511 podStartE2EDuration="2.127681511s" podCreationTimestamp="2025-11-21 10:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:02:51.082430945 +0000 UTC m=+1316.191573443" watchObservedRunningTime="2025-11-21 10:02:51.127681511 +0000 UTC m=+1316.236824009" Nov 21 10:02:51 crc kubenswrapper[4972]: I1121 10:02:51.191565 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 21 10:02:51 crc kubenswrapper[4972]: W1121 10:02:51.204998 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf3edebd_74ab_4b7d_8706_2eda69d91aea.slice/crio-3c918d0cee85f88f64195c902f07128cfef54a4b6d436a5109a80299cb794599 WatchSource:0}: Error finding container 3c918d0cee85f88f64195c902f07128cfef54a4b6d436a5109a80299cb794599: Status 404 returned error can't find the container with id 3c918d0cee85f88f64195c902f07128cfef54a4b6d436a5109a80299cb794599 Nov 21 10:02:51 crc kubenswrapper[4972]: I1121 10:02:51.350306 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:02:51 crc kubenswrapper[4972]: E1121 10:02:51.350524 4972 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 10:02:51 crc kubenswrapper[4972]: E1121 10:02:51.350550 4972 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 10:02:51 crc kubenswrapper[4972]: E1121 10:02:51.350609 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift podName:31e140ab-a53a-4af2-864f-4c399d44f217 nodeName:}" failed. No retries permitted until 2025-11-21 10:02:53.350591903 +0000 UTC m=+1318.459734401 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift") pod "swift-storage-0" (UID: "31e140ab-a53a-4af2-864f-4c399d44f217") : configmap "swift-ring-files" not found Nov 21 10:02:51 crc kubenswrapper[4972]: I1121 10:02:51.394052 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-667df85987-h65qf" Nov 21 10:02:51 crc kubenswrapper[4972]: I1121 10:02:51.451989 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs7jw\" (UniqueName: \"kubernetes.io/projected/fd01140d-fceb-4804-8b0b-f7e91fe8791c-kube-api-access-hs7jw\") pod \"fd01140d-fceb-4804-8b0b-f7e91fe8791c\" (UID: \"fd01140d-fceb-4804-8b0b-f7e91fe8791c\") " Nov 21 10:02:51 crc kubenswrapper[4972]: I1121 10:02:51.452053 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd01140d-fceb-4804-8b0b-f7e91fe8791c-config\") pod \"fd01140d-fceb-4804-8b0b-f7e91fe8791c\" (UID: \"fd01140d-fceb-4804-8b0b-f7e91fe8791c\") " Nov 21 10:02:51 crc kubenswrapper[4972]: I1121 10:02:51.452171 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd01140d-fceb-4804-8b0b-f7e91fe8791c-dns-svc\") pod \"fd01140d-fceb-4804-8b0b-f7e91fe8791c\" (UID: \"fd01140d-fceb-4804-8b0b-f7e91fe8791c\") " Nov 21 10:02:51 crc kubenswrapper[4972]: I1121 10:02:51.457792 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd01140d-fceb-4804-8b0b-f7e91fe8791c-kube-api-access-hs7jw" (OuterVolumeSpecName: "kube-api-access-hs7jw") pod "fd01140d-fceb-4804-8b0b-f7e91fe8791c" (UID: "fd01140d-fceb-4804-8b0b-f7e91fe8791c"). InnerVolumeSpecName "kube-api-access-hs7jw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:02:51 crc kubenswrapper[4972]: I1121 10:02:51.472310 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd01140d-fceb-4804-8b0b-f7e91fe8791c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fd01140d-fceb-4804-8b0b-f7e91fe8791c" (UID: "fd01140d-fceb-4804-8b0b-f7e91fe8791c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:02:51 crc kubenswrapper[4972]: I1121 10:02:51.472509 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd01140d-fceb-4804-8b0b-f7e91fe8791c-config" (OuterVolumeSpecName: "config") pod "fd01140d-fceb-4804-8b0b-f7e91fe8791c" (UID: "fd01140d-fceb-4804-8b0b-f7e91fe8791c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:02:51 crc kubenswrapper[4972]: I1121 10:02:51.554011 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd01140d-fceb-4804-8b0b-f7e91fe8791c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 10:02:51 crc kubenswrapper[4972]: I1121 10:02:51.554041 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs7jw\" (UniqueName: \"kubernetes.io/projected/fd01140d-fceb-4804-8b0b-f7e91fe8791c-kube-api-access-hs7jw\") on node \"crc\" DevicePath \"\"" Nov 21 10:02:51 crc kubenswrapper[4972]: I1121 10:02:51.554051 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd01140d-fceb-4804-8b0b-f7e91fe8791c-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:02:51 crc kubenswrapper[4972]: I1121 10:02:51.777280 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43ca8eae-ab19-4dbc-80b2-036cc49daf01" path="/var/lib/kubelet/pods/43ca8eae-ab19-4dbc-80b2-036cc49daf01/volumes" Nov 21 10:02:51 crc kubenswrapper[4972]: I1121 10:02:51.777942 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa2af7d6-efa4-459b-9a15-bdb10778978b" path="/var/lib/kubelet/pods/fa2af7d6-efa4-459b-9a15-bdb10778978b/volumes" Nov 21 10:02:52 crc kubenswrapper[4972]: I1121 10:02:52.037317 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667df85987-h65qf" event={"ID":"fd01140d-fceb-4804-8b0b-f7e91fe8791c","Type":"ContainerDied","Data":"5b1cbcb4851c66fb43a56a92b9890e854a763ad591597ae91b79317eea9e0eef"} Nov 21 10:02:52 crc kubenswrapper[4972]: I1121 10:02:52.037372 4972 scope.go:117] "RemoveContainer" containerID="22fa1942a125ddc597b536b9c82be5fe0f710dc21f8a6b111bbc7bc51708c4fc" Nov 21 10:02:52 crc kubenswrapper[4972]: I1121 10:02:52.037515 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-667df85987-h65qf" Nov 21 10:02:52 crc kubenswrapper[4972]: I1121 10:02:52.040805 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" event={"ID":"fe4c23d4-4b37-45b9-873f-d3f153b492fd","Type":"ContainerStarted","Data":"cabf277b1913e0a7d02e4e3d93b2624a784d996a27ed853cf9acf276688cf10c"} Nov 21 10:02:52 crc kubenswrapper[4972]: I1121 10:02:52.040929 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" Nov 21 10:02:52 crc kubenswrapper[4972]: I1121 10:02:52.043760 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zr6wn" event={"ID":"795431b0-73d4-4c09-95ec-59c039a001d4","Type":"ContainerStarted","Data":"babee6a6f39febcd37d1afc162e951e9cd44a4409126c0931c62dcab5344f933"} Nov 21 10:02:52 crc kubenswrapper[4972]: I1121 10:02:52.061499 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" podStartSLOduration=3.061477422 podStartE2EDuration="3.061477422s" podCreationTimestamp="2025-11-21 10:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:02:52.061245066 +0000 UTC m=+1317.170387574" watchObservedRunningTime="2025-11-21 10:02:52.061477422 +0000 UTC m=+1317.170619920" Nov 21 10:02:52 crc kubenswrapper[4972]: I1121 10:02:52.065524 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cf3edebd-74ab-4b7d-8706-2eda69d91aea","Type":"ContainerStarted","Data":"3c918d0cee85f88f64195c902f07128cfef54a4b6d436a5109a80299cb794599"} Nov 21 10:02:52 crc kubenswrapper[4972]: I1121 10:02:52.069527 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" event={"ID":"093f696e-dee6-47dd-ba6f-07e65f594e60","Type":"ContainerStarted","Data":"39748cd594210f7c13af38928aceccdc179584a99252069659d800a228a58a60"} Nov 21 10:02:52 crc kubenswrapper[4972]: I1121 10:02:52.102065 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-667df85987-h65qf"] Nov 21 10:02:52 crc kubenswrapper[4972]: I1121 10:02:52.110249 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-667df85987-h65qf"] Nov 21 10:02:52 crc kubenswrapper[4972]: I1121 10:02:52.110410 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" podStartSLOduration=3.110401286 podStartE2EDuration="3.110401286s" podCreationTimestamp="2025-11-21 10:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:02:52.104221431 +0000 UTC m=+1317.213363929" watchObservedRunningTime="2025-11-21 10:02:52.110401286 +0000 UTC m=+1317.219543784" Nov 21 10:02:53 crc kubenswrapper[4972]: I1121 10:02:53.079762 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" Nov 21 10:02:53 crc kubenswrapper[4972]: I1121 10:02:53.390363 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:02:53 crc kubenswrapper[4972]: E1121 10:02:53.390559 4972 
projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 10:02:53 crc kubenswrapper[4972]: E1121 10:02:53.390592 4972 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 10:02:53 crc kubenswrapper[4972]: E1121 10:02:53.390677 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift podName:31e140ab-a53a-4af2-864f-4c399d44f217 nodeName:}" failed. No retries permitted until 2025-11-21 10:02:57.390649491 +0000 UTC m=+1322.499792009 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift") pod "swift-storage-0" (UID: "31e140ab-a53a-4af2-864f-4c399d44f217") : configmap "swift-ring-files" not found Nov 21 10:02:53 crc kubenswrapper[4972]: I1121 10:02:53.770998 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd01140d-fceb-4804-8b0b-f7e91fe8791c" path="/var/lib/kubelet/pods/fd01140d-fceb-4804-8b0b-f7e91fe8791c/volumes" Nov 21 10:02:54 crc kubenswrapper[4972]: I1121 10:02:54.095142 4972 generic.go:334] "Generic (PLEG): container finished" podID="8027f46e-1fe2-46ad-9226-11b2cc3f8da6" containerID="711835bfec997cc5c4cd5bb8aa782593a04256cba1b1b130be09cf0a32345a38" exitCode=0 Nov 21 10:02:54 crc kubenswrapper[4972]: I1121 10:02:54.095634 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8027f46e-1fe2-46ad-9226-11b2cc3f8da6","Type":"ContainerDied","Data":"711835bfec997cc5c4cd5bb8aa782593a04256cba1b1b130be09cf0a32345a38"} Nov 21 10:02:56 crc kubenswrapper[4972]: I1121 10:02:56.179510 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:02:56 crc kubenswrapper[4972]: I1121 10:02:56.180602 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:02:57 crc kubenswrapper[4972]: I1121 10:02:57.482131 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:02:57 crc kubenswrapper[4972]: E1121 10:02:57.482342 4972 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 10:02:57 crc kubenswrapper[4972]: E1121 10:02:57.482382 4972 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 10:02:57 crc kubenswrapper[4972]: E1121 10:02:57.482481 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift podName:31e140ab-a53a-4af2-864f-4c399d44f217 nodeName:}" failed. 
No retries permitted until 2025-11-21 10:03:05.48245154 +0000 UTC m=+1330.591594078 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift") pod "swift-storage-0" (UID: "31e140ab-a53a-4af2-864f-4c399d44f217") : configmap "swift-ring-files" not found Nov 21 10:02:59 crc kubenswrapper[4972]: I1121 10:02:59.524182 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" Nov 21 10:02:59 crc kubenswrapper[4972]: I1121 10:02:59.770775 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" Nov 21 10:02:59 crc kubenswrapper[4972]: I1121 10:02:59.830816 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f55886f45-n6vkh"] Nov 21 10:03:00 crc kubenswrapper[4972]: I1121 10:03:00.156817 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" podUID="fe4c23d4-4b37-45b9-873f-d3f153b492fd" containerName="dnsmasq-dns" containerID="cri-o://cabf277b1913e0a7d02e4e3d93b2624a784d996a27ed853cf9acf276688cf10c" gracePeriod=10 Nov 21 10:03:04 crc kubenswrapper[4972]: I1121 10:03:04.522688 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" podUID="fe4c23d4-4b37-45b9-873f-d3f153b492fd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Nov 21 10:03:05 crc kubenswrapper[4972]: I1121 10:03:05.532958 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:03:05 crc kubenswrapper[4972]: E1121 10:03:05.533167 4972 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 10:03:05 crc kubenswrapper[4972]: E1121 10:03:05.533193 4972 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 10:03:05 crc kubenswrapper[4972]: E1121 10:03:05.533259 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift podName:31e140ab-a53a-4af2-864f-4c399d44f217 nodeName:}" failed. No retries permitted until 2025-11-21 10:03:21.533239007 +0000 UTC m=+1346.642381515 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift") pod "swift-storage-0" (UID: "31e140ab-a53a-4af2-864f-4c399d44f217") : configmap "swift-ring-files" not found Nov 21 10:03:05 crc kubenswrapper[4972]: E1121 10:03:05.664822 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:5d3268a6af8e505ab901049cb8d643a6f3de7e4d7b1cb4820255c11eff9a7bd0" Nov 21 10:03:05 crc kubenswrapper[4972]: E1121 10:03:05.665351 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:swift-ring-rebalance,Image:quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:5d3268a6af8e505ab901049cb8d643a6f3de7e4d7b1cb4820255c11eff9a7bd0,Command:[/usr/local/bin/swift-ring-tool all],Args:[],WorkingDir:/etc/swift,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CM_NAME,Value:swift-ring-files,ValueFrom:nil,},EnvVar{Name:NAMESPACE,Value:openstack,ValueFrom:nil,},EnvVar{Name:OWNER_APIVERSION,Value:swift.openstack.org/v1beta1,ValueFrom:nil,},EnvVar{Name:OWNER_KIND,Value:SwiftRing,ValueFrom:nil,},EnvVar{Name:OWNER_NAME,Value:swift-ring,ValueFrom:nil,},EnvVar{Name:OWNER_UID,Value:6b6dcaff-503d-4769-953c-119d639a3f63,ValueFrom:nil,},EnvVar{Name:SWIFT_MIN_PART_HOURS,Value:1,ValueFrom:nil,},EnvVar{Name:SWIFT_PART_POWER,Value:10,ValueFrom:nil,},EnvVar{Name:SWIFT_REPLICAS,Value:1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/swift-ring-tool,SubPath:swift-ring-tool,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:swiftconf,ReadOnly:true,MountPath:/etc/swift/swift.conf,SubPath:swift.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-swift,ReadOnly:false,MountPath:/etc/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ring-data-devices,ReadOnly:true,MountPath:/var/lib/config-data/ring-devices,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dispersionconf,ReadOnly:true,MountPath:/etc/swift/dispersion.conf,SubPath:dispersion.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kzgjw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42445,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-ring-rebalance-zr6wn_openstack(795431b0-73d4-4c09-95ec-59c039a001d4): ErrImagePull: 
rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 10:03:05 crc kubenswrapper[4972]: E1121 10:03:05.666825 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"swift-ring-rebalance\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/swift-ring-rebalance-zr6wn" podUID="795431b0-73d4-4c09-95ec-59c039a001d4" Nov 21 10:03:06 crc kubenswrapper[4972]: E1121 10:03:06.073424 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:b50791356d119186382dcc1928e509b4e306ad808a66ff6a694e89fc04f18bcd" Nov 21 10:03:06 crc kubenswrapper[4972]: E1121 10:03:06.074130 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-northd,Image:quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:b50791356d119186382dcc1928e509b4e306ad808a66ff6a694e89fc04f18bcd,Command:[/usr/bin/ovn-northd],Args:[-vfile:off -vconsole:info --n-threads=1 --ovnnb-db=ssl:ovsdbserver-nb-0.openstack.svc.cluster.local:6641 --ovnsb-db=ssl:ovsdbserver-sb-0.openstack.svc.cluster.local:6642 --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n85h64bh7bh66fh659h588h5c5hdh59fh585h68fh5ch64ch545h597h587h5bdh58dhb6h54bh546h558h5cch9dh669h96hcdh54dh8h555h5f5h57cq,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:certs,Value:n644h67dh5c7h8dh5bdh5ddh59hfbh5b6h5fbhbdhb7h576h545h564h54bh55fh5c6hb7h5chbh598h7fh695h699h589hb8h66fh589h5dbh78h68bq,ValueFrom:nil,},EnvVar{Name:certs_metrics,Value:n579h577h8hffh65bhf4h56fhfhcbh549hd9hbbh57bh64h56ch697h8dh64fh56bhdhc5hf7h6chc4h98h99h5c9h654h76h667h576h89q,ValueFrom:nil,},EnvVar{Name:ovnnorthd-config,Value:n5c8h7ch56bh8dh8hc4h5dch9dh68h6bhb7h598h549h5dbh66fh6bh5b4h5cch5d6h55ch57fhfch588h89h5ddh5d6h65bh65bh8dhc4h67dh569q,ValueFrom:nil,},EnvVar{Name:ovnnorthd-scripts,Value:n664hd8h66ch58dh64hc9h66bhd4h558h697h67bh557hdch664h567h669h555h696h556h556h5fh5bh569hbh665h9dh4h9bh564hc8h5b7h5c4q,ValueFrom:nil,},EnvVar{Name:tls-ca-bundle.pem,Value:n55bh5b5h5f9hcfh9fh59bh75h554h568h687h588h5bdh559h59dh58dh59fh5cch68fh6bh685hf5h5d4h57ch6fh5c9hd7h4hcch88h699h55fhbbq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubP
athExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7k5pg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-northd-0_openstack(cf3edebd-74ab-4b7d-8706-2eda69d91aea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 10:03:06 crc kubenswrapper[4972]: I1121 10:03:06.212305 4972 generic.go:334] "Generic (PLEG): container finished" podID="fe4c23d4-4b37-45b9-873f-d3f153b492fd" containerID="cabf277b1913e0a7d02e4e3d93b2624a784d996a27ed853cf9acf276688cf10c" exitCode=0 Nov 21 10:03:06 crc kubenswrapper[4972]: I1121 10:03:06.212391 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" event={"ID":"fe4c23d4-4b37-45b9-873f-d3f153b492fd","Type":"ContainerDied","Data":"cabf277b1913e0a7d02e4e3d93b2624a784d996a27ed853cf9acf276688cf10c"} Nov 21 10:03:06 crc kubenswrapper[4972]: E1121 10:03:06.215740 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"swift-ring-rebalance\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:5d3268a6af8e505ab901049cb8d643a6f3de7e4d7b1cb4820255c11eff9a7bd0\\\"\"" pod="openstack/swift-ring-rebalance-zr6wn" podUID="795431b0-73d4-4c09-95ec-59c039a001d4" Nov 21 10:03:06 crc kubenswrapper[4972]: I1121 10:03:06.371618 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" Nov 21 10:03:06 crc kubenswrapper[4972]: E1121 10:03:06.413824 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-northd-0" podUID="cf3edebd-74ab-4b7d-8706-2eda69d91aea" Nov 21 10:03:06 crc kubenswrapper[4972]: I1121 10:03:06.447888 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49rr9\" (UniqueName: \"kubernetes.io/projected/fe4c23d4-4b37-45b9-873f-d3f153b492fd-kube-api-access-49rr9\") pod \"fe4c23d4-4b37-45b9-873f-d3f153b492fd\" (UID: \"fe4c23d4-4b37-45b9-873f-d3f153b492fd\") " Nov 21 10:03:06 crc kubenswrapper[4972]: I1121 10:03:06.448025 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe4c23d4-4b37-45b9-873f-d3f153b492fd-dns-svc\") pod \"fe4c23d4-4b37-45b9-873f-d3f153b492fd\" (UID: \"fe4c23d4-4b37-45b9-873f-d3f153b492fd\") " Nov 21 10:03:06 crc kubenswrapper[4972]: I1121 10:03:06.448080 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe4c23d4-4b37-45b9-873f-d3f153b492fd-ovsdbserver-sb\") pod \"fe4c23d4-4b37-45b9-873f-d3f153b492fd\" (UID: \"fe4c23d4-4b37-45b9-873f-d3f153b492fd\") " Nov 21 10:03:06 crc kubenswrapper[4972]: I1121 10:03:06.448129 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe4c23d4-4b37-45b9-873f-d3f153b492fd-config\") pod \"fe4c23d4-4b37-45b9-873f-d3f153b492fd\" (UID: \"fe4c23d4-4b37-45b9-873f-d3f153b492fd\") " Nov 21 10:03:06 crc kubenswrapper[4972]: I1121 10:03:06.456057 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe4c23d4-4b37-45b9-873f-d3f153b492fd-kube-api-access-49rr9" (OuterVolumeSpecName: "kube-api-access-49rr9") pod "fe4c23d4-4b37-45b9-873f-d3f153b492fd" (UID: "fe4c23d4-4b37-45b9-873f-d3f153b492fd"). InnerVolumeSpecName "kube-api-access-49rr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:03:06 crc kubenswrapper[4972]: I1121 10:03:06.489505 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe4c23d4-4b37-45b9-873f-d3f153b492fd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fe4c23d4-4b37-45b9-873f-d3f153b492fd" (UID: "fe4c23d4-4b37-45b9-873f-d3f153b492fd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:06 crc kubenswrapper[4972]: I1121 10:03:06.490602 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe4c23d4-4b37-45b9-873f-d3f153b492fd-config" (OuterVolumeSpecName: "config") pod "fe4c23d4-4b37-45b9-873f-d3f153b492fd" (UID: "fe4c23d4-4b37-45b9-873f-d3f153b492fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:06 crc kubenswrapper[4972]: I1121 10:03:06.491436 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe4c23d4-4b37-45b9-873f-d3f153b492fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fe4c23d4-4b37-45b9-873f-d3f153b492fd" (UID: "fe4c23d4-4b37-45b9-873f-d3f153b492fd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:06 crc kubenswrapper[4972]: I1121 10:03:06.550124 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe4c23d4-4b37-45b9-873f-d3f153b492fd-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:06 crc kubenswrapper[4972]: I1121 10:03:06.550156 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe4c23d4-4b37-45b9-873f-d3f153b492fd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:06 crc kubenswrapper[4972]: I1121 10:03:06.550168 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe4c23d4-4b37-45b9-873f-d3f153b492fd-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:06 crc kubenswrapper[4972]: I1121 10:03:06.550177 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49rr9\" (UniqueName: \"kubernetes.io/projected/fe4c23d4-4b37-45b9-873f-d3f153b492fd-kube-api-access-49rr9\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:06 crc kubenswrapper[4972]: I1121 10:03:06.984869 4972 scope.go:117] "RemoveContainer" containerID="fe1d5d2919f859c0de5641cf4964065bee97d51a122c157839efa09d86dabc7c" Nov 21 10:03:07 crc kubenswrapper[4972]: I1121 10:03:07.015131 4972 scope.go:117] "RemoveContainer" containerID="85d5fd7ad1351651afc383acae3502996574553281db5c41b6fb66d48d7be31a" Nov 21 10:03:07 crc kubenswrapper[4972]: I1121 10:03:07.045504 4972 scope.go:117] "RemoveContainer" containerID="57057652bf5b08dc825b1d7a1a6726686f8c3fe5b6df1a2aeb0f8792367f49f0" Nov 21 10:03:07 crc kubenswrapper[4972]: I1121 10:03:07.221949 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8027f46e-1fe2-46ad-9226-11b2cc3f8da6","Type":"ContainerStarted","Data":"2184a31d34063d8ee8c51f71676340442da843ea99dcf47ca9042791a8af2bae"} Nov 21 10:03:07 crc kubenswrapper[4972]: I1121 10:03:07.223821 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cf3edebd-74ab-4b7d-8706-2eda69d91aea","Type":"ContainerStarted","Data":"8afa005bf75971cd8c3eab6a73627f83a30f054d36b834a57873a7d31d1a2e37"} Nov 21 10:03:07 crc kubenswrapper[4972]: E1121 10:03:07.225751 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:b50791356d119186382dcc1928e509b4e306ad808a66ff6a694e89fc04f18bcd\\\"\"" pod="openstack/ovn-northd-0" podUID="cf3edebd-74ab-4b7d-8706-2eda69d91aea" Nov 21 10:03:07 crc kubenswrapper[4972]: I1121 10:03:07.228282 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" Nov 21 10:03:07 crc kubenswrapper[4972]: I1121 10:03:07.228375 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f55886f45-n6vkh" event={"ID":"fe4c23d4-4b37-45b9-873f-d3f153b492fd","Type":"ContainerDied","Data":"3f418c5d7438c783c3e28ef6c519cb2cfa16977842a8562aebb2ba59abe8d9c2"} Nov 21 10:03:07 crc kubenswrapper[4972]: I1121 10:03:07.228466 4972 scope.go:117] "RemoveContainer" containerID="cabf277b1913e0a7d02e4e3d93b2624a784d996a27ed853cf9acf276688cf10c" Nov 21 10:03:07 crc kubenswrapper[4972]: I1121 10:03:07.249635 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371972.60516 podStartE2EDuration="1m4.249616427s" podCreationTimestamp="2025-11-21 10:02:03 +0000 UTC" firstStartedPulling="2025-11-21 10:02:05.885302126 +0000 UTC m=+1270.994444624" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:03:07.245649071 +0000 UTC m=+1332.354791589" watchObservedRunningTime="2025-11-21 10:03:07.249616427 +0000 UTC m=+1332.358758925" Nov 21 10:03:07 crc kubenswrapper[4972]: I1121 10:03:07.251918 4972 scope.go:117] "RemoveContainer" containerID="94e74984013c059b986a855521c25f5d946c709ddaf0c92ab76ba93775a1bb1f" Nov 21 10:03:07 crc kubenswrapper[4972]: I1121 10:03:07.297950 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f55886f45-n6vkh"] Nov 21 10:03:07 crc kubenswrapper[4972]: I1121 10:03:07.300490 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f55886f45-n6vkh"] Nov 21 10:03:07 crc kubenswrapper[4972]: I1121 10:03:07.771312 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe4c23d4-4b37-45b9-873f-d3f153b492fd" path="/var/lib/kubelet/pods/fe4c23d4-4b37-45b9-873f-d3f153b492fd/volumes" Nov 21 10:03:08 crc kubenswrapper[4972]: E1121 10:03:08.237582 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:b50791356d119186382dcc1928e509b4e306ad808a66ff6a694e89fc04f18bcd\\\"\"" pod="openstack/ovn-northd-0" podUID="cf3edebd-74ab-4b7d-8706-2eda69d91aea" Nov 21 10:03:09 crc kubenswrapper[4972]: I1121 10:03:09.250436 4972 generic.go:334] "Generic (PLEG): container finished" podID="392b5094-f8ef-47b8-8dc5-9e1d2dbef612" containerID="a2591e9b6da9f52ba55bc3c5cc658736bb1d86090db5b0174bf09055e27e205d" exitCode=0 Nov 21 10:03:09 crc kubenswrapper[4972]: I1121 10:03:09.250580 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"392b5094-f8ef-47b8-8dc5-9e1d2dbef612","Type":"ContainerDied","Data":"a2591e9b6da9f52ba55bc3c5cc658736bb1d86090db5b0174bf09055e27e205d"} Nov 21 10:03:10 crc kubenswrapper[4972]: I1121 10:03:10.264175 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"392b5094-f8ef-47b8-8dc5-9e1d2dbef612","Type":"ContainerStarted","Data":"40fd57bb0048a573eb9c5e1aa41727272375095e934fe8e65459e974a94e41af"} Nov 21 10:03:10 crc kubenswrapper[4972]: I1121 10:03:10.264478 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 21 10:03:10 crc kubenswrapper[4972]: I1121 10:03:10.299036 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" 
podStartSLOduration=38.577773188 podStartE2EDuration="1m8.29901471s" podCreationTimestamp="2025-11-21 10:02:02 +0000 UTC" firstStartedPulling="2025-11-21 10:02:04.33190574 +0000 UTC m=+1269.441048238" lastFinishedPulling="2025-11-21 10:02:34.053147262 +0000 UTC m=+1299.162289760" observedRunningTime="2025-11-21 10:03:10.291443228 +0000 UTC m=+1335.400585756" watchObservedRunningTime="2025-11-21 10:03:10.29901471 +0000 UTC m=+1335.408157208" Nov 21 10:03:12 crc kubenswrapper[4972]: I1121 10:03:12.442754 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5q7hj" podUID="9ab92bde-9b45-49ca-a6e9-43c8921b3002" containerName="ovn-controller" probeResult="failure" output=< Nov 21 10:03:12 crc kubenswrapper[4972]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 21 10:03:12 crc kubenswrapper[4972]: > Nov 21 10:03:15 crc kubenswrapper[4972]: I1121 10:03:15.159799 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 21 10:03:15 crc kubenswrapper[4972]: I1121 10:03:15.160183 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 21 10:03:15 crc kubenswrapper[4972]: I1121 10:03:15.264662 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 21 10:03:15 crc kubenswrapper[4972]: I1121 10:03:15.308572 4972 generic.go:334] "Generic (PLEG): container finished" podID="2bc44abc-7710-432b-b503-fd54e3afeede" containerID="695ff7e74d4466cf78e5259f9386de929cd91903ca07545dbfd50157060920ad" exitCode=0 Nov 21 10:03:15 crc kubenswrapper[4972]: I1121 10:03:15.308684 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2bc44abc-7710-432b-b503-fd54e3afeede","Type":"ContainerDied","Data":"695ff7e74d4466cf78e5259f9386de929cd91903ca07545dbfd50157060920ad"} Nov 21 10:03:15 crc kubenswrapper[4972]: I1121 10:03:15.390179 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.293261 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-qth84"] Nov 21 10:03:16 crc kubenswrapper[4972]: E1121 10:03:16.293624 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4c23d4-4b37-45b9-873f-d3f153b492fd" containerName="init" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.293636 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4c23d4-4b37-45b9-873f-d3f153b492fd" containerName="init" Nov 21 10:03:16 crc kubenswrapper[4972]: E1121 10:03:16.293678 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd01140d-fceb-4804-8b0b-f7e91fe8791c" containerName="init" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.293686 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd01140d-fceb-4804-8b0b-f7e91fe8791c" containerName="init" Nov 21 10:03:16 crc kubenswrapper[4972]: E1121 10:03:16.293702 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4c23d4-4b37-45b9-873f-d3f153b492fd" containerName="dnsmasq-dns" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.293710 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4c23d4-4b37-45b9-873f-d3f153b492fd" containerName="dnsmasq-dns" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.293955 4972 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fd01140d-fceb-4804-8b0b-f7e91fe8791c" containerName="init" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.293977 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe4c23d4-4b37-45b9-873f-d3f153b492fd" containerName="dnsmasq-dns" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.294532 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qth84" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.301280 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-894c-account-create-jgvz7"] Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.302626 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-894c-account-create-jgvz7" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.305746 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.310115 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qth84"] Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.318881 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-894c-account-create-jgvz7"] Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.319966 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2bc44abc-7710-432b-b503-fd54e3afeede","Type":"ContainerStarted","Data":"c8fbc9ceb2b6148e29eeae60a7cccd8704bb5b0088efc4a03700f71500ec7ef2"} Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.320874 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.360311 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371962.494486 podStartE2EDuration="1m14.360290095s" podCreationTimestamp="2025-11-21 10:02:02 +0000 UTC" firstStartedPulling="2025-11-21 10:02:04.58476116 +0000 UTC m=+1269.693903658" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:03:16.352850166 +0000 UTC m=+1341.461992674" watchObservedRunningTime="2025-11-21 10:03:16.360290095 +0000 UTC m=+1341.469432593" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.425287 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86c689e1-6896-490f-a5ad-ab34ffdd5b4d-operator-scripts\") pod \"keystone-db-create-qth84\" (UID: \"86c689e1-6896-490f-a5ad-ab34ffdd5b4d\") " pod="openstack/keystone-db-create-qth84" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.425368 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3ab5244-5c8e-4699-bc66-7e74b8875520-operator-scripts\") pod \"keystone-894c-account-create-jgvz7\" (UID: \"b3ab5244-5c8e-4699-bc66-7e74b8875520\") " pod="openstack/keystone-894c-account-create-jgvz7" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.425484 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58wn9\" (UniqueName: \"kubernetes.io/projected/86c689e1-6896-490f-a5ad-ab34ffdd5b4d-kube-api-access-58wn9\") pod \"keystone-db-create-qth84\" (UID: 
\"86c689e1-6896-490f-a5ad-ab34ffdd5b4d\") " pod="openstack/keystone-db-create-qth84" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.425513 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cbpx\" (UniqueName: \"kubernetes.io/projected/b3ab5244-5c8e-4699-bc66-7e74b8875520-kube-api-access-5cbpx\") pod \"keystone-894c-account-create-jgvz7\" (UID: \"b3ab5244-5c8e-4699-bc66-7e74b8875520\") " pod="openstack/keystone-894c-account-create-jgvz7" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.514420 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-ndz5q"] Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.516170 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ndz5q" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.527429 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86c689e1-6896-490f-a5ad-ab34ffdd5b4d-operator-scripts\") pod \"keystone-db-create-qth84\" (UID: \"86c689e1-6896-490f-a5ad-ab34ffdd5b4d\") " pod="openstack/keystone-db-create-qth84" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.527506 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3ab5244-5c8e-4699-bc66-7e74b8875520-operator-scripts\") pod \"keystone-894c-account-create-jgvz7\" (UID: \"b3ab5244-5c8e-4699-bc66-7e74b8875520\") " pod="openstack/keystone-894c-account-create-jgvz7" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.527553 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58wn9\" (UniqueName: \"kubernetes.io/projected/86c689e1-6896-490f-a5ad-ab34ffdd5b4d-kube-api-access-58wn9\") pod \"keystone-db-create-qth84\" (UID: \"86c689e1-6896-490f-a5ad-ab34ffdd5b4d\") " pod="openstack/keystone-db-create-qth84" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.527578 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cbpx\" (UniqueName: \"kubernetes.io/projected/b3ab5244-5c8e-4699-bc66-7e74b8875520-kube-api-access-5cbpx\") pod \"keystone-894c-account-create-jgvz7\" (UID: \"b3ab5244-5c8e-4699-bc66-7e74b8875520\") " pod="openstack/keystone-894c-account-create-jgvz7" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.528433 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86c689e1-6896-490f-a5ad-ab34ffdd5b4d-operator-scripts\") pod \"keystone-db-create-qth84\" (UID: \"86c689e1-6896-490f-a5ad-ab34ffdd5b4d\") " pod="openstack/keystone-db-create-qth84" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.528459 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3ab5244-5c8e-4699-bc66-7e74b8875520-operator-scripts\") pod \"keystone-894c-account-create-jgvz7\" (UID: \"b3ab5244-5c8e-4699-bc66-7e74b8875520\") " pod="openstack/keystone-894c-account-create-jgvz7" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.529299 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-ndz5q"] Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.547016 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cbpx\" 
(UniqueName: \"kubernetes.io/projected/b3ab5244-5c8e-4699-bc66-7e74b8875520-kube-api-access-5cbpx\") pod \"keystone-894c-account-create-jgvz7\" (UID: \"b3ab5244-5c8e-4699-bc66-7e74b8875520\") " pod="openstack/keystone-894c-account-create-jgvz7" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.562559 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58wn9\" (UniqueName: \"kubernetes.io/projected/86c689e1-6896-490f-a5ad-ab34ffdd5b4d-kube-api-access-58wn9\") pod \"keystone-db-create-qth84\" (UID: \"86c689e1-6896-490f-a5ad-ab34ffdd5b4d\") " pod="openstack/keystone-db-create-qth84" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.583015 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-66d8-account-create-hnrrl"] Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.588992 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66d8-account-create-hnrrl" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.591408 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.595694 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-66d8-account-create-hnrrl"] Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.627892 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qth84" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.628917 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/726b6f39-d584-463b-aa77-f5a9e99b778a-operator-scripts\") pod \"placement-db-create-ndz5q\" (UID: \"726b6f39-d584-463b-aa77-f5a9e99b778a\") " pod="openstack/placement-db-create-ndz5q" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.628956 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkllx\" (UniqueName: \"kubernetes.io/projected/726b6f39-d584-463b-aa77-f5a9e99b778a-kube-api-access-lkllx\") pod \"placement-db-create-ndz5q\" (UID: \"726b6f39-d584-463b-aa77-f5a9e99b778a\") " pod="openstack/placement-db-create-ndz5q" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.637212 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-894c-account-create-jgvz7" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.732681 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/726b6f39-d584-463b-aa77-f5a9e99b778a-operator-scripts\") pod \"placement-db-create-ndz5q\" (UID: \"726b6f39-d584-463b-aa77-f5a9e99b778a\") " pod="openstack/placement-db-create-ndz5q" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.732750 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkllx\" (UniqueName: \"kubernetes.io/projected/726b6f39-d584-463b-aa77-f5a9e99b778a-kube-api-access-lkllx\") pod \"placement-db-create-ndz5q\" (UID: \"726b6f39-d584-463b-aa77-f5a9e99b778a\") " pod="openstack/placement-db-create-ndz5q" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.732842 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5828f96a-2e2a-416f-b07e-584b5571b87d-operator-scripts\") pod \"placement-66d8-account-create-hnrrl\" (UID: \"5828f96a-2e2a-416f-b07e-584b5571b87d\") " pod="openstack/placement-66d8-account-create-hnrrl" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.732988 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwq6g\" (UniqueName: \"kubernetes.io/projected/5828f96a-2e2a-416f-b07e-584b5571b87d-kube-api-access-zwq6g\") pod \"placement-66d8-account-create-hnrrl\" (UID: \"5828f96a-2e2a-416f-b07e-584b5571b87d\") " pod="openstack/placement-66d8-account-create-hnrrl" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.733937 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/726b6f39-d584-463b-aa77-f5a9e99b778a-operator-scripts\") pod \"placement-db-create-ndz5q\" (UID: \"726b6f39-d584-463b-aa77-f5a9e99b778a\") " pod="openstack/placement-db-create-ndz5q" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.754111 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkllx\" (UniqueName: \"kubernetes.io/projected/726b6f39-d584-463b-aa77-f5a9e99b778a-kube-api-access-lkllx\") pod \"placement-db-create-ndz5q\" (UID: \"726b6f39-d584-463b-aa77-f5a9e99b778a\") " pod="openstack/placement-db-create-ndz5q" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.835092 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5828f96a-2e2a-416f-b07e-584b5571b87d-operator-scripts\") pod \"placement-66d8-account-create-hnrrl\" (UID: \"5828f96a-2e2a-416f-b07e-584b5571b87d\") " pod="openstack/placement-66d8-account-create-hnrrl" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.835596 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwq6g\" (UniqueName: \"kubernetes.io/projected/5828f96a-2e2a-416f-b07e-584b5571b87d-kube-api-access-zwq6g\") pod \"placement-66d8-account-create-hnrrl\" (UID: \"5828f96a-2e2a-416f-b07e-584b5571b87d\") " pod="openstack/placement-66d8-account-create-hnrrl" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.835967 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5828f96a-2e2a-416f-b07e-584b5571b87d-operator-scripts\") pod \"placement-66d8-account-create-hnrrl\" (UID: \"5828f96a-2e2a-416f-b07e-584b5571b87d\") " pod="openstack/placement-66d8-account-create-hnrrl" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.836364 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ndz5q" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.839881 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-gw4hh"] Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.841113 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gw4hh" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.850952 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gw4hh"] Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.858949 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwq6g\" (UniqueName: \"kubernetes.io/projected/5828f96a-2e2a-416f-b07e-584b5571b87d-kube-api-access-zwq6g\") pod \"placement-66d8-account-create-hnrrl\" (UID: \"5828f96a-2e2a-416f-b07e-584b5571b87d\") " pod="openstack/placement-66d8-account-create-hnrrl" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.934489 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66d8-account-create-hnrrl" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.937227 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b222104a-d8e3-440b-bcfe-05976686b3dc-operator-scripts\") pod \"glance-db-create-gw4hh\" (UID: \"b222104a-d8e3-440b-bcfe-05976686b3dc\") " pod="openstack/glance-db-create-gw4hh" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.937310 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksk8n\" (UniqueName: \"kubernetes.io/projected/b222104a-d8e3-440b-bcfe-05976686b3dc-kube-api-access-ksk8n\") pod \"glance-db-create-gw4hh\" (UID: \"b222104a-d8e3-440b-bcfe-05976686b3dc\") " pod="openstack/glance-db-create-gw4hh" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.976349 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-d431-account-create-q5hbw"] Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.979421 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d431-account-create-q5hbw" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.982370 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 21 10:03:16 crc kubenswrapper[4972]: I1121 10:03:16.995636 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d431-account-create-q5hbw"] Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.039455 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b222104a-d8e3-440b-bcfe-05976686b3dc-operator-scripts\") pod \"glance-db-create-gw4hh\" (UID: \"b222104a-d8e3-440b-bcfe-05976686b3dc\") " pod="openstack/glance-db-create-gw4hh" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.039557 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksk8n\" (UniqueName: \"kubernetes.io/projected/b222104a-d8e3-440b-bcfe-05976686b3dc-kube-api-access-ksk8n\") pod \"glance-db-create-gw4hh\" (UID: \"b222104a-d8e3-440b-bcfe-05976686b3dc\") " pod="openstack/glance-db-create-gw4hh" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.040640 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b222104a-d8e3-440b-bcfe-05976686b3dc-operator-scripts\") pod \"glance-db-create-gw4hh\" (UID: \"b222104a-d8e3-440b-bcfe-05976686b3dc\") " pod="openstack/glance-db-create-gw4hh" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.058738 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksk8n\" (UniqueName: \"kubernetes.io/projected/b222104a-d8e3-440b-bcfe-05976686b3dc-kube-api-access-ksk8n\") pod \"glance-db-create-gw4hh\" (UID: \"b222104a-d8e3-440b-bcfe-05976686b3dc\") " pod="openstack/glance-db-create-gw4hh" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.100333 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qth84"] Nov 21 10:03:17 crc kubenswrapper[4972]: W1121 10:03:17.103413 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86c689e1_6896_490f_a5ad_ab34ffdd5b4d.slice/crio-d06b348ba3aed41261f46e0e1d5cbc5f7c52355d311d56288821cc3e5da6bd0f WatchSource:0}: Error finding container d06b348ba3aed41261f46e0e1d5cbc5f7c52355d311d56288821cc3e5da6bd0f: Status 404 returned error can't find the container with id d06b348ba3aed41261f46e0e1d5cbc5f7c52355d311d56288821cc3e5da6bd0f Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.108754 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-894c-account-create-jgvz7"] Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.140918 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2-operator-scripts\") pod \"glance-d431-account-create-q5hbw\" (UID: \"3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2\") " pod="openstack/glance-d431-account-create-q5hbw" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.141187 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb6p2\" (UniqueName: \"kubernetes.io/projected/3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2-kube-api-access-tb6p2\") pod \"glance-d431-account-create-q5hbw\" 
(UID: \"3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2\") " pod="openstack/glance-d431-account-create-q5hbw" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.167586 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gw4hh" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.242243 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb6p2\" (UniqueName: \"kubernetes.io/projected/3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2-kube-api-access-tb6p2\") pod \"glance-d431-account-create-q5hbw\" (UID: \"3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2\") " pod="openstack/glance-d431-account-create-q5hbw" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.242652 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2-operator-scripts\") pod \"glance-d431-account-create-q5hbw\" (UID: \"3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2\") " pod="openstack/glance-d431-account-create-q5hbw" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.243571 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2-operator-scripts\") pod \"glance-d431-account-create-q5hbw\" (UID: \"3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2\") " pod="openstack/glance-d431-account-create-q5hbw" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.264805 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb6p2\" (UniqueName: \"kubernetes.io/projected/3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2-kube-api-access-tb6p2\") pod \"glance-d431-account-create-q5hbw\" (UID: \"3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2\") " pod="openstack/glance-d431-account-create-q5hbw" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.306601 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-ndz5q"] Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.317429 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d431-account-create-q5hbw" Nov 21 10:03:17 crc kubenswrapper[4972]: W1121 10:03:17.332110 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod726b6f39_d584_463b_aa77_f5a9e99b778a.slice/crio-d6f5f9f24a3edf5939b983e00928e047016d58e9f1bf45b31d6098630a38b255 WatchSource:0}: Error finding container d6f5f9f24a3edf5939b983e00928e047016d58e9f1bf45b31d6098630a38b255: Status 404 returned error can't find the container with id d6f5f9f24a3edf5939b983e00928e047016d58e9f1bf45b31d6098630a38b255 Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.334038 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-894c-account-create-jgvz7" event={"ID":"b3ab5244-5c8e-4699-bc66-7e74b8875520","Type":"ContainerStarted","Data":"77f55bd48f2ffa8e96492bbb83fe188afc99a3622fa28987f4339fc82a7e2d1f"} Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.334076 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-894c-account-create-jgvz7" event={"ID":"b3ab5244-5c8e-4699-bc66-7e74b8875520","Type":"ContainerStarted","Data":"9873d2fc75a5944e68f5566c68effcf393c054c7366ea668f96ce2048a01a432"} Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.338854 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qth84" event={"ID":"86c689e1-6896-490f-a5ad-ab34ffdd5b4d","Type":"ContainerStarted","Data":"a909c70e3b54475753a77772967a465d3b32beecf8c0a0fe9b25def80fdbd717"} Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.338892 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qth84" event={"ID":"86c689e1-6896-490f-a5ad-ab34ffdd5b4d","Type":"ContainerStarted","Data":"d06b348ba3aed41261f46e0e1d5cbc5f7c52355d311d56288821cc3e5da6bd0f"} Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.363810 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-894c-account-create-jgvz7" podStartSLOduration=1.363788553 podStartE2EDuration="1.363788553s" podCreationTimestamp="2025-11-21 10:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:03:17.350485608 +0000 UTC m=+1342.459628096" watchObservedRunningTime="2025-11-21 10:03:17.363788553 +0000 UTC m=+1342.472931051" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.373911 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-qth84" podStartSLOduration=1.373893772 podStartE2EDuration="1.373893772s" podCreationTimestamp="2025-11-21 10:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:03:17.371368035 +0000 UTC m=+1342.480510543" watchObservedRunningTime="2025-11-21 10:03:17.373893772 +0000 UTC m=+1342.483036270" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.418014 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-66d8-account-create-hnrrl"] Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.488848 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.503761 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5q7hj" 
podUID="9ab92bde-9b45-49ca-a6e9-43c8921b3002" containerName="ovn-controller" probeResult="failure" output=< Nov 21 10:03:17 crc kubenswrapper[4972]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 21 10:03:17 crc kubenswrapper[4972]: > Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.518876 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.697070 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d431-account-create-q5hbw"] Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.697451 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gw4hh"] Nov 21 10:03:17 crc kubenswrapper[4972]: W1121 10:03:17.735289 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb222104a_d8e3_440b_bcfe_05976686b3dc.slice/crio-a12c1fa3b6d68d1229803e6a5a6dcaf1d036a36bf2fff2f331914af86ecc9c09 WatchSource:0}: Error finding container a12c1fa3b6d68d1229803e6a5a6dcaf1d036a36bf2fff2f331914af86ecc9c09: Status 404 returned error can't find the container with id a12c1fa3b6d68d1229803e6a5a6dcaf1d036a36bf2fff2f331914af86ecc9c09 Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.817485 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5q7hj-config-gnxdq"] Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.818779 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.821564 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.846650 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5q7hj-config-gnxdq"] Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.964315 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/53634031-b308-4f04-958f-807f7872a544-var-log-ovn\") pod \"ovn-controller-5q7hj-config-gnxdq\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.964411 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/53634031-b308-4f04-958f-807f7872a544-var-run-ovn\") pod \"ovn-controller-5q7hj-config-gnxdq\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.964499 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/53634031-b308-4f04-958f-807f7872a544-additional-scripts\") pod \"ovn-controller-5q7hj-config-gnxdq\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.964531 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/53634031-b308-4f04-958f-807f7872a544-scripts\") pod 
\"ovn-controller-5q7hj-config-gnxdq\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.964566 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/53634031-b308-4f04-958f-807f7872a544-var-run\") pod \"ovn-controller-5q7hj-config-gnxdq\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:17 crc kubenswrapper[4972]: I1121 10:03:17.964602 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9x2k\" (UniqueName: \"kubernetes.io/projected/53634031-b308-4f04-958f-807f7872a544-kube-api-access-t9x2k\") pod \"ovn-controller-5q7hj-config-gnxdq\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.067033 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/53634031-b308-4f04-958f-807f7872a544-var-run-ovn\") pod \"ovn-controller-5q7hj-config-gnxdq\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.067164 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/53634031-b308-4f04-958f-807f7872a544-additional-scripts\") pod \"ovn-controller-5q7hj-config-gnxdq\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.067192 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/53634031-b308-4f04-958f-807f7872a544-scripts\") pod \"ovn-controller-5q7hj-config-gnxdq\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.067237 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/53634031-b308-4f04-958f-807f7872a544-var-run\") pod \"ovn-controller-5q7hj-config-gnxdq\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.067262 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9x2k\" (UniqueName: \"kubernetes.io/projected/53634031-b308-4f04-958f-807f7872a544-kube-api-access-t9x2k\") pod \"ovn-controller-5q7hj-config-gnxdq\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.067401 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/53634031-b308-4f04-958f-807f7872a544-var-run-ovn\") pod \"ovn-controller-5q7hj-config-gnxdq\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.067487 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/53634031-b308-4f04-958f-807f7872a544-var-run\") pod \"ovn-controller-5q7hj-config-gnxdq\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.067494 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/53634031-b308-4f04-958f-807f7872a544-var-log-ovn\") pod \"ovn-controller-5q7hj-config-gnxdq\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.067616 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/53634031-b308-4f04-958f-807f7872a544-var-log-ovn\") pod \"ovn-controller-5q7hj-config-gnxdq\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.069922 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/53634031-b308-4f04-958f-807f7872a544-scripts\") pod \"ovn-controller-5q7hj-config-gnxdq\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.070732 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/53634031-b308-4f04-958f-807f7872a544-additional-scripts\") pod \"ovn-controller-5q7hj-config-gnxdq\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.097594 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9x2k\" (UniqueName: \"kubernetes.io/projected/53634031-b308-4f04-958f-807f7872a544-kube-api-access-t9x2k\") pod \"ovn-controller-5q7hj-config-gnxdq\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.345641 4972 generic.go:334] "Generic (PLEG): container finished" podID="86c689e1-6896-490f-a5ad-ab34ffdd5b4d" containerID="a909c70e3b54475753a77772967a465d3b32beecf8c0a0fe9b25def80fdbd717" exitCode=0 Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.345802 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qth84" event={"ID":"86c689e1-6896-490f-a5ad-ab34ffdd5b4d","Type":"ContainerDied","Data":"a909c70e3b54475753a77772967a465d3b32beecf8c0a0fe9b25def80fdbd717"} Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.348468 4972 generic.go:334] "Generic (PLEG): container finished" podID="5828f96a-2e2a-416f-b07e-584b5571b87d" containerID="3d03e81a2709bb4bb9d8ad9ba4f1732af07bf0dcd30da3511f821d5655ee6022" exitCode=0 Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.348561 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66d8-account-create-hnrrl" event={"ID":"5828f96a-2e2a-416f-b07e-584b5571b87d","Type":"ContainerDied","Data":"3d03e81a2709bb4bb9d8ad9ba4f1732af07bf0dcd30da3511f821d5655ee6022"} Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.348594 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66d8-account-create-hnrrl" 
event={"ID":"5828f96a-2e2a-416f-b07e-584b5571b87d","Type":"ContainerStarted","Data":"b0ec6671f24369c0cc31abbc2bd2a3689dbaad1a452441f8dd77d319f833e97e"} Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.349491 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.350331 4972 generic.go:334] "Generic (PLEG): container finished" podID="726b6f39-d584-463b-aa77-f5a9e99b778a" containerID="0ebc458b32b9d10b1184b304a5e7fcaa59a0d788defdd71615e2e02413c09037" exitCode=0 Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.350390 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ndz5q" event={"ID":"726b6f39-d584-463b-aa77-f5a9e99b778a","Type":"ContainerDied","Data":"0ebc458b32b9d10b1184b304a5e7fcaa59a0d788defdd71615e2e02413c09037"} Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.350414 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ndz5q" event={"ID":"726b6f39-d584-463b-aa77-f5a9e99b778a","Type":"ContainerStarted","Data":"d6f5f9f24a3edf5939b983e00928e047016d58e9f1bf45b31d6098630a38b255"} Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.351542 4972 generic.go:334] "Generic (PLEG): container finished" podID="b222104a-d8e3-440b-bcfe-05976686b3dc" containerID="33a42aa8758cd57fb1c197ce576cf4fcc274c6934767a362e4b3ba80e0e8193d" exitCode=0 Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.351596 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gw4hh" event={"ID":"b222104a-d8e3-440b-bcfe-05976686b3dc","Type":"ContainerDied","Data":"33a42aa8758cd57fb1c197ce576cf4fcc274c6934767a362e4b3ba80e0e8193d"} Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.351686 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gw4hh" event={"ID":"b222104a-d8e3-440b-bcfe-05976686b3dc","Type":"ContainerStarted","Data":"a12c1fa3b6d68d1229803e6a5a6dcaf1d036a36bf2fff2f331914af86ecc9c09"} Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.353439 4972 generic.go:334] "Generic (PLEG): container finished" podID="3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2" containerID="ce8d0e8927723578b54d7bbfaba904b7ac707ae47d865ec3b2caf2ab8d994389" exitCode=0 Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.353479 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d431-account-create-q5hbw" event={"ID":"3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2","Type":"ContainerDied","Data":"ce8d0e8927723578b54d7bbfaba904b7ac707ae47d865ec3b2caf2ab8d994389"} Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.353529 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d431-account-create-q5hbw" event={"ID":"3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2","Type":"ContainerStarted","Data":"ae2e2050f3f887ba367b34342a57e7deb4a7ed20b402c22b824a77eb9d8b8a80"} Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.354538 4972 generic.go:334] "Generic (PLEG): container finished" podID="b3ab5244-5c8e-4699-bc66-7e74b8875520" containerID="77f55bd48f2ffa8e96492bbb83fe188afc99a3622fa28987f4339fc82a7e2d1f" exitCode=0 Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.355281 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-894c-account-create-jgvz7" event={"ID":"b3ab5244-5c8e-4699-bc66-7e74b8875520","Type":"ContainerDied","Data":"77f55bd48f2ffa8e96492bbb83fe188afc99a3622fa28987f4339fc82a7e2d1f"} 
Nov 21 10:03:18 crc kubenswrapper[4972]: I1121 10:03:18.832641 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5q7hj-config-gnxdq"] Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.364942 4972 generic.go:334] "Generic (PLEG): container finished" podID="53634031-b308-4f04-958f-807f7872a544" containerID="540eaef147f61e4660d957279b7e669f8f34f32ded5be72d74f06e156f789e70" exitCode=0 Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.365033 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5q7hj-config-gnxdq" event={"ID":"53634031-b308-4f04-958f-807f7872a544","Type":"ContainerDied","Data":"540eaef147f61e4660d957279b7e669f8f34f32ded5be72d74f06e156f789e70"} Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.365362 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5q7hj-config-gnxdq" event={"ID":"53634031-b308-4f04-958f-807f7872a544","Type":"ContainerStarted","Data":"5e869d9746d3da463d4e610cfa8818ab7ba1a63bc11abf2e819a0d508f61a2a6"} Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.664971 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d431-account-create-q5hbw" Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.794736 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2-operator-scripts\") pod \"3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2\" (UID: \"3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2\") " Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.794883 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb6p2\" (UniqueName: \"kubernetes.io/projected/3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2-kube-api-access-tb6p2\") pod \"3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2\" (UID: \"3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2\") " Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.795467 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2" (UID: "3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.801487 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2-kube-api-access-tb6p2" (OuterVolumeSpecName: "kube-api-access-tb6p2") pod "3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2" (UID: "3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2"). InnerVolumeSpecName "kube-api-access-tb6p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.829467 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-894c-account-create-jgvz7" Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.840717 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-qth84" Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.903446 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58wn9\" (UniqueName: \"kubernetes.io/projected/86c689e1-6896-490f-a5ad-ab34ffdd5b4d-kube-api-access-58wn9\") pod \"86c689e1-6896-490f-a5ad-ab34ffdd5b4d\" (UID: \"86c689e1-6896-490f-a5ad-ab34ffdd5b4d\") " Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.903556 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cbpx\" (UniqueName: \"kubernetes.io/projected/b3ab5244-5c8e-4699-bc66-7e74b8875520-kube-api-access-5cbpx\") pod \"b3ab5244-5c8e-4699-bc66-7e74b8875520\" (UID: \"b3ab5244-5c8e-4699-bc66-7e74b8875520\") " Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.903592 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3ab5244-5c8e-4699-bc66-7e74b8875520-operator-scripts\") pod \"b3ab5244-5c8e-4699-bc66-7e74b8875520\" (UID: \"b3ab5244-5c8e-4699-bc66-7e74b8875520\") " Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.903612 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86c689e1-6896-490f-a5ad-ab34ffdd5b4d-operator-scripts\") pod \"86c689e1-6896-490f-a5ad-ab34ffdd5b4d\" (UID: \"86c689e1-6896-490f-a5ad-ab34ffdd5b4d\") " Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.904274 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb6p2\" (UniqueName: \"kubernetes.io/projected/3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2-kube-api-access-tb6p2\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.904296 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.904393 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3ab5244-5c8e-4699-bc66-7e74b8875520-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3ab5244-5c8e-4699-bc66-7e74b8875520" (UID: "b3ab5244-5c8e-4699-bc66-7e74b8875520"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.905420 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ndz5q" Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.905957 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86c689e1-6896-490f-a5ad-ab34ffdd5b4d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86c689e1-6896-490f-a5ad-ab34ffdd5b4d" (UID: "86c689e1-6896-490f-a5ad-ab34ffdd5b4d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.910195 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ab5244-5c8e-4699-bc66-7e74b8875520-kube-api-access-5cbpx" (OuterVolumeSpecName: "kube-api-access-5cbpx") pod "b3ab5244-5c8e-4699-bc66-7e74b8875520" (UID: "b3ab5244-5c8e-4699-bc66-7e74b8875520"). 
InnerVolumeSpecName "kube-api-access-5cbpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.912773 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86c689e1-6896-490f-a5ad-ab34ffdd5b4d-kube-api-access-58wn9" (OuterVolumeSpecName: "kube-api-access-58wn9") pod "86c689e1-6896-490f-a5ad-ab34ffdd5b4d" (UID: "86c689e1-6896-490f-a5ad-ab34ffdd5b4d"). InnerVolumeSpecName "kube-api-access-58wn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.919993 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66d8-account-create-hnrrl" Nov 21 10:03:19 crc kubenswrapper[4972]: I1121 10:03:19.942065 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gw4hh" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.005313 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/726b6f39-d584-463b-aa77-f5a9e99b778a-operator-scripts\") pod \"726b6f39-d584-463b-aa77-f5a9e99b778a\" (UID: \"726b6f39-d584-463b-aa77-f5a9e99b778a\") " Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.005446 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5828f96a-2e2a-416f-b07e-584b5571b87d-operator-scripts\") pod \"5828f96a-2e2a-416f-b07e-584b5571b87d\" (UID: \"5828f96a-2e2a-416f-b07e-584b5571b87d\") " Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.005535 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwq6g\" (UniqueName: \"kubernetes.io/projected/5828f96a-2e2a-416f-b07e-584b5571b87d-kube-api-access-zwq6g\") pod \"5828f96a-2e2a-416f-b07e-584b5571b87d\" (UID: \"5828f96a-2e2a-416f-b07e-584b5571b87d\") " Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.005562 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkllx\" (UniqueName: \"kubernetes.io/projected/726b6f39-d584-463b-aa77-f5a9e99b778a-kube-api-access-lkllx\") pod \"726b6f39-d584-463b-aa77-f5a9e99b778a\" (UID: \"726b6f39-d584-463b-aa77-f5a9e99b778a\") " Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.005589 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b222104a-d8e3-440b-bcfe-05976686b3dc-operator-scripts\") pod \"b222104a-d8e3-440b-bcfe-05976686b3dc\" (UID: \"b222104a-d8e3-440b-bcfe-05976686b3dc\") " Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.005622 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksk8n\" (UniqueName: \"kubernetes.io/projected/b222104a-d8e3-440b-bcfe-05976686b3dc-kube-api-access-ksk8n\") pod \"b222104a-d8e3-440b-bcfe-05976686b3dc\" (UID: \"b222104a-d8e3-440b-bcfe-05976686b3dc\") " Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.005987 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58wn9\" (UniqueName: \"kubernetes.io/projected/86c689e1-6896-490f-a5ad-ab34ffdd5b4d-kube-api-access-58wn9\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.006017 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cbpx\" 
(UniqueName: \"kubernetes.io/projected/b3ab5244-5c8e-4699-bc66-7e74b8875520-kube-api-access-5cbpx\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.006031 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3ab5244-5c8e-4699-bc66-7e74b8875520-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.006041 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86c689e1-6896-490f-a5ad-ab34ffdd5b4d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.006930 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b222104a-d8e3-440b-bcfe-05976686b3dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b222104a-d8e3-440b-bcfe-05976686b3dc" (UID: "b222104a-d8e3-440b-bcfe-05976686b3dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.006987 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/726b6f39-d584-463b-aa77-f5a9e99b778a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "726b6f39-d584-463b-aa77-f5a9e99b778a" (UID: "726b6f39-d584-463b-aa77-f5a9e99b778a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.007290 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5828f96a-2e2a-416f-b07e-584b5571b87d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5828f96a-2e2a-416f-b07e-584b5571b87d" (UID: "5828f96a-2e2a-416f-b07e-584b5571b87d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.008956 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b222104a-d8e3-440b-bcfe-05976686b3dc-kube-api-access-ksk8n" (OuterVolumeSpecName: "kube-api-access-ksk8n") pod "b222104a-d8e3-440b-bcfe-05976686b3dc" (UID: "b222104a-d8e3-440b-bcfe-05976686b3dc"). InnerVolumeSpecName "kube-api-access-ksk8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.009807 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5828f96a-2e2a-416f-b07e-584b5571b87d-kube-api-access-zwq6g" (OuterVolumeSpecName: "kube-api-access-zwq6g") pod "5828f96a-2e2a-416f-b07e-584b5571b87d" (UID: "5828f96a-2e2a-416f-b07e-584b5571b87d"). InnerVolumeSpecName "kube-api-access-zwq6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.009900 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/726b6f39-d584-463b-aa77-f5a9e99b778a-kube-api-access-lkllx" (OuterVolumeSpecName: "kube-api-access-lkllx") pod "726b6f39-d584-463b-aa77-f5a9e99b778a" (UID: "726b6f39-d584-463b-aa77-f5a9e99b778a"). InnerVolumeSpecName "kube-api-access-lkllx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.108024 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5828f96a-2e2a-416f-b07e-584b5571b87d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.108062 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwq6g\" (UniqueName: \"kubernetes.io/projected/5828f96a-2e2a-416f-b07e-584b5571b87d-kube-api-access-zwq6g\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.108076 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkllx\" (UniqueName: \"kubernetes.io/projected/726b6f39-d584-463b-aa77-f5a9e99b778a-kube-api-access-lkllx\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.108088 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b222104a-d8e3-440b-bcfe-05976686b3dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.108097 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksk8n\" (UniqueName: \"kubernetes.io/projected/b222104a-d8e3-440b-bcfe-05976686b3dc-kube-api-access-ksk8n\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.108106 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/726b6f39-d584-463b-aa77-f5a9e99b778a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.374357 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66d8-account-create-hnrrl" event={"ID":"5828f96a-2e2a-416f-b07e-584b5571b87d","Type":"ContainerDied","Data":"b0ec6671f24369c0cc31abbc2bd2a3689dbaad1a452441f8dd77d319f833e97e"} Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.374406 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0ec6671f24369c0cc31abbc2bd2a3689dbaad1a452441f8dd77d319f833e97e" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.374471 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66d8-account-create-hnrrl" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.391982 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ndz5q" event={"ID":"726b6f39-d584-463b-aa77-f5a9e99b778a","Type":"ContainerDied","Data":"d6f5f9f24a3edf5939b983e00928e047016d58e9f1bf45b31d6098630a38b255"} Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.392019 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6f5f9f24a3edf5939b983e00928e047016d58e9f1bf45b31d6098630a38b255" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.392001 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-ndz5q" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.398235 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gw4hh" event={"ID":"b222104a-d8e3-440b-bcfe-05976686b3dc","Type":"ContainerDied","Data":"a12c1fa3b6d68d1229803e6a5a6dcaf1d036a36bf2fff2f331914af86ecc9c09"} Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.398275 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a12c1fa3b6d68d1229803e6a5a6dcaf1d036a36bf2fff2f331914af86ecc9c09" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.398326 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gw4hh" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.402430 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d431-account-create-q5hbw" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.402500 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d431-account-create-q5hbw" event={"ID":"3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2","Type":"ContainerDied","Data":"ae2e2050f3f887ba367b34342a57e7deb4a7ed20b402c22b824a77eb9d8b8a80"} Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.402792 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae2e2050f3f887ba367b34342a57e7deb4a7ed20b402c22b824a77eb9d8b8a80" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.406675 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-894c-account-create-jgvz7" event={"ID":"b3ab5244-5c8e-4699-bc66-7e74b8875520","Type":"ContainerDied","Data":"9873d2fc75a5944e68f5566c68effcf393c054c7366ea668f96ce2048a01a432"} Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.406719 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9873d2fc75a5944e68f5566c68effcf393c054c7366ea668f96ce2048a01a432" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.406806 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-894c-account-create-jgvz7" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.409496 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qth84" event={"ID":"86c689e1-6896-490f-a5ad-ab34ffdd5b4d","Type":"ContainerDied","Data":"d06b348ba3aed41261f46e0e1d5cbc5f7c52355d311d56288821cc3e5da6bd0f"} Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.409536 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d06b348ba3aed41261f46e0e1d5cbc5f7c52355d311d56288821cc3e5da6bd0f" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.409617 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qth84" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.722315 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.822526 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9x2k\" (UniqueName: \"kubernetes.io/projected/53634031-b308-4f04-958f-807f7872a544-kube-api-access-t9x2k\") pod \"53634031-b308-4f04-958f-807f7872a544\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.822614 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/53634031-b308-4f04-958f-807f7872a544-scripts\") pod \"53634031-b308-4f04-958f-807f7872a544\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.822697 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/53634031-b308-4f04-958f-807f7872a544-additional-scripts\") pod \"53634031-b308-4f04-958f-807f7872a544\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.822771 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/53634031-b308-4f04-958f-807f7872a544-var-run\") pod \"53634031-b308-4f04-958f-807f7872a544\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.822809 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/53634031-b308-4f04-958f-807f7872a544-var-log-ovn\") pod \"53634031-b308-4f04-958f-807f7872a544\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.822897 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/53634031-b308-4f04-958f-807f7872a544-var-run-ovn\") pod \"53634031-b308-4f04-958f-807f7872a544\" (UID: \"53634031-b308-4f04-958f-807f7872a544\") " Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.823254 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53634031-b308-4f04-958f-807f7872a544-var-run" (OuterVolumeSpecName: "var-run") pod "53634031-b308-4f04-958f-807f7872a544" (UID: "53634031-b308-4f04-958f-807f7872a544"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.823401 4972 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/53634031-b308-4f04-958f-807f7872a544-var-run\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.823548 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53634031-b308-4f04-958f-807f7872a544-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "53634031-b308-4f04-958f-807f7872a544" (UID: "53634031-b308-4f04-958f-807f7872a544"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.824655 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53634031-b308-4f04-958f-807f7872a544-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "53634031-b308-4f04-958f-807f7872a544" (UID: "53634031-b308-4f04-958f-807f7872a544"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.824996 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53634031-b308-4f04-958f-807f7872a544-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "53634031-b308-4f04-958f-807f7872a544" (UID: "53634031-b308-4f04-958f-807f7872a544"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.825311 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53634031-b308-4f04-958f-807f7872a544-scripts" (OuterVolumeSpecName: "scripts") pod "53634031-b308-4f04-958f-807f7872a544" (UID: "53634031-b308-4f04-958f-807f7872a544"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.834685 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53634031-b308-4f04-958f-807f7872a544-kube-api-access-t9x2k" (OuterVolumeSpecName: "kube-api-access-t9x2k") pod "53634031-b308-4f04-958f-807f7872a544" (UID: "53634031-b308-4f04-958f-807f7872a544"). InnerVolumeSpecName "kube-api-access-t9x2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.925782 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9x2k\" (UniqueName: \"kubernetes.io/projected/53634031-b308-4f04-958f-807f7872a544-kube-api-access-t9x2k\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.925855 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/53634031-b308-4f04-958f-807f7872a544-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.925875 4972 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/53634031-b308-4f04-958f-807f7872a544-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.925892 4972 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/53634031-b308-4f04-958f-807f7872a544-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:20 crc kubenswrapper[4972]: I1121 10:03:20.925910 4972 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/53634031-b308-4f04-958f-807f7872a544-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.419236 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5q7hj-config-gnxdq" Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.419225 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5q7hj-config-gnxdq" event={"ID":"53634031-b308-4f04-958f-807f7872a544","Type":"ContainerDied","Data":"5e869d9746d3da463d4e610cfa8818ab7ba1a63bc11abf2e819a0d508f61a2a6"} Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.419388 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e869d9746d3da463d4e610cfa8818ab7ba1a63bc11abf2e819a0d508f61a2a6" Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.420677 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zr6wn" event={"ID":"795431b0-73d4-4c09-95ec-59c039a001d4","Type":"ContainerStarted","Data":"83552defc6eb86a6d2f1be27f0156209bcf898e7697a7b3a1905020948794f66"} Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.442770 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-zr6wn" podStartSLOduration=2.182591749 podStartE2EDuration="31.442752498s" podCreationTimestamp="2025-11-21 10:02:50 +0000 UTC" firstStartedPulling="2025-11-21 10:02:50.9835901 +0000 UTC m=+1316.092732598" lastFinishedPulling="2025-11-21 10:03:20.243750849 +0000 UTC m=+1345.352893347" observedRunningTime="2025-11-21 10:03:21.439160222 +0000 UTC m=+1346.548302730" watchObservedRunningTime="2025-11-21 10:03:21.442752498 +0000 UTC m=+1346.551894996" Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.541318 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:03:21 crc kubenswrapper[4972]: E1121 10:03:21.541792 4972 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 10:03:21 crc kubenswrapper[4972]: E1121 10:03:21.541812 4972 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 10:03:21 crc kubenswrapper[4972]: E1121 10:03:21.541881 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift podName:31e140ab-a53a-4af2-864f-4c399d44f217 nodeName:}" failed. No retries permitted until 2025-11-21 10:03:53.54186423 +0000 UTC m=+1378.651006728 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift") pod "swift-storage-0" (UID: "31e140ab-a53a-4af2-864f-4c399d44f217") : configmap "swift-ring-files" not found Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.824196 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5q7hj-config-gnxdq"] Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.831084 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-5q7hj-config-gnxdq"] Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.923119 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5q7hj-config-pdnxn"] Nov 21 10:03:21 crc kubenswrapper[4972]: E1121 10:03:21.923425 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53634031-b308-4f04-958f-807f7872a544" containerName="ovn-config" Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.923441 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="53634031-b308-4f04-958f-807f7872a544" containerName="ovn-config" Nov 21 10:03:21 crc kubenswrapper[4972]: E1121 10:03:21.923454 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b222104a-d8e3-440b-bcfe-05976686b3dc" containerName="mariadb-database-create" Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.923460 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b222104a-d8e3-440b-bcfe-05976686b3dc" containerName="mariadb-database-create" Nov 21 10:03:21 crc kubenswrapper[4972]: E1121 10:03:21.923478 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2" containerName="mariadb-account-create" Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.923484 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2" containerName="mariadb-account-create" Nov 21 10:03:21 crc kubenswrapper[4972]: E1121 10:03:21.923497 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ab5244-5c8e-4699-bc66-7e74b8875520" containerName="mariadb-account-create" Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.923503 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ab5244-5c8e-4699-bc66-7e74b8875520" containerName="mariadb-account-create" Nov 21 10:03:21 crc kubenswrapper[4972]: E1121 10:03:21.923518 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5828f96a-2e2a-416f-b07e-584b5571b87d" containerName="mariadb-account-create" Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.923524 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="5828f96a-2e2a-416f-b07e-584b5571b87d" containerName="mariadb-account-create" Nov 21 10:03:21 crc kubenswrapper[4972]: E1121 10:03:21.923536 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726b6f39-d584-463b-aa77-f5a9e99b778a" containerName="mariadb-database-create" Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.923542 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="726b6f39-d584-463b-aa77-f5a9e99b778a" containerName="mariadb-database-create" Nov 21 10:03:21 crc kubenswrapper[4972]: E1121 10:03:21.923553 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86c689e1-6896-490f-a5ad-ab34ffdd5b4d" containerName="mariadb-database-create" Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.923559 4972 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="86c689e1-6896-490f-a5ad-ab34ffdd5b4d" containerName="mariadb-database-create" Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.923709 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ab5244-5c8e-4699-bc66-7e74b8875520" containerName="mariadb-account-create" Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.923737 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="726b6f39-d584-463b-aa77-f5a9e99b778a" containerName="mariadb-database-create" Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.923748 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="86c689e1-6896-490f-a5ad-ab34ffdd5b4d" containerName="mariadb-database-create" Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.923764 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="5828f96a-2e2a-416f-b07e-584b5571b87d" containerName="mariadb-account-create" Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.923775 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2" containerName="mariadb-account-create" Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.923792 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="53634031-b308-4f04-958f-807f7872a544" containerName="ovn-config" Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.923810 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="b222104a-d8e3-440b-bcfe-05976686b3dc" containerName="mariadb-database-create" Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.924291 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.926342 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 21 10:03:21 crc kubenswrapper[4972]: I1121 10:03:21.938362 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5q7hj-config-pdnxn"] Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.048459 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7rnd\" (UniqueName: \"kubernetes.io/projected/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-kube-api-access-h7rnd\") pod \"ovn-controller-5q7hj-config-pdnxn\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.048501 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-var-log-ovn\") pod \"ovn-controller-5q7hj-config-pdnxn\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.048527 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-var-run\") pod \"ovn-controller-5q7hj-config-pdnxn\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.048560 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-var-run-ovn\") pod \"ovn-controller-5q7hj-config-pdnxn\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.048938 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-additional-scripts\") pod \"ovn-controller-5q7hj-config-pdnxn\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.049039 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-scripts\") pod \"ovn-controller-5q7hj-config-pdnxn\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.115086 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-g84kj"] Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.116340 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-g84kj" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.120729 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.121895 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-56jfv" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.126011 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-g84kj"] Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.150805 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-scripts\") pod \"ovn-controller-5q7hj-config-pdnxn\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.150895 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7rnd\" (UniqueName: \"kubernetes.io/projected/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-kube-api-access-h7rnd\") pod \"ovn-controller-5q7hj-config-pdnxn\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.150928 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-var-log-ovn\") pod \"ovn-controller-5q7hj-config-pdnxn\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.150958 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-var-run\") pod \"ovn-controller-5q7hj-config-pdnxn\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.150991 4972 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-var-run-ovn\") pod \"ovn-controller-5q7hj-config-pdnxn\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.151129 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-additional-scripts\") pod \"ovn-controller-5q7hj-config-pdnxn\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.151819 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-var-run\") pod \"ovn-controller-5q7hj-config-pdnxn\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.151819 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-var-run-ovn\") pod \"ovn-controller-5q7hj-config-pdnxn\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.152017 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-additional-scripts\") pod \"ovn-controller-5q7hj-config-pdnxn\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.152214 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-var-log-ovn\") pod \"ovn-controller-5q7hj-config-pdnxn\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.153478 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-scripts\") pod \"ovn-controller-5q7hj-config-pdnxn\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.176934 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7rnd\" (UniqueName: \"kubernetes.io/projected/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-kube-api-access-h7rnd\") pod \"ovn-controller-5q7hj-config-pdnxn\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.246227 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.252120 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9513939-1a73-46a3-a946-db9b1008314f-db-sync-config-data\") pod \"glance-db-sync-g84kj\" (UID: \"f9513939-1a73-46a3-a946-db9b1008314f\") " pod="openstack/glance-db-sync-g84kj" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.252323 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9513939-1a73-46a3-a946-db9b1008314f-combined-ca-bundle\") pod \"glance-db-sync-g84kj\" (UID: \"f9513939-1a73-46a3-a946-db9b1008314f\") " pod="openstack/glance-db-sync-g84kj" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.252496 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9j8v\" (UniqueName: \"kubernetes.io/projected/f9513939-1a73-46a3-a946-db9b1008314f-kube-api-access-k9j8v\") pod \"glance-db-sync-g84kj\" (UID: \"f9513939-1a73-46a3-a946-db9b1008314f\") " pod="openstack/glance-db-sync-g84kj" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.252618 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9513939-1a73-46a3-a946-db9b1008314f-config-data\") pod \"glance-db-sync-g84kj\" (UID: \"f9513939-1a73-46a3-a946-db9b1008314f\") " pod="openstack/glance-db-sync-g84kj" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.354261 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9513939-1a73-46a3-a946-db9b1008314f-db-sync-config-data\") pod \"glance-db-sync-g84kj\" (UID: \"f9513939-1a73-46a3-a946-db9b1008314f\") " pod="openstack/glance-db-sync-g84kj" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.354506 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9513939-1a73-46a3-a946-db9b1008314f-combined-ca-bundle\") pod \"glance-db-sync-g84kj\" (UID: \"f9513939-1a73-46a3-a946-db9b1008314f\") " pod="openstack/glance-db-sync-g84kj" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.354575 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9j8v\" (UniqueName: \"kubernetes.io/projected/f9513939-1a73-46a3-a946-db9b1008314f-kube-api-access-k9j8v\") pod \"glance-db-sync-g84kj\" (UID: \"f9513939-1a73-46a3-a946-db9b1008314f\") " pod="openstack/glance-db-sync-g84kj" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.354626 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9513939-1a73-46a3-a946-db9b1008314f-config-data\") pod \"glance-db-sync-g84kj\" (UID: \"f9513939-1a73-46a3-a946-db9b1008314f\") " pod="openstack/glance-db-sync-g84kj" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.364684 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9513939-1a73-46a3-a946-db9b1008314f-config-data\") pod \"glance-db-sync-g84kj\" (UID: \"f9513939-1a73-46a3-a946-db9b1008314f\") " pod="openstack/glance-db-sync-g84kj" Nov 21 10:03:22 crc 
kubenswrapper[4972]: I1121 10:03:22.369858 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9513939-1a73-46a3-a946-db9b1008314f-db-sync-config-data\") pod \"glance-db-sync-g84kj\" (UID: \"f9513939-1a73-46a3-a946-db9b1008314f\") " pod="openstack/glance-db-sync-g84kj" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.372350 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9513939-1a73-46a3-a946-db9b1008314f-combined-ca-bundle\") pod \"glance-db-sync-g84kj\" (UID: \"f9513939-1a73-46a3-a946-db9b1008314f\") " pod="openstack/glance-db-sync-g84kj" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.376175 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9j8v\" (UniqueName: \"kubernetes.io/projected/f9513939-1a73-46a3-a946-db9b1008314f-kube-api-access-k9j8v\") pod \"glance-db-sync-g84kj\" (UID: \"f9513939-1a73-46a3-a946-db9b1008314f\") " pod="openstack/glance-db-sync-g84kj" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.434381 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-g84kj" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.454837 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cf3edebd-74ab-4b7d-8706-2eda69d91aea","Type":"ContainerStarted","Data":"2414b220c5f009ec8c602f60f3e9160067fa81228e1aef74c65b742822eda70e"} Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.455796 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.482094 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.346447119 podStartE2EDuration="32.482052532s" podCreationTimestamp="2025-11-21 10:02:50 +0000 UTC" firstStartedPulling="2025-11-21 10:02:51.210444667 +0000 UTC m=+1316.319587165" lastFinishedPulling="2025-11-21 10:03:21.34605008 +0000 UTC m=+1346.455192578" observedRunningTime="2025-11-21 10:03:22.474604133 +0000 UTC m=+1347.583746651" watchObservedRunningTime="2025-11-21 10:03:22.482052532 +0000 UTC m=+1347.591195030" Nov 21 10:03:22 crc kubenswrapper[4972]: I1121 10:03:22.483990 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-5q7hj" Nov 21 10:03:23 crc kubenswrapper[4972]: I1121 10:03:22.719677 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5q7hj-config-pdnxn"] Nov 21 10:03:23 crc kubenswrapper[4972]: I1121 10:03:22.819158 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-g84kj"] Nov 21 10:03:23 crc kubenswrapper[4972]: I1121 10:03:23.473256 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-g84kj" event={"ID":"f9513939-1a73-46a3-a946-db9b1008314f","Type":"ContainerStarted","Data":"77df3e1714bd52ece3dd29b5953f4b06a864d04998a3188d3ab6b54a1f15df93"} Nov 21 10:03:23 crc kubenswrapper[4972]: I1121 10:03:23.477658 4972 generic.go:334] "Generic (PLEG): container finished" podID="a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3" containerID="71dca4990d9c5583e8c8a7fe6073ef33d3ed04c9a4b5ebf3eb0c8b7f8769c385" exitCode=0 Nov 21 10:03:23 crc kubenswrapper[4972]: I1121 10:03:23.477817 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-5q7hj-config-pdnxn" event={"ID":"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3","Type":"ContainerDied","Data":"71dca4990d9c5583e8c8a7fe6073ef33d3ed04c9a4b5ebf3eb0c8b7f8769c385"} Nov 21 10:03:23 crc kubenswrapper[4972]: I1121 10:03:23.477857 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5q7hj-config-pdnxn" event={"ID":"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3","Type":"ContainerStarted","Data":"40b06a64c169d508d9a3e39b348d20c280730cb1b49e034d7e81dadc18720fc5"} Nov 21 10:03:23 crc kubenswrapper[4972]: I1121 10:03:23.771395 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53634031-b308-4f04-958f-807f7872a544" path="/var/lib/kubelet/pods/53634031-b308-4f04-958f-807f7872a544/volumes" Nov 21 10:03:23 crc kubenswrapper[4972]: I1121 10:03:23.838091 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.216007 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-52nn8"] Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.217721 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-52nn8" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.223259 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-04d8-account-create-jcdpc"] Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.227263 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-04d8-account-create-jcdpc" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.228561 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.239855 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-52nn8"] Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.251377 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04d8-account-create-jcdpc"] Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.305749 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae72a044-8eb0-450f-84e0-d98165e44377-operator-scripts\") pod \"cinder-db-create-52nn8\" (UID: \"ae72a044-8eb0-450f-84e0-d98165e44377\") " pod="openstack/cinder-db-create-52nn8" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.305803 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb5ct\" (UniqueName: \"kubernetes.io/projected/ae72a044-8eb0-450f-84e0-d98165e44377-kube-api-access-rb5ct\") pod \"cinder-db-create-52nn8\" (UID: \"ae72a044-8eb0-450f-84e0-d98165e44377\") " pod="openstack/cinder-db-create-52nn8" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.305865 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9bb92cc-2905-4800-a31b-1b4fd0e35af3-operator-scripts\") pod \"cinder-04d8-account-create-jcdpc\" (UID: \"f9bb92cc-2905-4800-a31b-1b4fd0e35af3\") " pod="openstack/cinder-04d8-account-create-jcdpc" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.305919 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9d65\" (UniqueName: 
\"kubernetes.io/projected/f9bb92cc-2905-4800-a31b-1b4fd0e35af3-kube-api-access-s9d65\") pod \"cinder-04d8-account-create-jcdpc\" (UID: \"f9bb92cc-2905-4800-a31b-1b4fd0e35af3\") " pod="openstack/cinder-04d8-account-create-jcdpc" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.334243 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-98ww4"] Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.335696 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-98ww4" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.344746 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-958e-account-create-b6xqx"] Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.345780 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-958e-account-create-b6xqx" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.347610 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.356130 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-98ww4"] Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.364888 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-958e-account-create-b6xqx"] Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.407844 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae72a044-8eb0-450f-84e0-d98165e44377-operator-scripts\") pod \"cinder-db-create-52nn8\" (UID: \"ae72a044-8eb0-450f-84e0-d98165e44377\") " pod="openstack/cinder-db-create-52nn8" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.407881 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb5ct\" (UniqueName: \"kubernetes.io/projected/ae72a044-8eb0-450f-84e0-d98165e44377-kube-api-access-rb5ct\") pod \"cinder-db-create-52nn8\" (UID: \"ae72a044-8eb0-450f-84e0-d98165e44377\") " pod="openstack/cinder-db-create-52nn8" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.407911 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9bb92cc-2905-4800-a31b-1b4fd0e35af3-operator-scripts\") pod \"cinder-04d8-account-create-jcdpc\" (UID: \"f9bb92cc-2905-4800-a31b-1b4fd0e35af3\") " pod="openstack/cinder-04d8-account-create-jcdpc" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.407950 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9d65\" (UniqueName: \"kubernetes.io/projected/f9bb92cc-2905-4800-a31b-1b4fd0e35af3-kube-api-access-s9d65\") pod \"cinder-04d8-account-create-jcdpc\" (UID: \"f9bb92cc-2905-4800-a31b-1b4fd0e35af3\") " pod="openstack/cinder-04d8-account-create-jcdpc" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.408025 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d36cc95f-a1d2-4425-8928-586f8fda4eb8-operator-scripts\") pod \"barbican-958e-account-create-b6xqx\" (UID: \"d36cc95f-a1d2-4425-8928-586f8fda4eb8\") " pod="openstack/barbican-958e-account-create-b6xqx" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.408049 4972 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed-operator-scripts\") pod \"barbican-db-create-98ww4\" (UID: \"67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed\") " pod="openstack/barbican-db-create-98ww4" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.408069 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqhrp\" (UniqueName: \"kubernetes.io/projected/d36cc95f-a1d2-4425-8928-586f8fda4eb8-kube-api-access-lqhrp\") pod \"barbican-958e-account-create-b6xqx\" (UID: \"d36cc95f-a1d2-4425-8928-586f8fda4eb8\") " pod="openstack/barbican-958e-account-create-b6xqx" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.408098 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktcwf\" (UniqueName: \"kubernetes.io/projected/67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed-kube-api-access-ktcwf\") pod \"barbican-db-create-98ww4\" (UID: \"67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed\") " pod="openstack/barbican-db-create-98ww4" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.408749 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae72a044-8eb0-450f-84e0-d98165e44377-operator-scripts\") pod \"cinder-db-create-52nn8\" (UID: \"ae72a044-8eb0-450f-84e0-d98165e44377\") " pod="openstack/cinder-db-create-52nn8" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.409433 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9bb92cc-2905-4800-a31b-1b4fd0e35af3-operator-scripts\") pod \"cinder-04d8-account-create-jcdpc\" (UID: \"f9bb92cc-2905-4800-a31b-1b4fd0e35af3\") " pod="openstack/cinder-04d8-account-create-jcdpc" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.434203 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-92cbp"] Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.435205 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-92cbp" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.437328 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb5ct\" (UniqueName: \"kubernetes.io/projected/ae72a044-8eb0-450f-84e0-d98165e44377-kube-api-access-rb5ct\") pod \"cinder-db-create-52nn8\" (UID: \"ae72a044-8eb0-450f-84e0-d98165e44377\") " pod="openstack/cinder-db-create-52nn8" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.437364 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9d65\" (UniqueName: \"kubernetes.io/projected/f9bb92cc-2905-4800-a31b-1b4fd0e35af3-kube-api-access-s9d65\") pod \"cinder-04d8-account-create-jcdpc\" (UID: \"f9bb92cc-2905-4800-a31b-1b4fd0e35af3\") " pod="openstack/cinder-04d8-account-create-jcdpc" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.451797 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-92cbp"] Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.499124 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-twfsm"] Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.500278 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-twfsm" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.510761 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d36cc95f-a1d2-4425-8928-586f8fda4eb8-operator-scripts\") pod \"barbican-958e-account-create-b6xqx\" (UID: \"d36cc95f-a1d2-4425-8928-586f8fda4eb8\") " pod="openstack/barbican-958e-account-create-b6xqx" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.510811 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkp8s\" (UniqueName: \"kubernetes.io/projected/55bbc0f1-0876-4076-8a24-7e275bda295e-kube-api-access-pkp8s\") pod \"neutron-db-create-92cbp\" (UID: \"55bbc0f1-0876-4076-8a24-7e275bda295e\") " pod="openstack/neutron-db-create-92cbp" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.510900 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed-operator-scripts\") pod \"barbican-db-create-98ww4\" (UID: \"67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed\") " pod="openstack/barbican-db-create-98ww4" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.510925 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqhrp\" (UniqueName: \"kubernetes.io/projected/d36cc95f-a1d2-4425-8928-586f8fda4eb8-kube-api-access-lqhrp\") pod \"barbican-958e-account-create-b6xqx\" (UID: \"d36cc95f-a1d2-4425-8928-586f8fda4eb8\") " pod="openstack/barbican-958e-account-create-b6xqx" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.510953 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktcwf\" (UniqueName: \"kubernetes.io/projected/67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed-kube-api-access-ktcwf\") pod \"barbican-db-create-98ww4\" (UID: \"67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed\") " pod="openstack/barbican-db-create-98ww4" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.510986 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55bbc0f1-0876-4076-8a24-7e275bda295e-operator-scripts\") pod \"neutron-db-create-92cbp\" (UID: \"55bbc0f1-0876-4076-8a24-7e275bda295e\") " pod="openstack/neutron-db-create-92cbp" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.511586 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d36cc95f-a1d2-4425-8928-586f8fda4eb8-operator-scripts\") pod \"barbican-958e-account-create-b6xqx\" (UID: \"d36cc95f-a1d2-4425-8928-586f8fda4eb8\") " pod="openstack/barbican-958e-account-create-b6xqx" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.511609 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed-operator-scripts\") pod \"barbican-db-create-98ww4\" (UID: \"67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed\") " pod="openstack/barbican-db-create-98ww4" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.513412 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.513525 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-db-sync-twfsm"] Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.513548 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.513585 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.513640 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-485dm" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.543001 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-52nn8" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.548869 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dbea-account-create-5qfpb"] Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.549929 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dbea-account-create-5qfpb" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.552918 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.555262 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqhrp\" (UniqueName: \"kubernetes.io/projected/d36cc95f-a1d2-4425-8928-586f8fda4eb8-kube-api-access-lqhrp\") pod \"barbican-958e-account-create-b6xqx\" (UID: \"d36cc95f-a1d2-4425-8928-586f8fda4eb8\") " pod="openstack/barbican-958e-account-create-b6xqx" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.555994 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-04d8-account-create-jcdpc" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.564283 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dbea-account-create-5qfpb"] Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.567184 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktcwf\" (UniqueName: \"kubernetes.io/projected/67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed-kube-api-access-ktcwf\") pod \"barbican-db-create-98ww4\" (UID: \"67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed\") " pod="openstack/barbican-db-create-98ww4" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.612172 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s8qf\" (UniqueName: \"kubernetes.io/projected/ead4d696-8e60-4d06-8db7-09b9f550b11f-kube-api-access-6s8qf\") pod \"neutron-dbea-account-create-5qfpb\" (UID: \"ead4d696-8e60-4d06-8db7-09b9f550b11f\") " pod="openstack/neutron-dbea-account-create-5qfpb" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.612334 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c86765f-e125-43ac-83ba-99d506750ed5-combined-ca-bundle\") pod \"keystone-db-sync-twfsm\" (UID: \"3c86765f-e125-43ac-83ba-99d506750ed5\") " pod="openstack/keystone-db-sync-twfsm" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.612387 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkp8s\" (UniqueName: \"kubernetes.io/projected/55bbc0f1-0876-4076-8a24-7e275bda295e-kube-api-access-pkp8s\") pod \"neutron-db-create-92cbp\" (UID: 
\"55bbc0f1-0876-4076-8a24-7e275bda295e\") " pod="openstack/neutron-db-create-92cbp" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.612418 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ead4d696-8e60-4d06-8db7-09b9f550b11f-operator-scripts\") pod \"neutron-dbea-account-create-5qfpb\" (UID: \"ead4d696-8e60-4d06-8db7-09b9f550b11f\") " pod="openstack/neutron-dbea-account-create-5qfpb" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.612467 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c86765f-e125-43ac-83ba-99d506750ed5-config-data\") pod \"keystone-db-sync-twfsm\" (UID: \"3c86765f-e125-43ac-83ba-99d506750ed5\") " pod="openstack/keystone-db-sync-twfsm" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.612500 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzqj6\" (UniqueName: \"kubernetes.io/projected/3c86765f-e125-43ac-83ba-99d506750ed5-kube-api-access-rzqj6\") pod \"keystone-db-sync-twfsm\" (UID: \"3c86765f-e125-43ac-83ba-99d506750ed5\") " pod="openstack/keystone-db-sync-twfsm" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.612553 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55bbc0f1-0876-4076-8a24-7e275bda295e-operator-scripts\") pod \"neutron-db-create-92cbp\" (UID: \"55bbc0f1-0876-4076-8a24-7e275bda295e\") " pod="openstack/neutron-db-create-92cbp" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.613731 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55bbc0f1-0876-4076-8a24-7e275bda295e-operator-scripts\") pod \"neutron-db-create-92cbp\" (UID: \"55bbc0f1-0876-4076-8a24-7e275bda295e\") " pod="openstack/neutron-db-create-92cbp" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.634364 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkp8s\" (UniqueName: \"kubernetes.io/projected/55bbc0f1-0876-4076-8a24-7e275bda295e-kube-api-access-pkp8s\") pod \"neutron-db-create-92cbp\" (UID: \"55bbc0f1-0876-4076-8a24-7e275bda295e\") " pod="openstack/neutron-db-create-92cbp" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.655884 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-98ww4" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.663387 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-958e-account-create-b6xqx" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.688764 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-92cbp" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.714794 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ead4d696-8e60-4d06-8db7-09b9f550b11f-operator-scripts\") pod \"neutron-dbea-account-create-5qfpb\" (UID: \"ead4d696-8e60-4d06-8db7-09b9f550b11f\") " pod="openstack/neutron-dbea-account-create-5qfpb" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.715153 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c86765f-e125-43ac-83ba-99d506750ed5-config-data\") pod \"keystone-db-sync-twfsm\" (UID: \"3c86765f-e125-43ac-83ba-99d506750ed5\") " pod="openstack/keystone-db-sync-twfsm" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.715265 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzqj6\" (UniqueName: \"kubernetes.io/projected/3c86765f-e125-43ac-83ba-99d506750ed5-kube-api-access-rzqj6\") pod \"keystone-db-sync-twfsm\" (UID: \"3c86765f-e125-43ac-83ba-99d506750ed5\") " pod="openstack/keystone-db-sync-twfsm" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.715407 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s8qf\" (UniqueName: \"kubernetes.io/projected/ead4d696-8e60-4d06-8db7-09b9f550b11f-kube-api-access-6s8qf\") pod \"neutron-dbea-account-create-5qfpb\" (UID: \"ead4d696-8e60-4d06-8db7-09b9f550b11f\") " pod="openstack/neutron-dbea-account-create-5qfpb" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.715724 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ead4d696-8e60-4d06-8db7-09b9f550b11f-operator-scripts\") pod \"neutron-dbea-account-create-5qfpb\" (UID: \"ead4d696-8e60-4d06-8db7-09b9f550b11f\") " pod="openstack/neutron-dbea-account-create-5qfpb" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.715999 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c86765f-e125-43ac-83ba-99d506750ed5-combined-ca-bundle\") pod \"keystone-db-sync-twfsm\" (UID: \"3c86765f-e125-43ac-83ba-99d506750ed5\") " pod="openstack/keystone-db-sync-twfsm" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.721721 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c86765f-e125-43ac-83ba-99d506750ed5-config-data\") pod \"keystone-db-sync-twfsm\" (UID: \"3c86765f-e125-43ac-83ba-99d506750ed5\") " pod="openstack/keystone-db-sync-twfsm" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.722364 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c86765f-e125-43ac-83ba-99d506750ed5-combined-ca-bundle\") pod \"keystone-db-sync-twfsm\" (UID: \"3c86765f-e125-43ac-83ba-99d506750ed5\") " pod="openstack/keystone-db-sync-twfsm" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.737914 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s8qf\" (UniqueName: \"kubernetes.io/projected/ead4d696-8e60-4d06-8db7-09b9f550b11f-kube-api-access-6s8qf\") pod \"neutron-dbea-account-create-5qfpb\" (UID: \"ead4d696-8e60-4d06-8db7-09b9f550b11f\") " 
pod="openstack/neutron-dbea-account-create-5qfpb" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.739923 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzqj6\" (UniqueName: \"kubernetes.io/projected/3c86765f-e125-43ac-83ba-99d506750ed5-kube-api-access-rzqj6\") pod \"keystone-db-sync-twfsm\" (UID: \"3c86765f-e125-43ac-83ba-99d506750ed5\") " pod="openstack/keystone-db-sync-twfsm" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.819598 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.919055 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7rnd\" (UniqueName: \"kubernetes.io/projected/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-kube-api-access-h7rnd\") pod \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.919495 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-var-run\") pod \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.919529 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-var-log-ovn\") pod \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.919626 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-additional-scripts\") pod \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.919671 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-var-run-ovn\") pod \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.919706 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-scripts\") pod \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\" (UID: \"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3\") " Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.919757 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3" (UID: "a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.920391 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-var-run" (OuterVolumeSpecName: "var-run") pod "a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3" (UID: "a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.920635 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3" (UID: "a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.920922 4972 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-var-run\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.921120 4972 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.921189 4972 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.921741 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3" (UID: "a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.924326 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-kube-api-access-h7rnd" (OuterVolumeSpecName: "kube-api-access-h7rnd") pod "a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3" (UID: "a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3"). InnerVolumeSpecName "kube-api-access-h7rnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:03:24 crc kubenswrapper[4972]: I1121 10:03:24.926383 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-scripts" (OuterVolumeSpecName: "scripts") pod "a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3" (UID: "a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.018256 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-twfsm" Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.023211 4972 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.023245 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.023253 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7rnd\" (UniqueName: \"kubernetes.io/projected/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3-kube-api-access-h7rnd\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.038298 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dbea-account-create-5qfpb" Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.155225 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-52nn8"] Nov 21 10:03:25 crc kubenswrapper[4972]: W1121 10:03:25.171481 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae72a044_8eb0_450f_84e0_d98165e44377.slice/crio-b97e5f6feceacfbbb0b46ef438b4af9c8c00a6a5202e1c6cc7a8ad03f9925e68 WatchSource:0}: Error finding container b97e5f6feceacfbbb0b46ef438b4af9c8c00a6a5202e1c6cc7a8ad03f9925e68: Status 404 returned error can't find the container with id b97e5f6feceacfbbb0b46ef438b4af9c8c00a6a5202e1c6cc7a8ad03f9925e68 Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.279160 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-958e-account-create-b6xqx"] Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.313645 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04d8-account-create-jcdpc"] Nov 21 10:03:25 crc kubenswrapper[4972]: W1121 10:03:25.367940 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9bb92cc_2905_4800_a31b_1b4fd0e35af3.slice/crio-1f6946524a02460850f6d99245e225e01b7a5e53a184e073e4a27739dd71ca4a WatchSource:0}: Error finding container 1f6946524a02460850f6d99245e225e01b7a5e53a184e073e4a27739dd71ca4a: Status 404 returned error can't find the container with id 1f6946524a02460850f6d99245e225e01b7a5e53a184e073e4a27739dd71ca4a Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.403741 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-98ww4"] Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.410722 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-92cbp"] Nov 21 10:03:25 crc kubenswrapper[4972]: W1121 10:03:25.422669 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55bbc0f1_0876_4076_8a24_7e275bda295e.slice/crio-91e93a93e8783d347044fa5ebbae1ff83e2c8a71e37e332b31a53c5644a8c465 WatchSource:0}: Error finding container 91e93a93e8783d347044fa5ebbae1ff83e2c8a71e37e332b31a53c5644a8c465: Status 404 returned error can't find the container with id 91e93a93e8783d347044fa5ebbae1ff83e2c8a71e37e332b31a53c5644a8c465 Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 
10:03:25.500115 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-958e-account-create-b6xqx" event={"ID":"d36cc95f-a1d2-4425-8928-586f8fda4eb8","Type":"ContainerStarted","Data":"60ac2afc50b007547b7e395ec30e5848ee49827bba33dc902527e63cee3c1c53"} Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.501962 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-98ww4" event={"ID":"67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed","Type":"ContainerStarted","Data":"acffd460b1cf0e3e35ecfaa9917a983ac1c835f3b88bad7b09795e8e8736278e"} Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.503977 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5q7hj-config-pdnxn" Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.504666 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5q7hj-config-pdnxn" event={"ID":"a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3","Type":"ContainerDied","Data":"40b06a64c169d508d9a3e39b348d20c280730cb1b49e034d7e81dadc18720fc5"} Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.504718 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40b06a64c169d508d9a3e39b348d20c280730cb1b49e034d7e81dadc18720fc5" Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.506114 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04d8-account-create-jcdpc" event={"ID":"f9bb92cc-2905-4800-a31b-1b4fd0e35af3","Type":"ContainerStarted","Data":"1f6946524a02460850f6d99245e225e01b7a5e53a184e073e4a27739dd71ca4a"} Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.507285 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-52nn8" event={"ID":"ae72a044-8eb0-450f-84e0-d98165e44377","Type":"ContainerStarted","Data":"ccd21252aef50de80ab417e8456d9fe1e4d96060e10f6fc8f8afed0fbd2131d1"} Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.507308 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-52nn8" event={"ID":"ae72a044-8eb0-450f-84e0-d98165e44377","Type":"ContainerStarted","Data":"b97e5f6feceacfbbb0b46ef438b4af9c8c00a6a5202e1c6cc7a8ad03f9925e68"} Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.512689 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-92cbp" event={"ID":"55bbc0f1-0876-4076-8a24-7e275bda295e","Type":"ContainerStarted","Data":"91e93a93e8783d347044fa5ebbae1ff83e2c8a71e37e332b31a53c5644a8c465"} Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.534256 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-52nn8" podStartSLOduration=1.534236519 podStartE2EDuration="1.534236519s" podCreationTimestamp="2025-11-21 10:03:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:03:25.527468698 +0000 UTC m=+1350.636611226" watchObservedRunningTime="2025-11-21 10:03:25.534236519 +0000 UTC m=+1350.643379017" Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.584574 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dbea-account-create-5qfpb"] Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.615103 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-twfsm"] Nov 21 10:03:25 crc kubenswrapper[4972]: W1121 10:03:25.630400 4972 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c86765f_e125_43ac_83ba_99d506750ed5.slice/crio-5b657898eb15a3193890444a779058d7cbad28fc47a39c665aff4a7f356642ec WatchSource:0}: Error finding container 5b657898eb15a3193890444a779058d7cbad28fc47a39c665aff4a7f356642ec: Status 404 returned error can't find the container with id 5b657898eb15a3193890444a779058d7cbad28fc47a39c665aff4a7f356642ec Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.919262 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5q7hj-config-pdnxn"] Nov 21 10:03:25 crc kubenswrapper[4972]: I1121 10:03:25.933510 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-5q7hj-config-pdnxn"] Nov 21 10:03:26 crc kubenswrapper[4972]: I1121 10:03:26.178482 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:03:26 crc kubenswrapper[4972]: I1121 10:03:26.178532 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:03:26 crc kubenswrapper[4972]: I1121 10:03:26.524230 4972 generic.go:334] "Generic (PLEG): container finished" podID="ae72a044-8eb0-450f-84e0-d98165e44377" containerID="ccd21252aef50de80ab417e8456d9fe1e4d96060e10f6fc8f8afed0fbd2131d1" exitCode=0 Nov 21 10:03:26 crc kubenswrapper[4972]: I1121 10:03:26.524283 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-52nn8" event={"ID":"ae72a044-8eb0-450f-84e0-d98165e44377","Type":"ContainerDied","Data":"ccd21252aef50de80ab417e8456d9fe1e4d96060e10f6fc8f8afed0fbd2131d1"} Nov 21 10:03:26 crc kubenswrapper[4972]: I1121 10:03:26.530294 4972 generic.go:334] "Generic (PLEG): container finished" podID="55bbc0f1-0876-4076-8a24-7e275bda295e" containerID="9689c03b0a8e65aa033cb5caf6a5738e37e8766032cb4c5608cf0d7247a3f626" exitCode=0 Nov 21 10:03:26 crc kubenswrapper[4972]: I1121 10:03:26.530417 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-92cbp" event={"ID":"55bbc0f1-0876-4076-8a24-7e275bda295e","Type":"ContainerDied","Data":"9689c03b0a8e65aa033cb5caf6a5738e37e8766032cb4c5608cf0d7247a3f626"} Nov 21 10:03:26 crc kubenswrapper[4972]: I1121 10:03:26.532958 4972 generic.go:334] "Generic (PLEG): container finished" podID="d36cc95f-a1d2-4425-8928-586f8fda4eb8" containerID="016624325af5539ff9d7e73defed3e0b610a2c3e4ff6a386d9970d3beb0ddb8a" exitCode=0 Nov 21 10:03:26 crc kubenswrapper[4972]: I1121 10:03:26.533024 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-958e-account-create-b6xqx" event={"ID":"d36cc95f-a1d2-4425-8928-586f8fda4eb8","Type":"ContainerDied","Data":"016624325af5539ff9d7e73defed3e0b610a2c3e4ff6a386d9970d3beb0ddb8a"} Nov 21 10:03:26 crc kubenswrapper[4972]: I1121 10:03:26.545661 4972 generic.go:334] "Generic (PLEG): container finished" podID="67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed" containerID="e0dff9e68692c505602b38482f5ad3c36d4b795e8864a675a304157763e3ed7e" exitCode=0 Nov 21 10:03:26 crc kubenswrapper[4972]: I1121 10:03:26.545840 4972 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-98ww4" event={"ID":"67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed","Type":"ContainerDied","Data":"e0dff9e68692c505602b38482f5ad3c36d4b795e8864a675a304157763e3ed7e"} Nov 21 10:03:26 crc kubenswrapper[4972]: I1121 10:03:26.547676 4972 generic.go:334] "Generic (PLEG): container finished" podID="f9bb92cc-2905-4800-a31b-1b4fd0e35af3" containerID="941818d891525d6c6ed7988263f09932e3cbafbbf488025ab0072f5debeb0701" exitCode=0 Nov 21 10:03:26 crc kubenswrapper[4972]: I1121 10:03:26.547729 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04d8-account-create-jcdpc" event={"ID":"f9bb92cc-2905-4800-a31b-1b4fd0e35af3","Type":"ContainerDied","Data":"941818d891525d6c6ed7988263f09932e3cbafbbf488025ab0072f5debeb0701"} Nov 21 10:03:26 crc kubenswrapper[4972]: I1121 10:03:26.556174 4972 generic.go:334] "Generic (PLEG): container finished" podID="ead4d696-8e60-4d06-8db7-09b9f550b11f" containerID="d9a7623dd3801db2be940faeb7090155ee653942661cca43e56c0af4e156fbfc" exitCode=0 Nov 21 10:03:26 crc kubenswrapper[4972]: I1121 10:03:26.556230 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dbea-account-create-5qfpb" event={"ID":"ead4d696-8e60-4d06-8db7-09b9f550b11f","Type":"ContainerDied","Data":"d9a7623dd3801db2be940faeb7090155ee653942661cca43e56c0af4e156fbfc"} Nov 21 10:03:26 crc kubenswrapper[4972]: I1121 10:03:26.556290 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dbea-account-create-5qfpb" event={"ID":"ead4d696-8e60-4d06-8db7-09b9f550b11f","Type":"ContainerStarted","Data":"9732fca84e8cd57eca31d7f8145b950cfe9ac1f68af1cc146cbf81812b0d2aea"} Nov 21 10:03:26 crc kubenswrapper[4972]: I1121 10:03:26.558054 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-twfsm" event={"ID":"3c86765f-e125-43ac-83ba-99d506750ed5","Type":"ContainerStarted","Data":"5b657898eb15a3193890444a779058d7cbad28fc47a39c665aff4a7f356642ec"} Nov 21 10:03:27 crc kubenswrapper[4972]: I1121 10:03:27.571230 4972 generic.go:334] "Generic (PLEG): container finished" podID="795431b0-73d4-4c09-95ec-59c039a001d4" containerID="83552defc6eb86a6d2f1be27f0156209bcf898e7697a7b3a1905020948794f66" exitCode=0 Nov 21 10:03:27 crc kubenswrapper[4972]: I1121 10:03:27.571313 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zr6wn" event={"ID":"795431b0-73d4-4c09-95ec-59c039a001d4","Type":"ContainerDied","Data":"83552defc6eb86a6d2f1be27f0156209bcf898e7697a7b3a1905020948794f66"} Nov 21 10:03:27 crc kubenswrapper[4972]: I1121 10:03:27.773339 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3" path="/var/lib/kubelet/pods/a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3/volumes" Nov 21 10:03:29 crc kubenswrapper[4972]: I1121 10:03:29.882944 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-92cbp" Nov 21 10:03:30 crc kubenswrapper[4972]: I1121 10:03:30.024986 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55bbc0f1-0876-4076-8a24-7e275bda295e-operator-scripts\") pod \"55bbc0f1-0876-4076-8a24-7e275bda295e\" (UID: \"55bbc0f1-0876-4076-8a24-7e275bda295e\") " Nov 21 10:03:30 crc kubenswrapper[4972]: I1121 10:03:30.025064 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkp8s\" (UniqueName: \"kubernetes.io/projected/55bbc0f1-0876-4076-8a24-7e275bda295e-kube-api-access-pkp8s\") pod \"55bbc0f1-0876-4076-8a24-7e275bda295e\" (UID: \"55bbc0f1-0876-4076-8a24-7e275bda295e\") " Nov 21 10:03:30 crc kubenswrapper[4972]: I1121 10:03:30.025895 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55bbc0f1-0876-4076-8a24-7e275bda295e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55bbc0f1-0876-4076-8a24-7e275bda295e" (UID: "55bbc0f1-0876-4076-8a24-7e275bda295e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:30 crc kubenswrapper[4972]: I1121 10:03:30.031384 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55bbc0f1-0876-4076-8a24-7e275bda295e-kube-api-access-pkp8s" (OuterVolumeSpecName: "kube-api-access-pkp8s") pod "55bbc0f1-0876-4076-8a24-7e275bda295e" (UID: "55bbc0f1-0876-4076-8a24-7e275bda295e"). InnerVolumeSpecName "kube-api-access-pkp8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:03:30 crc kubenswrapper[4972]: I1121 10:03:30.126986 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55bbc0f1-0876-4076-8a24-7e275bda295e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:30 crc kubenswrapper[4972]: I1121 10:03:30.127025 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkp8s\" (UniqueName: \"kubernetes.io/projected/55bbc0f1-0876-4076-8a24-7e275bda295e-kube-api-access-pkp8s\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:30 crc kubenswrapper[4972]: I1121 10:03:30.602393 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-92cbp" event={"ID":"55bbc0f1-0876-4076-8a24-7e275bda295e","Type":"ContainerDied","Data":"91e93a93e8783d347044fa5ebbae1ff83e2c8a71e37e332b31a53c5644a8c465"} Nov 21 10:03:30 crc kubenswrapper[4972]: I1121 10:03:30.602429 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91e93a93e8783d347044fa5ebbae1ff83e2c8a71e37e332b31a53c5644a8c465" Nov 21 10:03:30 crc kubenswrapper[4972]: I1121 10:03:30.602434 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-92cbp" Nov 21 10:03:33 crc kubenswrapper[4972]: I1121 10:03:33.922193 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:03:35 crc kubenswrapper[4972]: I1121 10:03:35.713199 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 21 10:03:43 crc kubenswrapper[4972]: E1121 10:03:43.666501 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:8c7ecaaf282fb3dd419c02a3e017d5f190e1e0831965f1ce366b9763700b4e4a" Nov 21 10:03:43 crc kubenswrapper[4972]: E1121 10:03:43.667311 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:8c7ecaaf282fb3dd419c02a3e017d5f190e1e0831965f1ce366b9763700b4e4a,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k9j8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-g84kj_openstack(f9513939-1a73-46a3-a946-db9b1008314f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 10:03:43 crc kubenswrapper[4972]: E1121 10:03:43.669283 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-g84kj" podUID="f9513939-1a73-46a3-a946-db9b1008314f" Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.717511 4972 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/barbican-958e-account-create-b6xqx" event={"ID":"d36cc95f-a1d2-4425-8928-586f8fda4eb8","Type":"ContainerDied","Data":"60ac2afc50b007547b7e395ec30e5848ee49827bba33dc902527e63cee3c1c53"} Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.717874 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60ac2afc50b007547b7e395ec30e5848ee49827bba33dc902527e63cee3c1c53" Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.719601 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-98ww4" event={"ID":"67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed","Type":"ContainerDied","Data":"acffd460b1cf0e3e35ecfaa9917a983ac1c835f3b88bad7b09795e8e8736278e"} Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.719624 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acffd460b1cf0e3e35ecfaa9917a983ac1c835f3b88bad7b09795e8e8736278e" Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.721718 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04d8-account-create-jcdpc" event={"ID":"f9bb92cc-2905-4800-a31b-1b4fd0e35af3","Type":"ContainerDied","Data":"1f6946524a02460850f6d99245e225e01b7a5e53a184e073e4a27739dd71ca4a"} Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.721771 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f6946524a02460850f6d99245e225e01b7a5e53a184e073e4a27739dd71ca4a" Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.723700 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zr6wn" event={"ID":"795431b0-73d4-4c09-95ec-59c039a001d4","Type":"ContainerDied","Data":"babee6a6f39febcd37d1afc162e951e9cd44a4409126c0931c62dcab5344f933"} Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.723814 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="babee6a6f39febcd37d1afc162e951e9cd44a4409126c0931c62dcab5344f933" Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.725462 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dbea-account-create-5qfpb" event={"ID":"ead4d696-8e60-4d06-8db7-09b9f550b11f","Type":"ContainerDied","Data":"9732fca84e8cd57eca31d7f8145b950cfe9ac1f68af1cc146cbf81812b0d2aea"} Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.725496 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9732fca84e8cd57eca31d7f8145b950cfe9ac1f68af1cc146cbf81812b0d2aea" Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.727166 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-52nn8" event={"ID":"ae72a044-8eb0-450f-84e0-d98165e44377","Type":"ContainerDied","Data":"b97e5f6feceacfbbb0b46ef438b4af9c8c00a6a5202e1c6cc7a8ad03f9925e68"} Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.727274 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b97e5f6feceacfbbb0b46ef438b4af9c8c00a6a5202e1c6cc7a8ad03f9925e68" Nov 21 10:03:43 crc kubenswrapper[4972]: E1121 10:03:43.728958 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:8c7ecaaf282fb3dd419c02a3e017d5f190e1e0831965f1ce366b9763700b4e4a\\\"\"" pod="openstack/glance-db-sync-g84kj" podUID="f9513939-1a73-46a3-a946-db9b1008314f" Nov 21 10:03:43 crc 
kubenswrapper[4972]: I1121 10:03:43.883606 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-98ww4" Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.891224 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-04d8-account-create-jcdpc" Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.944948 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.953772 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-52nn8" Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.975618 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dbea-account-create-5qfpb" Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.987497 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-958e-account-create-b6xqx" Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.988424 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed-operator-scripts\") pod \"67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed\" (UID: \"67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed\") " Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.988539 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9bb92cc-2905-4800-a31b-1b4fd0e35af3-operator-scripts\") pod \"f9bb92cc-2905-4800-a31b-1b4fd0e35af3\" (UID: \"f9bb92cc-2905-4800-a31b-1b4fd0e35af3\") " Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.988578 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9d65\" (UniqueName: \"kubernetes.io/projected/f9bb92cc-2905-4800-a31b-1b4fd0e35af3-kube-api-access-s9d65\") pod \"f9bb92cc-2905-4800-a31b-1b4fd0e35af3\" (UID: \"f9bb92cc-2905-4800-a31b-1b4fd0e35af3\") " Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.988629 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktcwf\" (UniqueName: \"kubernetes.io/projected/67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed-kube-api-access-ktcwf\") pod \"67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed\" (UID: \"67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed\") " Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.989006 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed" (UID: "67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.989602 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9bb92cc-2905-4800-a31b-1b4fd0e35af3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9bb92cc-2905-4800-a31b-1b4fd0e35af3" (UID: "f9bb92cc-2905-4800-a31b-1b4fd0e35af3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.993493 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.993846 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9bb92cc-2905-4800-a31b-1b4fd0e35af3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.993815 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed-kube-api-access-ktcwf" (OuterVolumeSpecName: "kube-api-access-ktcwf") pod "67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed" (UID: "67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed"). InnerVolumeSpecName "kube-api-access-ktcwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:03:43 crc kubenswrapper[4972]: I1121 10:03:43.996541 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9bb92cc-2905-4800-a31b-1b4fd0e35af3-kube-api-access-s9d65" (OuterVolumeSpecName: "kube-api-access-s9d65") pod "f9bb92cc-2905-4800-a31b-1b4fd0e35af3" (UID: "f9bb92cc-2905-4800-a31b-1b4fd0e35af3"). InnerVolumeSpecName "kube-api-access-s9d65". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.095273 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzgjw\" (UniqueName: \"kubernetes.io/projected/795431b0-73d4-4c09-95ec-59c039a001d4-kube-api-access-kzgjw\") pod \"795431b0-73d4-4c09-95ec-59c039a001d4\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.095342 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/795431b0-73d4-4c09-95ec-59c039a001d4-ring-data-devices\") pod \"795431b0-73d4-4c09-95ec-59c039a001d4\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.095421 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/795431b0-73d4-4c09-95ec-59c039a001d4-etc-swift\") pod \"795431b0-73d4-4c09-95ec-59c039a001d4\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.095445 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s8qf\" (UniqueName: \"kubernetes.io/projected/ead4d696-8e60-4d06-8db7-09b9f550b11f-kube-api-access-6s8qf\") pod \"ead4d696-8e60-4d06-8db7-09b9f550b11f\" (UID: \"ead4d696-8e60-4d06-8db7-09b9f550b11f\") " Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.095471 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ead4d696-8e60-4d06-8db7-09b9f550b11f-operator-scripts\") pod \"ead4d696-8e60-4d06-8db7-09b9f550b11f\" (UID: \"ead4d696-8e60-4d06-8db7-09b9f550b11f\") " Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.095497 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/795431b0-73d4-4c09-95ec-59c039a001d4-combined-ca-bundle\") pod \"795431b0-73d4-4c09-95ec-59c039a001d4\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.095541 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/795431b0-73d4-4c09-95ec-59c039a001d4-dispersionconf\") pod \"795431b0-73d4-4c09-95ec-59c039a001d4\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.095569 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/795431b0-73d4-4c09-95ec-59c039a001d4-swiftconf\") pod \"795431b0-73d4-4c09-95ec-59c039a001d4\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.095647 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqhrp\" (UniqueName: \"kubernetes.io/projected/d36cc95f-a1d2-4425-8928-586f8fda4eb8-kube-api-access-lqhrp\") pod \"d36cc95f-a1d2-4425-8928-586f8fda4eb8\" (UID: \"d36cc95f-a1d2-4425-8928-586f8fda4eb8\") " Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.095689 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae72a044-8eb0-450f-84e0-d98165e44377-operator-scripts\") pod \"ae72a044-8eb0-450f-84e0-d98165e44377\" (UID: \"ae72a044-8eb0-450f-84e0-d98165e44377\") " Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.095787 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d36cc95f-a1d2-4425-8928-586f8fda4eb8-operator-scripts\") pod \"d36cc95f-a1d2-4425-8928-586f8fda4eb8\" (UID: \"d36cc95f-a1d2-4425-8928-586f8fda4eb8\") " Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.095814 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb5ct\" (UniqueName: \"kubernetes.io/projected/ae72a044-8eb0-450f-84e0-d98165e44377-kube-api-access-rb5ct\") pod \"ae72a044-8eb0-450f-84e0-d98165e44377\" (UID: \"ae72a044-8eb0-450f-84e0-d98165e44377\") " Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.096803 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/795431b0-73d4-4c09-95ec-59c039a001d4-scripts\") pod \"795431b0-73d4-4c09-95ec-59c039a001d4\" (UID: \"795431b0-73d4-4c09-95ec-59c039a001d4\") " Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.097419 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d36cc95f-a1d2-4425-8928-586f8fda4eb8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d36cc95f-a1d2-4425-8928-586f8fda4eb8" (UID: "d36cc95f-a1d2-4425-8928-586f8fda4eb8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.097558 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/795431b0-73d4-4c09-95ec-59c039a001d4-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "795431b0-73d4-4c09-95ec-59c039a001d4" (UID: "795431b0-73d4-4c09-95ec-59c039a001d4"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.098287 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9d65\" (UniqueName: \"kubernetes.io/projected/f9bb92cc-2905-4800-a31b-1b4fd0e35af3-kube-api-access-s9d65\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.098313 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktcwf\" (UniqueName: \"kubernetes.io/projected/67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed-kube-api-access-ktcwf\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.098324 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d36cc95f-a1d2-4425-8928-586f8fda4eb8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.098333 4972 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/795431b0-73d4-4c09-95ec-59c039a001d4-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.099277 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/795431b0-73d4-4c09-95ec-59c039a001d4-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "795431b0-73d4-4c09-95ec-59c039a001d4" (UID: "795431b0-73d4-4c09-95ec-59c039a001d4"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.099724 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae72a044-8eb0-450f-84e0-d98165e44377-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ae72a044-8eb0-450f-84e0-d98165e44377" (UID: "ae72a044-8eb0-450f-84e0-d98165e44377"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.100003 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/795431b0-73d4-4c09-95ec-59c039a001d4-kube-api-access-kzgjw" (OuterVolumeSpecName: "kube-api-access-kzgjw") pod "795431b0-73d4-4c09-95ec-59c039a001d4" (UID: "795431b0-73d4-4c09-95ec-59c039a001d4"). InnerVolumeSpecName "kube-api-access-kzgjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.100383 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae72a044-8eb0-450f-84e0-d98165e44377-kube-api-access-rb5ct" (OuterVolumeSpecName: "kube-api-access-rb5ct") pod "ae72a044-8eb0-450f-84e0-d98165e44377" (UID: "ae72a044-8eb0-450f-84e0-d98165e44377"). InnerVolumeSpecName "kube-api-access-rb5ct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.098376 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ead4d696-8e60-4d06-8db7-09b9f550b11f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ead4d696-8e60-4d06-8db7-09b9f550b11f" (UID: "ead4d696-8e60-4d06-8db7-09b9f550b11f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.101658 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d36cc95f-a1d2-4425-8928-586f8fda4eb8-kube-api-access-lqhrp" (OuterVolumeSpecName: "kube-api-access-lqhrp") pod "d36cc95f-a1d2-4425-8928-586f8fda4eb8" (UID: "d36cc95f-a1d2-4425-8928-586f8fda4eb8"). InnerVolumeSpecName "kube-api-access-lqhrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.102340 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ead4d696-8e60-4d06-8db7-09b9f550b11f-kube-api-access-6s8qf" (OuterVolumeSpecName: "kube-api-access-6s8qf") pod "ead4d696-8e60-4d06-8db7-09b9f550b11f" (UID: "ead4d696-8e60-4d06-8db7-09b9f550b11f"). InnerVolumeSpecName "kube-api-access-6s8qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.103991 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/795431b0-73d4-4c09-95ec-59c039a001d4-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "795431b0-73d4-4c09-95ec-59c039a001d4" (UID: "795431b0-73d4-4c09-95ec-59c039a001d4"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.118043 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/795431b0-73d4-4c09-95ec-59c039a001d4-scripts" (OuterVolumeSpecName: "scripts") pod "795431b0-73d4-4c09-95ec-59c039a001d4" (UID: "795431b0-73d4-4c09-95ec-59c039a001d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.118730 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/795431b0-73d4-4c09-95ec-59c039a001d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "795431b0-73d4-4c09-95ec-59c039a001d4" (UID: "795431b0-73d4-4c09-95ec-59c039a001d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.124049 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/795431b0-73d4-4c09-95ec-59c039a001d4-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "795431b0-73d4-4c09-95ec-59c039a001d4" (UID: "795431b0-73d4-4c09-95ec-59c039a001d4"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.200313 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzgjw\" (UniqueName: \"kubernetes.io/projected/795431b0-73d4-4c09-95ec-59c039a001d4-kube-api-access-kzgjw\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.200359 4972 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/795431b0-73d4-4c09-95ec-59c039a001d4-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.200373 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s8qf\" (UniqueName: \"kubernetes.io/projected/ead4d696-8e60-4d06-8db7-09b9f550b11f-kube-api-access-6s8qf\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.200384 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ead4d696-8e60-4d06-8db7-09b9f550b11f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.200395 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/795431b0-73d4-4c09-95ec-59c039a001d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.200406 4972 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/795431b0-73d4-4c09-95ec-59c039a001d4-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.200418 4972 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/795431b0-73d4-4c09-95ec-59c039a001d4-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.200428 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqhrp\" (UniqueName: \"kubernetes.io/projected/d36cc95f-a1d2-4425-8928-586f8fda4eb8-kube-api-access-lqhrp\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.200442 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae72a044-8eb0-450f-84e0-d98165e44377-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.200453 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb5ct\" (UniqueName: \"kubernetes.io/projected/ae72a044-8eb0-450f-84e0-d98165e44377-kube-api-access-rb5ct\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.200464 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/795431b0-73d4-4c09-95ec-59c039a001d4-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.736625 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dbea-account-create-5qfpb" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.737688 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-958e-account-create-b6xqx" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.739080 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-04d8-account-create-jcdpc" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.739111 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-twfsm" event={"ID":"3c86765f-e125-43ac-83ba-99d506750ed5","Type":"ContainerStarted","Data":"8b4f21364893f0f5f283d8468e4025288bceaa720f5ffd92ef7f40c79cbb6d87"} Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.739130 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zr6wn" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.739178 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-98ww4" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.739352 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-52nn8" Nov 21 10:03:44 crc kubenswrapper[4972]: I1121 10:03:44.759727 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-twfsm" podStartSLOduration=2.688891329 podStartE2EDuration="20.759712153s" podCreationTimestamp="2025-11-21 10:03:24 +0000 UTC" firstStartedPulling="2025-11-21 10:03:25.6333363 +0000 UTC m=+1350.742478798" lastFinishedPulling="2025-11-21 10:03:43.704157124 +0000 UTC m=+1368.813299622" observedRunningTime="2025-11-21 10:03:44.75881535 +0000 UTC m=+1369.867957868" watchObservedRunningTime="2025-11-21 10:03:44.759712153 +0000 UTC m=+1369.868854651" Nov 21 10:03:47 crc kubenswrapper[4972]: I1121 10:03:47.761391 4972 generic.go:334] "Generic (PLEG): container finished" podID="3c86765f-e125-43ac-83ba-99d506750ed5" containerID="8b4f21364893f0f5f283d8468e4025288bceaa720f5ffd92ef7f40c79cbb6d87" exitCode=0 Nov 21 10:03:47 crc kubenswrapper[4972]: I1121 10:03:47.770804 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-twfsm" event={"ID":"3c86765f-e125-43ac-83ba-99d506750ed5","Type":"ContainerDied","Data":"8b4f21364893f0f5f283d8468e4025288bceaa720f5ffd92ef7f40c79cbb6d87"} Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.117445 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-twfsm" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.296181 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c86765f-e125-43ac-83ba-99d506750ed5-combined-ca-bundle\") pod \"3c86765f-e125-43ac-83ba-99d506750ed5\" (UID: \"3c86765f-e125-43ac-83ba-99d506750ed5\") " Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.296570 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzqj6\" (UniqueName: \"kubernetes.io/projected/3c86765f-e125-43ac-83ba-99d506750ed5-kube-api-access-rzqj6\") pod \"3c86765f-e125-43ac-83ba-99d506750ed5\" (UID: \"3c86765f-e125-43ac-83ba-99d506750ed5\") " Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.296701 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c86765f-e125-43ac-83ba-99d506750ed5-config-data\") pod \"3c86765f-e125-43ac-83ba-99d506750ed5\" (UID: \"3c86765f-e125-43ac-83ba-99d506750ed5\") " Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.302929 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c86765f-e125-43ac-83ba-99d506750ed5-kube-api-access-rzqj6" (OuterVolumeSpecName: "kube-api-access-rzqj6") pod "3c86765f-e125-43ac-83ba-99d506750ed5" (UID: "3c86765f-e125-43ac-83ba-99d506750ed5"). InnerVolumeSpecName "kube-api-access-rzqj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.330139 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c86765f-e125-43ac-83ba-99d506750ed5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c86765f-e125-43ac-83ba-99d506750ed5" (UID: "3c86765f-e125-43ac-83ba-99d506750ed5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.342416 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c86765f-e125-43ac-83ba-99d506750ed5-config-data" (OuterVolumeSpecName: "config-data") pod "3c86765f-e125-43ac-83ba-99d506750ed5" (UID: "3c86765f-e125-43ac-83ba-99d506750ed5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.399115 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c86765f-e125-43ac-83ba-99d506750ed5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.399335 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzqj6\" (UniqueName: \"kubernetes.io/projected/3c86765f-e125-43ac-83ba-99d506750ed5-kube-api-access-rzqj6\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.399413 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c86765f-e125-43ac-83ba-99d506750ed5-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.789934 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-twfsm" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.792565 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-twfsm" event={"ID":"3c86765f-e125-43ac-83ba-99d506750ed5","Type":"ContainerDied","Data":"5b657898eb15a3193890444a779058d7cbad28fc47a39c665aff4a7f356642ec"} Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.792611 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b657898eb15a3193890444a779058d7cbad28fc47a39c665aff4a7f356642ec" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.962181 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-676fb4f7f5-fgv7f"] Nov 21 10:03:49 crc kubenswrapper[4972]: E1121 10:03:49.962471 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c86765f-e125-43ac-83ba-99d506750ed5" containerName="keystone-db-sync" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.962496 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c86765f-e125-43ac-83ba-99d506750ed5" containerName="keystone-db-sync" Nov 21 10:03:49 crc kubenswrapper[4972]: E1121 10:03:49.962506 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ead4d696-8e60-4d06-8db7-09b9f550b11f" containerName="mariadb-account-create" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.962511 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ead4d696-8e60-4d06-8db7-09b9f550b11f" containerName="mariadb-account-create" Nov 21 10:03:49 crc kubenswrapper[4972]: E1121 10:03:49.962520 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed" containerName="mariadb-database-create" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.962526 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed" containerName="mariadb-database-create" Nov 21 10:03:49 crc kubenswrapper[4972]: E1121 10:03:49.962541 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3" containerName="ovn-config" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.962548 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3" containerName="ovn-config" Nov 21 10:03:49 crc kubenswrapper[4972]: E1121 10:03:49.962555 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9bb92cc-2905-4800-a31b-1b4fd0e35af3" containerName="mariadb-account-create" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.962562 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9bb92cc-2905-4800-a31b-1b4fd0e35af3" containerName="mariadb-account-create" Nov 21 10:03:49 crc kubenswrapper[4972]: E1121 10:03:49.962572 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55bbc0f1-0876-4076-8a24-7e275bda295e" containerName="mariadb-database-create" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.962578 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="55bbc0f1-0876-4076-8a24-7e275bda295e" containerName="mariadb-database-create" Nov 21 10:03:49 crc kubenswrapper[4972]: E1121 10:03:49.962588 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795431b0-73d4-4c09-95ec-59c039a001d4" containerName="swift-ring-rebalance" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.962594 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="795431b0-73d4-4c09-95ec-59c039a001d4" 
containerName="swift-ring-rebalance" Nov 21 10:03:49 crc kubenswrapper[4972]: E1121 10:03:49.962613 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36cc95f-a1d2-4425-8928-586f8fda4eb8" containerName="mariadb-account-create" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.962618 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36cc95f-a1d2-4425-8928-586f8fda4eb8" containerName="mariadb-account-create" Nov 21 10:03:49 crc kubenswrapper[4972]: E1121 10:03:49.962634 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae72a044-8eb0-450f-84e0-d98165e44377" containerName="mariadb-database-create" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.962641 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae72a044-8eb0-450f-84e0-d98165e44377" containerName="mariadb-database-create" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.962780 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae72a044-8eb0-450f-84e0-d98165e44377" containerName="mariadb-database-create" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.962797 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="795431b0-73d4-4c09-95ec-59c039a001d4" containerName="swift-ring-rebalance" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.962806 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="55bbc0f1-0876-4076-8a24-7e275bda295e" containerName="mariadb-database-create" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.962817 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5d2b43e-f892-45d6-8c5c-6cf5e3d2bad3" containerName="ovn-config" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.962838 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36cc95f-a1d2-4425-8928-586f8fda4eb8" containerName="mariadb-account-create" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.962850 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed" containerName="mariadb-database-create" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.962859 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ead4d696-8e60-4d06-8db7-09b9f550b11f" containerName="mariadb-account-create" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.962871 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9bb92cc-2905-4800-a31b-1b4fd0e35af3" containerName="mariadb-account-create" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.962883 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c86765f-e125-43ac-83ba-99d506750ed5" containerName="keystone-db-sync" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.963646 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" Nov 21 10:03:49 crc kubenswrapper[4972]: I1121 10:03:49.998903 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-676fb4f7f5-fgv7f"] Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.010555 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-cwb8d"] Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.011900 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.016605 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.016921 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.017080 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-485dm" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.017129 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.017353 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.042468 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cwb8d"] Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.111741 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-ovsdbserver-sb\") pod \"dnsmasq-dns-676fb4f7f5-fgv7f\" (UID: \"4e13b353-3989-46fb-9023-f79e0a4cec68\") " pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.111801 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-dns-svc\") pod \"dnsmasq-dns-676fb4f7f5-fgv7f\" (UID: \"4e13b353-3989-46fb-9023-f79e0a4cec68\") " pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.111861 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v8m2\" (UniqueName: \"kubernetes.io/projected/0d79a7e8-a443-47ad-b364-68491accfa3d-kube-api-access-4v8m2\") pod \"keystone-bootstrap-cwb8d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.111890 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-fernet-keys\") pod \"keystone-bootstrap-cwb8d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.111913 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwhxx\" (UniqueName: \"kubernetes.io/projected/4e13b353-3989-46fb-9023-f79e0a4cec68-kube-api-access-nwhxx\") pod \"dnsmasq-dns-676fb4f7f5-fgv7f\" (UID: \"4e13b353-3989-46fb-9023-f79e0a4cec68\") " pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.112016 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-ovsdbserver-nb\") pod \"dnsmasq-dns-676fb4f7f5-fgv7f\" (UID: \"4e13b353-3989-46fb-9023-f79e0a4cec68\") " pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.112072 4972 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-config\") pod \"dnsmasq-dns-676fb4f7f5-fgv7f\" (UID: \"4e13b353-3989-46fb-9023-f79e0a4cec68\") " pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.112236 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-credential-keys\") pod \"keystone-bootstrap-cwb8d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.112265 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-combined-ca-bundle\") pod \"keystone-bootstrap-cwb8d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.112374 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-scripts\") pod \"keystone-bootstrap-cwb8d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.112416 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-config-data\") pod \"keystone-bootstrap-cwb8d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.191709 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-s77fk"] Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.192717 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-s77fk" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.195159 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hb4zz" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.197922 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.198920 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.214882 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-ovsdbserver-nb\") pod \"dnsmasq-dns-676fb4f7f5-fgv7f\" (UID: \"4e13b353-3989-46fb-9023-f79e0a4cec68\") " pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.214930 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-config\") pod \"dnsmasq-dns-676fb4f7f5-fgv7f\" (UID: \"4e13b353-3989-46fb-9023-f79e0a4cec68\") " pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.215009 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-credential-keys\") pod \"keystone-bootstrap-cwb8d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.215034 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-combined-ca-bundle\") pod \"keystone-bootstrap-cwb8d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.215094 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-config-data\") pod \"keystone-bootstrap-cwb8d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.215116 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-scripts\") pod \"keystone-bootstrap-cwb8d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.215165 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-ovsdbserver-sb\") pod \"dnsmasq-dns-676fb4f7f5-fgv7f\" (UID: \"4e13b353-3989-46fb-9023-f79e0a4cec68\") " pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.215213 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-dns-svc\") pod \"dnsmasq-dns-676fb4f7f5-fgv7f\" (UID: \"4e13b353-3989-46fb-9023-f79e0a4cec68\") " 
pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.215239 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v8m2\" (UniqueName: \"kubernetes.io/projected/0d79a7e8-a443-47ad-b364-68491accfa3d-kube-api-access-4v8m2\") pod \"keystone-bootstrap-cwb8d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.215277 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-fernet-keys\") pod \"keystone-bootstrap-cwb8d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.215307 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwhxx\" (UniqueName: \"kubernetes.io/projected/4e13b353-3989-46fb-9023-f79e0a4cec68-kube-api-access-nwhxx\") pod \"dnsmasq-dns-676fb4f7f5-fgv7f\" (UID: \"4e13b353-3989-46fb-9023-f79e0a4cec68\") " pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.218631 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-ovsdbserver-sb\") pod \"dnsmasq-dns-676fb4f7f5-fgv7f\" (UID: \"4e13b353-3989-46fb-9023-f79e0a4cec68\") " pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.219703 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-dns-svc\") pod \"dnsmasq-dns-676fb4f7f5-fgv7f\" (UID: \"4e13b353-3989-46fb-9023-f79e0a4cec68\") " pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.220017 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.222453 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.223288 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-ovsdbserver-nb\") pod \"dnsmasq-dns-676fb4f7f5-fgv7f\" (UID: \"4e13b353-3989-46fb-9023-f79e0a4cec68\") " pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.223806 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-config\") pod \"dnsmasq-dns-676fb4f7f5-fgv7f\" (UID: \"4e13b353-3989-46fb-9023-f79e0a4cec68\") " pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.223992 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-fernet-keys\") pod \"keystone-bootstrap-cwb8d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.228442 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-credential-keys\") pod \"keystone-bootstrap-cwb8d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.232796 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-s77fk"] Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.234548 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-combined-ca-bundle\") pod \"keystone-bootstrap-cwb8d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.235054 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.235783 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-config-data\") pod \"keystone-bootstrap-cwb8d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.235893 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.245665 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-scripts\") pod \"keystone-bootstrap-cwb8d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.245742 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-c4vkg"] Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.246338 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v8m2\" (UniqueName: \"kubernetes.io/projected/0d79a7e8-a443-47ad-b364-68491accfa3d-kube-api-access-4v8m2\") pod 
\"keystone-bootstrap-cwb8d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.254360 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-c4vkg" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.260385 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.272995 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-hnlwr" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.277453 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.295845 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwhxx\" (UniqueName: \"kubernetes.io/projected/4e13b353-3989-46fb-9023-f79e0a4cec68-kube-api-access-nwhxx\") pod \"dnsmasq-dns-676fb4f7f5-fgv7f\" (UID: \"4e13b353-3989-46fb-9023-f79e0a4cec68\") " pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.301196 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.320801 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.349050 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfktn\" (UniqueName: \"kubernetes.io/projected/f23a86fe-e939-4663-b964-454211c5d446-kube-api-access-hfktn\") pod \"ceilometer-0\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.349133 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/025ad09c-467a-451c-a24d-4bf686469677-etc-machine-id\") pod \"cinder-db-sync-s77fk\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " pod="openstack/cinder-db-sync-s77fk" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.349189 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-combined-ca-bundle\") pod \"cinder-db-sync-s77fk\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " pod="openstack/cinder-db-sync-s77fk" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.349213 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-config-data\") pod \"cinder-db-sync-s77fk\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " pod="openstack/cinder-db-sync-s77fk" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.349285 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzmg9\" (UniqueName: \"kubernetes.io/projected/025ad09c-467a-451c-a24d-4bf686469677-kube-api-access-wzmg9\") pod \"cinder-db-sync-s77fk\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " pod="openstack/cinder-db-sync-s77fk" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.349305 
4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.349345 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-config-data\") pod \"ceilometer-0\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.349459 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-db-sync-config-data\") pod \"cinder-db-sync-s77fk\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " pod="openstack/cinder-db-sync-s77fk" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.349503 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-scripts\") pod \"cinder-db-sync-s77fk\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " pod="openstack/cinder-db-sync-s77fk" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.349557 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.349576 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-scripts\") pod \"ceilometer-0\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.349594 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f23a86fe-e939-4663-b964-454211c5d446-run-httpd\") pod \"ceilometer-0\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.349631 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f23a86fe-e939-4663-b964-454211c5d446-log-httpd\") pod \"ceilometer-0\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.349850 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.383370 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-c4vkg"] Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.406748 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-676fb4f7f5-fgv7f"] Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.417520 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-2jlvt"] Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.418528 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2jlvt" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.422475 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-7tkvw" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.422778 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.426871 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2jlvt"] Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.442042 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-558565fb6f-25dzm"] Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.443428 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-558565fb6f-25dzm" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.457341 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-dns-svc\") pod \"dnsmasq-dns-558565fb6f-25dzm\" (UID: \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\") " pod="openstack/dnsmasq-dns-558565fb6f-25dzm" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.459335 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-db-sync-config-data\") pod \"cinder-db-sync-s77fk\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " pod="openstack/cinder-db-sync-s77fk" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.459461 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-scripts\") pod \"cinder-db-sync-s77fk\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " pod="openstack/cinder-db-sync-s77fk" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.459546 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-ovsdbserver-nb\") pod \"dnsmasq-dns-558565fb6f-25dzm\" (UID: \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\") " pod="openstack/dnsmasq-dns-558565fb6f-25dzm" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.460244 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rztgd\" (UniqueName: \"kubernetes.io/projected/9c8d121f-c5f7-42c3-a8ce-6cbb48064e25-kube-api-access-rztgd\") pod \"barbican-db-sync-2jlvt\" (UID: \"9c8d121f-c5f7-42c3-a8ce-6cbb48064e25\") " pod="openstack/barbican-db-sync-2jlvt" Nov 21 10:03:50 crc kubenswrapper[4972]: 
I1121 10:03:50.461393 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.461472 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-scripts\") pod \"ceilometer-0\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.461538 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9c8d121f-c5f7-42c3-a8ce-6cbb48064e25-db-sync-config-data\") pod \"barbican-db-sync-2jlvt\" (UID: \"9c8d121f-c5f7-42c3-a8ce-6cbb48064e25\") " pod="openstack/barbican-db-sync-2jlvt" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.461717 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f23a86fe-e939-4663-b964-454211c5d446-run-httpd\") pod \"ceilometer-0\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.461866 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f23a86fe-e939-4663-b964-454211c5d446-log-httpd\") pod \"ceilometer-0\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.461969 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d74b758-4b34-4381-bb6d-ba95a0ce1c62-config\") pod \"neutron-db-sync-c4vkg\" (UID: \"1d74b758-4b34-4381-bb6d-ba95a0ce1c62\") " pod="openstack/neutron-db-sync-c4vkg" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.462042 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdx4m\" (UniqueName: \"kubernetes.io/projected/74c175b7-4f32-4315-ad7b-927f17c4bf1e-kube-api-access-qdx4m\") pod \"dnsmasq-dns-558565fb6f-25dzm\" (UID: \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\") " pod="openstack/dnsmasq-dns-558565fb6f-25dzm" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.462127 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-ovsdbserver-sb\") pod \"dnsmasq-dns-558565fb6f-25dzm\" (UID: \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\") " pod="openstack/dnsmasq-dns-558565fb6f-25dzm" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.462214 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfktn\" (UniqueName: \"kubernetes.io/projected/f23a86fe-e939-4663-b964-454211c5d446-kube-api-access-hfktn\") pod \"ceilometer-0\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.462280 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1d74b758-4b34-4381-bb6d-ba95a0ce1c62-combined-ca-bundle\") pod \"neutron-db-sync-c4vkg\" (UID: \"1d74b758-4b34-4381-bb6d-ba95a0ce1c62\") " pod="openstack/neutron-db-sync-c4vkg" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.462360 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/025ad09c-467a-451c-a24d-4bf686469677-etc-machine-id\") pod \"cinder-db-sync-s77fk\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " pod="openstack/cinder-db-sync-s77fk" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.462951 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f23a86fe-e939-4663-b964-454211c5d446-log-httpd\") pod \"ceilometer-0\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.462487 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-7tm9c"] Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.462978 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/025ad09c-467a-451c-a24d-4bf686469677-etc-machine-id\") pod \"cinder-db-sync-s77fk\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " pod="openstack/cinder-db-sync-s77fk" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.463286 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-combined-ca-bundle\") pod \"cinder-db-sync-s77fk\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " pod="openstack/cinder-db-sync-s77fk" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.463367 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-config-data\") pod \"cinder-db-sync-s77fk\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " pod="openstack/cinder-db-sync-s77fk" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.463444 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8d121f-c5f7-42c3-a8ce-6cbb48064e25-combined-ca-bundle\") pod \"barbican-db-sync-2jlvt\" (UID: \"9c8d121f-c5f7-42c3-a8ce-6cbb48064e25\") " pod="openstack/barbican-db-sync-2jlvt" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.463529 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-config\") pod \"dnsmasq-dns-558565fb6f-25dzm\" (UID: \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\") " pod="openstack/dnsmasq-dns-558565fb6f-25dzm" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.463610 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzmg9\" (UniqueName: \"kubernetes.io/projected/025ad09c-467a-451c-a24d-4bf686469677-kube-api-access-wzmg9\") pod \"cinder-db-sync-s77fk\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " pod="openstack/cinder-db-sync-s77fk" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.463331 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f23a86fe-e939-4663-b964-454211c5d446-run-httpd\") pod \"ceilometer-0\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.463716 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.463797 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmsz8\" (UniqueName: \"kubernetes.io/projected/1d74b758-4b34-4381-bb6d-ba95a0ce1c62-kube-api-access-rmsz8\") pod \"neutron-db-sync-c4vkg\" (UID: \"1d74b758-4b34-4381-bb6d-ba95a0ce1c62\") " pod="openstack/neutron-db-sync-c4vkg" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.463888 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-config-data\") pod \"ceilometer-0\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.463997 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7tm9c" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.464329 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-db-sync-config-data\") pod \"cinder-db-sync-s77fk\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " pod="openstack/cinder-db-sync-s77fk" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.467596 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.467813 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jlzhq" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.468241 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.470882 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-scripts\") pod \"ceilometer-0\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.472155 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.472254 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-combined-ca-bundle\") pod \"cinder-db-sync-s77fk\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " pod="openstack/cinder-db-sync-s77fk" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.472794 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-scripts\") pod \"cinder-db-sync-s77fk\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " pod="openstack/cinder-db-sync-s77fk" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.477318 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7tm9c"] Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.478065 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-config-data\") pod \"ceilometer-0\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.482055 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-config-data\") pod \"cinder-db-sync-s77fk\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " pod="openstack/cinder-db-sync-s77fk" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.487607 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.488290 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-558565fb6f-25dzm"] Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.493961 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfktn\" (UniqueName: \"kubernetes.io/projected/f23a86fe-e939-4663-b964-454211c5d446-kube-api-access-hfktn\") pod \"ceilometer-0\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.494027 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzmg9\" (UniqueName: \"kubernetes.io/projected/025ad09c-467a-451c-a24d-4bf686469677-kube-api-access-wzmg9\") pod \"cinder-db-sync-s77fk\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " pod="openstack/cinder-db-sync-s77fk" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.525222 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-s77fk" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.565936 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8d121f-c5f7-42c3-a8ce-6cbb48064e25-combined-ca-bundle\") pod \"barbican-db-sync-2jlvt\" (UID: \"9c8d121f-c5f7-42c3-a8ce-6cbb48064e25\") " pod="openstack/barbican-db-sync-2jlvt" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.565994 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-config\") pod \"dnsmasq-dns-558565fb6f-25dzm\" (UID: \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\") " pod="openstack/dnsmasq-dns-558565fb6f-25dzm" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.566037 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmsz8\" (UniqueName: \"kubernetes.io/projected/1d74b758-4b34-4381-bb6d-ba95a0ce1c62-kube-api-access-rmsz8\") pod \"neutron-db-sync-c4vkg\" (UID: \"1d74b758-4b34-4381-bb6d-ba95a0ce1c62\") " pod="openstack/neutron-db-sync-c4vkg" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.566061 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-dns-svc\") pod \"dnsmasq-dns-558565fb6f-25dzm\" (UID: \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\") " pod="openstack/dnsmasq-dns-558565fb6f-25dzm" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.566121 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-logs\") pod \"placement-db-sync-7tm9c\" (UID: \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\") " pod="openstack/placement-db-sync-7tm9c" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.566164 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-scripts\") pod \"placement-db-sync-7tm9c\" (UID: \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\") " pod="openstack/placement-db-sync-7tm9c" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.566198 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slxcc\" (UniqueName: \"kubernetes.io/projected/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-kube-api-access-slxcc\") pod \"placement-db-sync-7tm9c\" (UID: \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\") " pod="openstack/placement-db-sync-7tm9c" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.566215 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-ovsdbserver-nb\") pod \"dnsmasq-dns-558565fb6f-25dzm\" (UID: \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\") " pod="openstack/dnsmasq-dns-558565fb6f-25dzm" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.566232 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rztgd\" (UniqueName: \"kubernetes.io/projected/9c8d121f-c5f7-42c3-a8ce-6cbb48064e25-kube-api-access-rztgd\") pod \"barbican-db-sync-2jlvt\" (UID: \"9c8d121f-c5f7-42c3-a8ce-6cbb48064e25\") " pod="openstack/barbican-db-sync-2jlvt" Nov 21 10:03:50 crc 
kubenswrapper[4972]: I1121 10:03:50.566257 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9c8d121f-c5f7-42c3-a8ce-6cbb48064e25-db-sync-config-data\") pod \"barbican-db-sync-2jlvt\" (UID: \"9c8d121f-c5f7-42c3-a8ce-6cbb48064e25\") " pod="openstack/barbican-db-sync-2jlvt" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.566296 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d74b758-4b34-4381-bb6d-ba95a0ce1c62-config\") pod \"neutron-db-sync-c4vkg\" (UID: \"1d74b758-4b34-4381-bb6d-ba95a0ce1c62\") " pod="openstack/neutron-db-sync-c4vkg" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.566316 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdx4m\" (UniqueName: \"kubernetes.io/projected/74c175b7-4f32-4315-ad7b-927f17c4bf1e-kube-api-access-qdx4m\") pod \"dnsmasq-dns-558565fb6f-25dzm\" (UID: \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\") " pod="openstack/dnsmasq-dns-558565fb6f-25dzm" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.566333 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-config-data\") pod \"placement-db-sync-7tm9c\" (UID: \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\") " pod="openstack/placement-db-sync-7tm9c" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.566369 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-ovsdbserver-sb\") pod \"dnsmasq-dns-558565fb6f-25dzm\" (UID: \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\") " pod="openstack/dnsmasq-dns-558565fb6f-25dzm" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.566396 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d74b758-4b34-4381-bb6d-ba95a0ce1c62-combined-ca-bundle\") pod \"neutron-db-sync-c4vkg\" (UID: \"1d74b758-4b34-4381-bb6d-ba95a0ce1c62\") " pod="openstack/neutron-db-sync-c4vkg" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.566421 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-combined-ca-bundle\") pod \"placement-db-sync-7tm9c\" (UID: \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\") " pod="openstack/placement-db-sync-7tm9c" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.567536 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-ovsdbserver-nb\") pod \"dnsmasq-dns-558565fb6f-25dzm\" (UID: \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\") " pod="openstack/dnsmasq-dns-558565fb6f-25dzm" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.567579 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-dns-svc\") pod \"dnsmasq-dns-558565fb6f-25dzm\" (UID: \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\") " pod="openstack/dnsmasq-dns-558565fb6f-25dzm" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.568294 4972 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-config\") pod \"dnsmasq-dns-558565fb6f-25dzm\" (UID: \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\") " pod="openstack/dnsmasq-dns-558565fb6f-25dzm" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.572200 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d74b758-4b34-4381-bb6d-ba95a0ce1c62-combined-ca-bundle\") pod \"neutron-db-sync-c4vkg\" (UID: \"1d74b758-4b34-4381-bb6d-ba95a0ce1c62\") " pod="openstack/neutron-db-sync-c4vkg" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.572283 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9c8d121f-c5f7-42c3-a8ce-6cbb48064e25-db-sync-config-data\") pod \"barbican-db-sync-2jlvt\" (UID: \"9c8d121f-c5f7-42c3-a8ce-6cbb48064e25\") " pod="openstack/barbican-db-sync-2jlvt" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.572547 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-ovsdbserver-sb\") pod \"dnsmasq-dns-558565fb6f-25dzm\" (UID: \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\") " pod="openstack/dnsmasq-dns-558565fb6f-25dzm" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.573121 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8d121f-c5f7-42c3-a8ce-6cbb48064e25-combined-ca-bundle\") pod \"barbican-db-sync-2jlvt\" (UID: \"9c8d121f-c5f7-42c3-a8ce-6cbb48064e25\") " pod="openstack/barbican-db-sync-2jlvt" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.578594 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d74b758-4b34-4381-bb6d-ba95a0ce1c62-config\") pod \"neutron-db-sync-c4vkg\" (UID: \"1d74b758-4b34-4381-bb6d-ba95a0ce1c62\") " pod="openstack/neutron-db-sync-c4vkg" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.592801 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rztgd\" (UniqueName: \"kubernetes.io/projected/9c8d121f-c5f7-42c3-a8ce-6cbb48064e25-kube-api-access-rztgd\") pod \"barbican-db-sync-2jlvt\" (UID: \"9c8d121f-c5f7-42c3-a8ce-6cbb48064e25\") " pod="openstack/barbican-db-sync-2jlvt" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.593223 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmsz8\" (UniqueName: \"kubernetes.io/projected/1d74b758-4b34-4381-bb6d-ba95a0ce1c62-kube-api-access-rmsz8\") pod \"neutron-db-sync-c4vkg\" (UID: \"1d74b758-4b34-4381-bb6d-ba95a0ce1c62\") " pod="openstack/neutron-db-sync-c4vkg" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.594200 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdx4m\" (UniqueName: \"kubernetes.io/projected/74c175b7-4f32-4315-ad7b-927f17c4bf1e-kube-api-access-qdx4m\") pod \"dnsmasq-dns-558565fb6f-25dzm\" (UID: \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\") " pod="openstack/dnsmasq-dns-558565fb6f-25dzm" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.667981 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-logs\") pod \"placement-db-sync-7tm9c\" (UID: 
\"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\") " pod="openstack/placement-db-sync-7tm9c" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.668042 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-scripts\") pod \"placement-db-sync-7tm9c\" (UID: \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\") " pod="openstack/placement-db-sync-7tm9c" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.668082 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slxcc\" (UniqueName: \"kubernetes.io/projected/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-kube-api-access-slxcc\") pod \"placement-db-sync-7tm9c\" (UID: \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\") " pod="openstack/placement-db-sync-7tm9c" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.668140 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-config-data\") pod \"placement-db-sync-7tm9c\" (UID: \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\") " pod="openstack/placement-db-sync-7tm9c" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.668173 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-combined-ca-bundle\") pod \"placement-db-sync-7tm9c\" (UID: \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\") " pod="openstack/placement-db-sync-7tm9c" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.668397 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-logs\") pod \"placement-db-sync-7tm9c\" (UID: \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\") " pod="openstack/placement-db-sync-7tm9c" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.673548 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-config-data\") pod \"placement-db-sync-7tm9c\" (UID: \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\") " pod="openstack/placement-db-sync-7tm9c" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.675101 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-scripts\") pod \"placement-db-sync-7tm9c\" (UID: \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\") " pod="openstack/placement-db-sync-7tm9c" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.675464 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-combined-ca-bundle\") pod \"placement-db-sync-7tm9c\" (UID: \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\") " pod="openstack/placement-db-sync-7tm9c" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.685966 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slxcc\" (UniqueName: \"kubernetes.io/projected/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-kube-api-access-slxcc\") pod \"placement-db-sync-7tm9c\" (UID: \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\") " pod="openstack/placement-db-sync-7tm9c" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.711911 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.728758 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-c4vkg" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.799140 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2jlvt" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.812070 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-558565fb6f-25dzm" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.855081 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7tm9c" Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.902468 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-676fb4f7f5-fgv7f"] Nov 21 10:03:50 crc kubenswrapper[4972]: W1121 10:03:50.918394 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e13b353_3989_46fb_9023_f79e0a4cec68.slice/crio-68e30e48359ec10b8205cf8c44fec98b7ece5a7bd2b1d49ccfd59cbbe2456bba WatchSource:0}: Error finding container 68e30e48359ec10b8205cf8c44fec98b7ece5a7bd2b1d49ccfd59cbbe2456bba: Status 404 returned error can't find the container with id 68e30e48359ec10b8205cf8c44fec98b7ece5a7bd2b1d49ccfd59cbbe2456bba Nov 21 10:03:50 crc kubenswrapper[4972]: I1121 10:03:50.985379 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cwb8d"] Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.020722 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-s77fk"] Nov 21 10:03:51 crc kubenswrapper[4972]: W1121 10:03:51.051626 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d79a7e8_a443_47ad_b364_68491accfa3d.slice/crio-2a51bf1a1493f6b2f9c708e788c369a6c90a68a37fb40e7672ad4a72eacf6014 WatchSource:0}: Error finding container 2a51bf1a1493f6b2f9c708e788c369a6c90a68a37fb40e7672ad4a72eacf6014: Status 404 returned error can't find the container with id 2a51bf1a1493f6b2f9c708e788c369a6c90a68a37fb40e7672ad4a72eacf6014 Nov 21 10:03:51 crc kubenswrapper[4972]: W1121 10:03:51.063471 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod025ad09c_467a_451c_a24d_4bf686469677.slice/crio-94f34478dfd43ed647500fe745f03f6a8493142aec0aef905ae82fa49f7e90fe WatchSource:0}: Error finding container 94f34478dfd43ed647500fe745f03f6a8493142aec0aef905ae82fa49f7e90fe: Status 404 returned error can't find the container with id 94f34478dfd43ed647500fe745f03f6a8493142aec0aef905ae82fa49f7e90fe Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.309005 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-c4vkg"] Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.367850 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:03:51 crc kubenswrapper[4972]: W1121 10:03:51.378799 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf23a86fe_e939_4663_b964_454211c5d446.slice/crio-74c549c9b9c773f069b3c126b02b01b3002ae413cc3fb35dc14e1e0019a31ee3 WatchSource:0}: Error finding container 
74c549c9b9c773f069b3c126b02b01b3002ae413cc3fb35dc14e1e0019a31ee3: Status 404 returned error can't find the container with id 74c549c9b9c773f069b3c126b02b01b3002ae413cc3fb35dc14e1e0019a31ee3 Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.446667 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2jlvt"] Nov 21 10:03:51 crc kubenswrapper[4972]: W1121 10:03:51.449441 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c8d121f_c5f7_42c3_a8ce_6cbb48064e25.slice/crio-6e3edfd5179c9d9008d06201c382485c94ffcda89d7c3a88789a10824b3790ac WatchSource:0}: Error finding container 6e3edfd5179c9d9008d06201c382485c94ffcda89d7c3a88789a10824b3790ac: Status 404 returned error can't find the container with id 6e3edfd5179c9d9008d06201c382485c94ffcda89d7c3a88789a10824b3790ac Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.554978 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7tm9c"] Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.570248 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-558565fb6f-25dzm"] Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.809391 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f23a86fe-e939-4663-b964-454211c5d446","Type":"ContainerStarted","Data":"74c549c9b9c773f069b3c126b02b01b3002ae413cc3fb35dc14e1e0019a31ee3"} Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.810775 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2jlvt" event={"ID":"9c8d121f-c5f7-42c3-a8ce-6cbb48064e25","Type":"ContainerStarted","Data":"6e3edfd5179c9d9008d06201c382485c94ffcda89d7c3a88789a10824b3790ac"} Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.812756 4972 generic.go:334] "Generic (PLEG): container finished" podID="4e13b353-3989-46fb-9023-f79e0a4cec68" containerID="d5279cad02550e6837feb7444ef28d27dcd11dd19614523392c269cc1dae2e18" exitCode=0 Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.812956 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" event={"ID":"4e13b353-3989-46fb-9023-f79e0a4cec68","Type":"ContainerDied","Data":"d5279cad02550e6837feb7444ef28d27dcd11dd19614523392c269cc1dae2e18"} Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.812983 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" event={"ID":"4e13b353-3989-46fb-9023-f79e0a4cec68","Type":"ContainerStarted","Data":"68e30e48359ec10b8205cf8c44fec98b7ece5a7bd2b1d49ccfd59cbbe2456bba"} Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.816925 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-c4vkg" event={"ID":"1d74b758-4b34-4381-bb6d-ba95a0ce1c62","Type":"ContainerStarted","Data":"bcd8fe3e44217018095632c736ff35f44419a3efd411bc546910c3270f906dfe"} Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.816972 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-c4vkg" event={"ID":"1d74b758-4b34-4381-bb6d-ba95a0ce1c62","Type":"ContainerStarted","Data":"f4eb866cf8e8197d1f6a40f010a1dc61e08d8bda2d1ec6620a5d40c482bccb11"} Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.822425 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s77fk" 
event={"ID":"025ad09c-467a-451c-a24d-4bf686469677","Type":"ContainerStarted","Data":"94f34478dfd43ed647500fe745f03f6a8493142aec0aef905ae82fa49f7e90fe"} Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.841759 4972 generic.go:334] "Generic (PLEG): container finished" podID="74c175b7-4f32-4315-ad7b-927f17c4bf1e" containerID="f8497958ba95bde86c455d4dc4ef3f8b28e080270a33b2d955a42874f362e1f7" exitCode=0 Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.842856 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-558565fb6f-25dzm" event={"ID":"74c175b7-4f32-4315-ad7b-927f17c4bf1e","Type":"ContainerDied","Data":"f8497958ba95bde86c455d4dc4ef3f8b28e080270a33b2d955a42874f362e1f7"} Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.842908 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-558565fb6f-25dzm" event={"ID":"74c175b7-4f32-4315-ad7b-927f17c4bf1e","Type":"ContainerStarted","Data":"35884de49fda08a0c957ad781a16e3a403e6b6dacb12314b44e678d77f0d6c3c"} Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.849003 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cwb8d" event={"ID":"0d79a7e8-a443-47ad-b364-68491accfa3d","Type":"ContainerStarted","Data":"8dd0d4b4fdaee9e203bdfba8f14112ab1b42869a816c8570d3ef18bd5b1ea26d"} Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.849056 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cwb8d" event={"ID":"0d79a7e8-a443-47ad-b364-68491accfa3d","Type":"ContainerStarted","Data":"2a51bf1a1493f6b2f9c708e788c369a6c90a68a37fb40e7672ad4a72eacf6014"} Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.850947 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7tm9c" event={"ID":"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3","Type":"ContainerStarted","Data":"d527ecaabb8673ba5d29b2a7f099e7ef56cc832b1251db01af878b73f5e785b7"} Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.876662 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-c4vkg" podStartSLOduration=1.87663632 podStartE2EDuration="1.87663632s" podCreationTimestamp="2025-11-21 10:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:03:51.857203571 +0000 UTC m=+1376.966346089" watchObservedRunningTime="2025-11-21 10:03:51.87663632 +0000 UTC m=+1376.985778818" Nov 21 10:03:51 crc kubenswrapper[4972]: I1121 10:03:51.932585 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-cwb8d" podStartSLOduration=2.932563615 podStartE2EDuration="2.932563615s" podCreationTimestamp="2025-11-21 10:03:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:03:51.894737494 +0000 UTC m=+1377.003880012" watchObservedRunningTime="2025-11-21 10:03:51.932563615 +0000 UTC m=+1377.041706113" Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.177028 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.206085 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-ovsdbserver-nb\") pod \"4e13b353-3989-46fb-9023-f79e0a4cec68\" (UID: \"4e13b353-3989-46fb-9023-f79e0a4cec68\") " Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.206152 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwhxx\" (UniqueName: \"kubernetes.io/projected/4e13b353-3989-46fb-9023-f79e0a4cec68-kube-api-access-nwhxx\") pod \"4e13b353-3989-46fb-9023-f79e0a4cec68\" (UID: \"4e13b353-3989-46fb-9023-f79e0a4cec68\") " Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.206237 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-dns-svc\") pod \"4e13b353-3989-46fb-9023-f79e0a4cec68\" (UID: \"4e13b353-3989-46fb-9023-f79e0a4cec68\") " Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.206255 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-config\") pod \"4e13b353-3989-46fb-9023-f79e0a4cec68\" (UID: \"4e13b353-3989-46fb-9023-f79e0a4cec68\") " Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.206303 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-ovsdbserver-sb\") pod \"4e13b353-3989-46fb-9023-f79e0a4cec68\" (UID: \"4e13b353-3989-46fb-9023-f79e0a4cec68\") " Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.211723 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e13b353-3989-46fb-9023-f79e0a4cec68-kube-api-access-nwhxx" (OuterVolumeSpecName: "kube-api-access-nwhxx") pod "4e13b353-3989-46fb-9023-f79e0a4cec68" (UID: "4e13b353-3989-46fb-9023-f79e0a4cec68"). InnerVolumeSpecName "kube-api-access-nwhxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.234610 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4e13b353-3989-46fb-9023-f79e0a4cec68" (UID: "4e13b353-3989-46fb-9023-f79e0a4cec68"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.236620 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4e13b353-3989-46fb-9023-f79e0a4cec68" (UID: "4e13b353-3989-46fb-9023-f79e0a4cec68"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.259347 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-config" (OuterVolumeSpecName: "config") pod "4e13b353-3989-46fb-9023-f79e0a4cec68" (UID: "4e13b353-3989-46fb-9023-f79e0a4cec68"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.266605 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4e13b353-3989-46fb-9023-f79e0a4cec68" (UID: "4e13b353-3989-46fb-9023-f79e0a4cec68"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.308620 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.308932 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.308944 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.308954 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e13b353-3989-46fb-9023-f79e0a4cec68-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.308984 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwhxx\" (UniqueName: \"kubernetes.io/projected/4e13b353-3989-46fb-9023-f79e0a4cec68-kube-api-access-nwhxx\") on node \"crc\" DevicePath \"\"" Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.444001 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.872597 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.872620 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-676fb4f7f5-fgv7f" event={"ID":"4e13b353-3989-46fb-9023-f79e0a4cec68","Type":"ContainerDied","Data":"68e30e48359ec10b8205cf8c44fec98b7ece5a7bd2b1d49ccfd59cbbe2456bba"} Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.872695 4972 scope.go:117] "RemoveContainer" containerID="d5279cad02550e6837feb7444ef28d27dcd11dd19614523392c269cc1dae2e18" Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.881912 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-558565fb6f-25dzm" event={"ID":"74c175b7-4f32-4315-ad7b-927f17c4bf1e","Type":"ContainerStarted","Data":"50db32f5bf6f606b782276822a09d015f11ad1d042607ca873d9e6783ac3ce78"} Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.881948 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-558565fb6f-25dzm" Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.899535 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-558565fb6f-25dzm" podStartSLOduration=2.899516586 podStartE2EDuration="2.899516586s" podCreationTimestamp="2025-11-21 10:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:03:52.898629423 +0000 UTC m=+1378.007771931" watchObservedRunningTime="2025-11-21 10:03:52.899516586 +0000 UTC m=+1378.008659084" Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.973940 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-676fb4f7f5-fgv7f"] Nov 21 10:03:52 crc kubenswrapper[4972]: I1121 10:03:52.978727 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-676fb4f7f5-fgv7f"] Nov 21 10:03:53 crc kubenswrapper[4972]: I1121 10:03:53.634633 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:03:53 crc kubenswrapper[4972]: I1121 10:03:53.639910 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift\") pod \"swift-storage-0\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " pod="openstack/swift-storage-0" Nov 21 10:03:53 crc kubenswrapper[4972]: I1121 10:03:53.774640 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e13b353-3989-46fb-9023-f79e0a4cec68" path="/var/lib/kubelet/pods/4e13b353-3989-46fb-9023-f79e0a4cec68/volumes" Nov 21 10:03:53 crc kubenswrapper[4972]: I1121 10:03:53.800707 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 21 10:03:56 crc kubenswrapper[4972]: I1121 10:03:56.178735 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:03:56 crc kubenswrapper[4972]: I1121 10:03:56.182898 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:03:56 crc kubenswrapper[4972]: I1121 10:03:56.182939 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 10:03:56 crc kubenswrapper[4972]: I1121 10:03:56.183540 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ec11dc5626524562fd7c3b24c6b4002aa3a346dd5009bf5fa88dabd42ba42bd"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 10:03:56 crc kubenswrapper[4972]: I1121 10:03:56.183581 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://7ec11dc5626524562fd7c3b24c6b4002aa3a346dd5009bf5fa88dabd42ba42bd" gracePeriod=600 Nov 21 10:03:57 crc kubenswrapper[4972]: I1121 10:03:57.932796 4972 generic.go:334] "Generic (PLEG): container finished" podID="0d79a7e8-a443-47ad-b364-68491accfa3d" containerID="8dd0d4b4fdaee9e203bdfba8f14112ab1b42869a816c8570d3ef18bd5b1ea26d" exitCode=0 Nov 21 10:03:57 crc kubenswrapper[4972]: I1121 10:03:57.932890 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cwb8d" event={"ID":"0d79a7e8-a443-47ad-b364-68491accfa3d","Type":"ContainerDied","Data":"8dd0d4b4fdaee9e203bdfba8f14112ab1b42869a816c8570d3ef18bd5b1ea26d"} Nov 21 10:03:57 crc kubenswrapper[4972]: I1121 10:03:57.936293 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="7ec11dc5626524562fd7c3b24c6b4002aa3a346dd5009bf5fa88dabd42ba42bd" exitCode=0 Nov 21 10:03:57 crc kubenswrapper[4972]: I1121 10:03:57.936341 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"7ec11dc5626524562fd7c3b24c6b4002aa3a346dd5009bf5fa88dabd42ba42bd"} Nov 21 10:03:57 crc kubenswrapper[4972]: I1121 10:03:57.936380 4972 scope.go:117] "RemoveContainer" containerID="b5f6ea95f3d9b88cf1528773dedbad651b22ffa03b2cdc9849fa7c5b9b96c05e" Nov 21 10:04:00 crc kubenswrapper[4972]: I1121 10:04:00.813991 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-558565fb6f-25dzm" Nov 21 10:04:00 crc kubenswrapper[4972]: I1121 10:04:00.881688 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdfc8db59-mmcsb"] Nov 21 10:04:00 crc 
kubenswrapper[4972]: I1121 10:04:00.881955 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" podUID="093f696e-dee6-47dd-ba6f-07e65f594e60" containerName="dnsmasq-dns" containerID="cri-o://39748cd594210f7c13af38928aceccdc179584a99252069659d800a228a58a60" gracePeriod=10 Nov 21 10:04:01 crc kubenswrapper[4972]: I1121 10:04:01.975109 4972 generic.go:334] "Generic (PLEG): container finished" podID="093f696e-dee6-47dd-ba6f-07e65f594e60" containerID="39748cd594210f7c13af38928aceccdc179584a99252069659d800a228a58a60" exitCode=0 Nov 21 10:04:01 crc kubenswrapper[4972]: I1121 10:04:01.975151 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" event={"ID":"093f696e-dee6-47dd-ba6f-07e65f594e60","Type":"ContainerDied","Data":"39748cd594210f7c13af38928aceccdc179584a99252069659d800a228a58a60"} Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.688224 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.712996 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-fernet-keys\") pod \"0d79a7e8-a443-47ad-b364-68491accfa3d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.713086 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v8m2\" (UniqueName: \"kubernetes.io/projected/0d79a7e8-a443-47ad-b364-68491accfa3d-kube-api-access-4v8m2\") pod \"0d79a7e8-a443-47ad-b364-68491accfa3d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.713128 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-combined-ca-bundle\") pod \"0d79a7e8-a443-47ad-b364-68491accfa3d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.713176 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-credential-keys\") pod \"0d79a7e8-a443-47ad-b364-68491accfa3d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.713205 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-config-data\") pod \"0d79a7e8-a443-47ad-b364-68491accfa3d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.713276 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-scripts\") pod \"0d79a7e8-a443-47ad-b364-68491accfa3d\" (UID: \"0d79a7e8-a443-47ad-b364-68491accfa3d\") " Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.721057 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-scripts" (OuterVolumeSpecName: "scripts") pod "0d79a7e8-a443-47ad-b364-68491accfa3d" (UID: "0d79a7e8-a443-47ad-b364-68491accfa3d"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.723861 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d79a7e8-a443-47ad-b364-68491accfa3d-kube-api-access-4v8m2" (OuterVolumeSpecName: "kube-api-access-4v8m2") pod "0d79a7e8-a443-47ad-b364-68491accfa3d" (UID: "0d79a7e8-a443-47ad-b364-68491accfa3d"). InnerVolumeSpecName "kube-api-access-4v8m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.725092 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0d79a7e8-a443-47ad-b364-68491accfa3d" (UID: "0d79a7e8-a443-47ad-b364-68491accfa3d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.732113 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0d79a7e8-a443-47ad-b364-68491accfa3d" (UID: "0d79a7e8-a443-47ad-b364-68491accfa3d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.743902 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d79a7e8-a443-47ad-b364-68491accfa3d" (UID: "0d79a7e8-a443-47ad-b364-68491accfa3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.749522 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-config-data" (OuterVolumeSpecName: "config-data") pod "0d79a7e8-a443-47ad-b364-68491accfa3d" (UID: "0d79a7e8-a443-47ad-b364-68491accfa3d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.816018 4972 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.816320 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.816330 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.816339 4972 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.816349 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v8m2\" (UniqueName: \"kubernetes.io/projected/0d79a7e8-a443-47ad-b364-68491accfa3d-kube-api-access-4v8m2\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.816359 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d79a7e8-a443-47ad-b364-68491accfa3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.995925 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cwb8d" event={"ID":"0d79a7e8-a443-47ad-b364-68491accfa3d","Type":"ContainerDied","Data":"2a51bf1a1493f6b2f9c708e788c369a6c90a68a37fb40e7672ad4a72eacf6014"} Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.996006 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a51bf1a1493f6b2f9c708e788c369a6c90a68a37fb40e7672ad4a72eacf6014" Nov 21 10:04:02 crc kubenswrapper[4972]: I1121 10:04:02.996158 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cwb8d" Nov 21 10:04:03 crc kubenswrapper[4972]: I1121 10:04:03.794450 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-cwb8d"] Nov 21 10:04:03 crc kubenswrapper[4972]: I1121 10:04:03.804974 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-cwb8d"] Nov 21 10:04:03 crc kubenswrapper[4972]: I1121 10:04:03.869072 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nwp8l"] Nov 21 10:04:03 crc kubenswrapper[4972]: E1121 10:04:03.869539 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d79a7e8-a443-47ad-b364-68491accfa3d" containerName="keystone-bootstrap" Nov 21 10:04:03 crc kubenswrapper[4972]: I1121 10:04:03.869560 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d79a7e8-a443-47ad-b364-68491accfa3d" containerName="keystone-bootstrap" Nov 21 10:04:03 crc kubenswrapper[4972]: E1121 10:04:03.869582 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e13b353-3989-46fb-9023-f79e0a4cec68" containerName="init" Nov 21 10:04:03 crc kubenswrapper[4972]: I1121 10:04:03.869592 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e13b353-3989-46fb-9023-f79e0a4cec68" containerName="init" Nov 21 10:04:03 crc kubenswrapper[4972]: I1121 10:04:03.869809 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e13b353-3989-46fb-9023-f79e0a4cec68" containerName="init" Nov 21 10:04:03 crc kubenswrapper[4972]: I1121 10:04:03.869840 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d79a7e8-a443-47ad-b364-68491accfa3d" containerName="keystone-bootstrap" Nov 21 10:04:03 crc kubenswrapper[4972]: I1121 10:04:03.870487 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:03 crc kubenswrapper[4972]: I1121 10:04:03.872671 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 21 10:04:03 crc kubenswrapper[4972]: I1121 10:04:03.873148 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 21 10:04:03 crc kubenswrapper[4972]: I1121 10:04:03.873248 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 21 10:04:03 crc kubenswrapper[4972]: I1121 10:04:03.873325 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-485dm" Nov 21 10:04:03 crc kubenswrapper[4972]: I1121 10:04:03.874451 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 21 10:04:03 crc kubenswrapper[4972]: I1121 10:04:03.890452 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nwp8l"] Nov 21 10:04:04 crc kubenswrapper[4972]: I1121 10:04:04.037691 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-fernet-keys\") pod \"keystone-bootstrap-nwp8l\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:04 crc kubenswrapper[4972]: I1121 10:04:04.037781 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-combined-ca-bundle\") pod \"keystone-bootstrap-nwp8l\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:04 crc kubenswrapper[4972]: I1121 10:04:04.038459 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-credential-keys\") pod \"keystone-bootstrap-nwp8l\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:04 crc kubenswrapper[4972]: I1121 10:04:04.038517 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-config-data\") pod \"keystone-bootstrap-nwp8l\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:04 crc kubenswrapper[4972]: I1121 10:04:04.038671 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-scripts\") pod \"keystone-bootstrap-nwp8l\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:04 crc kubenswrapper[4972]: I1121 10:04:04.038733 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bckj\" (UniqueName: \"kubernetes.io/projected/ad055efa-60cd-4e60-952c-cb732c443d62-kube-api-access-6bckj\") pod \"keystone-bootstrap-nwp8l\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:04 crc kubenswrapper[4972]: I1121 10:04:04.140196 4972 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-fernet-keys\") pod \"keystone-bootstrap-nwp8l\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:04 crc kubenswrapper[4972]: I1121 10:04:04.140271 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-combined-ca-bundle\") pod \"keystone-bootstrap-nwp8l\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:04 crc kubenswrapper[4972]: I1121 10:04:04.140321 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-credential-keys\") pod \"keystone-bootstrap-nwp8l\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:04 crc kubenswrapper[4972]: I1121 10:04:04.140349 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-config-data\") pod \"keystone-bootstrap-nwp8l\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:04 crc kubenswrapper[4972]: I1121 10:04:04.140408 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-scripts\") pod \"keystone-bootstrap-nwp8l\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:04 crc kubenswrapper[4972]: I1121 10:04:04.140445 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bckj\" (UniqueName: \"kubernetes.io/projected/ad055efa-60cd-4e60-952c-cb732c443d62-kube-api-access-6bckj\") pod \"keystone-bootstrap-nwp8l\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:04 crc kubenswrapper[4972]: I1121 10:04:04.156071 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-scripts\") pod \"keystone-bootstrap-nwp8l\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:04 crc kubenswrapper[4972]: I1121 10:04:04.156497 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-credential-keys\") pod \"keystone-bootstrap-nwp8l\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:04 crc kubenswrapper[4972]: I1121 10:04:04.156616 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-combined-ca-bundle\") pod \"keystone-bootstrap-nwp8l\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:04 crc kubenswrapper[4972]: I1121 10:04:04.157326 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bckj\" (UniqueName: \"kubernetes.io/projected/ad055efa-60cd-4e60-952c-cb732c443d62-kube-api-access-6bckj\") pod \"keystone-bootstrap-nwp8l\" (UID: 
\"ad055efa-60cd-4e60-952c-cb732c443d62\") " pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:04 crc kubenswrapper[4972]: I1121 10:04:04.157441 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-config-data\") pod \"keystone-bootstrap-nwp8l\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:04 crc kubenswrapper[4972]: I1121 10:04:04.157669 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-fernet-keys\") pod \"keystone-bootstrap-nwp8l\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:04 crc kubenswrapper[4972]: I1121 10:04:04.191080 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:04 crc kubenswrapper[4972]: I1121 10:04:04.762532 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" podUID="093f696e-dee6-47dd-ba6f-07e65f594e60" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Nov 21 10:04:04 crc kubenswrapper[4972]: E1121 10:04:04.809903 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api@sha256:ce8908c54afc567e5791f2940656436928afa6f135ebae31274e5283cf13f448" Nov 21 10:04:04 crc kubenswrapper[4972]: E1121 10:04:04.810292 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:ce8908c54afc567e5791f2940656436928afa6f135ebae31274e5283cf13f448,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-slxcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-7tm9c_openstack(9359e0ad-9677-4dfd-8cc2-bb9e40144ab3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 10:04:04 crc kubenswrapper[4972]: E1121 10:04:04.811572 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-7tm9c" podUID="9359e0ad-9677-4dfd-8cc2-bb9e40144ab3" Nov 21 10:04:05 crc kubenswrapper[4972]: E1121 10:04:05.016102 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api@sha256:ce8908c54afc567e5791f2940656436928afa6f135ebae31274e5283cf13f448\\\"\"" pod="openstack/placement-db-sync-7tm9c" podUID="9359e0ad-9677-4dfd-8cc2-bb9e40144ab3" Nov 21 10:04:05 crc kubenswrapper[4972]: I1121 10:04:05.772786 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d79a7e8-a443-47ad-b364-68491accfa3d" path="/var/lib/kubelet/pods/0d79a7e8-a443-47ad-b364-68491accfa3d/volumes" Nov 21 10:04:09 crc kubenswrapper[4972]: I1121 10:04:09.762430 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" podUID="093f696e-dee6-47dd-ba6f-07e65f594e60" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Nov 21 10:04:13 crc 
kubenswrapper[4972]: E1121 10:04:13.985773 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:8c7ecaaf282fb3dd419c02a3e017d5f190e1e0831965f1ce366b9763700b4e4a" Nov 21 10:04:13 crc kubenswrapper[4972]: E1121 10:04:13.986327 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:8c7ecaaf282fb3dd419c02a3e017d5f190e1e0831965f1ce366b9763700b4e4a,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k9j8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-g84kj_openstack(f9513939-1a73-46a3-a946-db9b1008314f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 10:04:13 crc kubenswrapper[4972]: E1121 10:04:13.987775 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-g84kj" podUID="f9513939-1a73-46a3-a946-db9b1008314f" Nov 21 10:04:13 crc kubenswrapper[4972]: E1121 10:04:13.999346 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:09513b0e7384092548dd654fa2356d64e243315cf59fa8857bd6c4a3ae4037c4" Nov 21 10:04:13 crc kubenswrapper[4972]: E1121 10:04:13.999667 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:09513b0e7384092548dd654fa2356d64e243315cf59fa8857bd6c4a3ae4037c4,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rztgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-2jlvt_openstack(9c8d121f-c5f7-42c3-a8ce-6cbb48064e25): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 10:04:14 crc kubenswrapper[4972]: E1121 10:04:14.000916 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-2jlvt" podUID="9c8d121f-c5f7-42c3-a8ce-6cbb48064e25" Nov 21 10:04:14 crc kubenswrapper[4972]: E1121 10:04:14.107924 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:09513b0e7384092548dd654fa2356d64e243315cf59fa8857bd6c4a3ae4037c4\\\"\"" pod="openstack/barbican-db-sync-2jlvt" podUID="9c8d121f-c5f7-42c3-a8ce-6cbb48064e25" Nov 21 10:04:15 crc kubenswrapper[4972]: E1121 10:04:15.071192 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:25d0e71e0464df9502ad8bd3af0f73caeaca1bae11d89b4b5992b4fe712eda3a" Nov 21 10:04:15 crc kubenswrapper[4972]: E1121 10:04:15.071638 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:25d0e71e0464df9502ad8bd3af0f73caeaca1bae11d89b4b5992b4fe712eda3a,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wzmg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-s77fk_openstack(025ad09c-467a-451c-a24d-4bf686469677): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 10:04:15 crc kubenswrapper[4972]: E1121 10:04:15.072961 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-s77fk" podUID="025ad09c-467a-451c-a24d-4bf686469677" Nov 21 10:04:15 crc kubenswrapper[4972]: I1121 10:04:15.132827 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" event={"ID":"093f696e-dee6-47dd-ba6f-07e65f594e60","Type":"ContainerDied","Data":"dd55be8c8459db780abf6ef086a079d584b7b50491cd949a3712a28d97665de4"} Nov 21 10:04:15 crc kubenswrapper[4972]: I1121 10:04:15.132890 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd55be8c8459db780abf6ef086a079d584b7b50491cd949a3712a28d97665de4" Nov 21 10:04:15 crc kubenswrapper[4972]: E1121 10:04:15.133960 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:25d0e71e0464df9502ad8bd3af0f73caeaca1bae11d89b4b5992b4fe712eda3a\\\"\"" pod="openstack/cinder-db-sync-s77fk" podUID="025ad09c-467a-451c-a24d-4bf686469677" Nov 21 10:04:15 crc kubenswrapper[4972]: I1121 10:04:15.271503 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" Nov 21 10:04:15 crc kubenswrapper[4972]: I1121 10:04:15.341294 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5hdq\" (UniqueName: \"kubernetes.io/projected/093f696e-dee6-47dd-ba6f-07e65f594e60-kube-api-access-t5hdq\") pod \"093f696e-dee6-47dd-ba6f-07e65f594e60\" (UID: \"093f696e-dee6-47dd-ba6f-07e65f594e60\") " Nov 21 10:04:15 crc kubenswrapper[4972]: I1121 10:04:15.341370 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-config\") pod \"093f696e-dee6-47dd-ba6f-07e65f594e60\" (UID: \"093f696e-dee6-47dd-ba6f-07e65f594e60\") " Nov 21 10:04:15 crc kubenswrapper[4972]: I1121 10:04:15.341389 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-dns-svc\") pod \"093f696e-dee6-47dd-ba6f-07e65f594e60\" (UID: \"093f696e-dee6-47dd-ba6f-07e65f594e60\") " Nov 21 10:04:15 crc kubenswrapper[4972]: I1121 10:04:15.341434 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-ovsdbserver-sb\") pod \"093f696e-dee6-47dd-ba6f-07e65f594e60\" (UID: \"093f696e-dee6-47dd-ba6f-07e65f594e60\") " Nov 21 10:04:15 crc kubenswrapper[4972]: I1121 10:04:15.341517 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-ovsdbserver-nb\") pod \"093f696e-dee6-47dd-ba6f-07e65f594e60\" (UID: \"093f696e-dee6-47dd-ba6f-07e65f594e60\") " Nov 21 10:04:15 crc kubenswrapper[4972]: I1121 10:04:15.349575 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/093f696e-dee6-47dd-ba6f-07e65f594e60-kube-api-access-t5hdq" (OuterVolumeSpecName: "kube-api-access-t5hdq") pod "093f696e-dee6-47dd-ba6f-07e65f594e60" (UID: "093f696e-dee6-47dd-ba6f-07e65f594e60"). InnerVolumeSpecName "kube-api-access-t5hdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:04:15 crc kubenswrapper[4972]: I1121 10:04:15.394229 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "093f696e-dee6-47dd-ba6f-07e65f594e60" (UID: "093f696e-dee6-47dd-ba6f-07e65f594e60"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:15 crc kubenswrapper[4972]: I1121 10:04:15.396723 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "093f696e-dee6-47dd-ba6f-07e65f594e60" (UID: "093f696e-dee6-47dd-ba6f-07e65f594e60"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:15 crc kubenswrapper[4972]: I1121 10:04:15.422788 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-config" (OuterVolumeSpecName: "config") pod "093f696e-dee6-47dd-ba6f-07e65f594e60" (UID: "093f696e-dee6-47dd-ba6f-07e65f594e60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:15 crc kubenswrapper[4972]: I1121 10:04:15.423566 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "093f696e-dee6-47dd-ba6f-07e65f594e60" (UID: "093f696e-dee6-47dd-ba6f-07e65f594e60"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:15 crc kubenswrapper[4972]: I1121 10:04:15.442818 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:15 crc kubenswrapper[4972]: I1121 10:04:15.442870 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5hdq\" (UniqueName: \"kubernetes.io/projected/093f696e-dee6-47dd-ba6f-07e65f594e60-kube-api-access-t5hdq\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:15 crc kubenswrapper[4972]: I1121 10:04:15.442883 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:15 crc kubenswrapper[4972]: I1121 10:04:15.442898 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:15 crc kubenswrapper[4972]: I1121 10:04:15.442907 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/093f696e-dee6-47dd-ba6f-07e65f594e60-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:15 crc kubenswrapper[4972]: W1121 10:04:15.626545 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31e140ab_a53a_4af2_864f_4c399d44f217.slice/crio-19de5ae47de38759656d8b02d8d0f5cd55c1234a94a1c1b7a5f0ad33a98b5d58 WatchSource:0}: Error finding container 19de5ae47de38759656d8b02d8d0f5cd55c1234a94a1c1b7a5f0ad33a98b5d58: Status 404 returned error can't find the container with id 19de5ae47de38759656d8b02d8d0f5cd55c1234a94a1c1b7a5f0ad33a98b5d58 Nov 21 10:04:15 crc kubenswrapper[4972]: I1121 10:04:15.626829 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 21 10:04:15 crc kubenswrapper[4972]: I1121 10:04:15.684254 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nwp8l"] Nov 21 10:04:15 crc kubenswrapper[4972]: W1121 10:04:15.693955 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad055efa_60cd_4e60_952c_cb732c443d62.slice/crio-d7f4055b9af7751280564666bf592d80c4e099987eb1ff132714aa0ef2d226a0 WatchSource:0}: Error finding container d7f4055b9af7751280564666bf592d80c4e099987eb1ff132714aa0ef2d226a0: Status 404 returned error can't find the container with id 
d7f4055b9af7751280564666bf592d80c4e099987eb1ff132714aa0ef2d226a0 Nov 21 10:04:16 crc kubenswrapper[4972]: E1121 10:04:16.022925 4972 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod093f696e_dee6_47dd_ba6f_07e65f594e60.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod093f696e_dee6_47dd_ba6f_07e65f594e60.slice/crio-dd55be8c8459db780abf6ef086a079d584b7b50491cd949a3712a28d97665de4\": RecentStats: unable to find data in memory cache]" Nov 21 10:04:16 crc kubenswrapper[4972]: I1121 10:04:16.140596 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerStarted","Data":"19de5ae47de38759656d8b02d8d0f5cd55c1234a94a1c1b7a5f0ad33a98b5d58"} Nov 21 10:04:16 crc kubenswrapper[4972]: I1121 10:04:16.142804 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f23a86fe-e939-4663-b964-454211c5d446","Type":"ContainerStarted","Data":"7281a142a4c53eaf85f98f739a1bc21ac3985c85ea2af36bfcd9fa7599671dbb"} Nov 21 10:04:16 crc kubenswrapper[4972]: I1121 10:04:16.145118 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8"} Nov 21 10:04:16 crc kubenswrapper[4972]: I1121 10:04:16.146795 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" Nov 21 10:04:16 crc kubenswrapper[4972]: I1121 10:04:16.147934 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nwp8l" event={"ID":"ad055efa-60cd-4e60-952c-cb732c443d62","Type":"ContainerStarted","Data":"c4226bed7fef0e427cad20601d46d442deb53e7fb865a67e0d99d0deaaa2b853"} Nov 21 10:04:16 crc kubenswrapper[4972]: I1121 10:04:16.147958 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nwp8l" event={"ID":"ad055efa-60cd-4e60-952c-cb732c443d62","Type":"ContainerStarted","Data":"d7f4055b9af7751280564666bf592d80c4e099987eb1ff132714aa0ef2d226a0"} Nov 21 10:04:16 crc kubenswrapper[4972]: I1121 10:04:16.189950 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdfc8db59-mmcsb"] Nov 21 10:04:16 crc kubenswrapper[4972]: I1121 10:04:16.200944 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bdfc8db59-mmcsb"] Nov 21 10:04:16 crc kubenswrapper[4972]: I1121 10:04:16.205325 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nwp8l" podStartSLOduration=13.205306002 podStartE2EDuration="13.205306002s" podCreationTimestamp="2025-11-21 10:04:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:04:16.197645238 +0000 UTC m=+1401.306787756" watchObservedRunningTime="2025-11-21 10:04:16.205306002 +0000 UTC m=+1401.314448500" Nov 21 10:04:17 crc kubenswrapper[4972]: I1121 10:04:17.155932 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerStarted","Data":"ba577ff7853e877e687486121c6f0ab731e335150c782fdb6337e45da1ea7e56"} Nov 21 10:04:17 crc kubenswrapper[4972]: I1121 10:04:17.159160 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f23a86fe-e939-4663-b964-454211c5d446","Type":"ContainerStarted","Data":"750a20df693e3ef6fdeb49daa0b334d27d70c08a01a27a3ea0406685b4a367fd"} Nov 21 10:04:17 crc kubenswrapper[4972]: I1121 10:04:17.790614 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="093f696e-dee6-47dd-ba6f-07e65f594e60" path="/var/lib/kubelet/pods/093f696e-dee6-47dd-ba6f-07e65f594e60/volumes" Nov 21 10:04:18 crc kubenswrapper[4972]: I1121 10:04:18.171409 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerStarted","Data":"1d432671871d10b2f9d36122beb37f70113843388eedcb543148c0842f970029"} Nov 21 10:04:18 crc kubenswrapper[4972]: I1121 10:04:18.171792 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerStarted","Data":"8c31ccc0050d4e99074a90c40277647465e43314e1fdbb8b1f6a9b4753e956a8"} Nov 21 10:04:18 crc kubenswrapper[4972]: I1121 10:04:18.171805 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerStarted","Data":"7ac0c52eaf55d9c6a4f11a7c5914428a511032a6d41ca1f5562b5b774ab41f34"} Nov 21 10:04:19 crc kubenswrapper[4972]: I1121 10:04:19.181562 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7tm9c" event={"ID":"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3","Type":"ContainerStarted","Data":"dbf761b28dcff3b4c93b8b8713f7d86b5a2245b941292e19874fd9aa1d054251"} Nov 21 10:04:19 crc kubenswrapper[4972]: I1121 10:04:19.202061 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-7tm9c" podStartSLOduration=2.506243497 podStartE2EDuration="29.202045819s" podCreationTimestamp="2025-11-21 10:03:50 +0000 UTC" firstStartedPulling="2025-11-21 10:03:51.578054281 +0000 UTC m=+1376.687196779" lastFinishedPulling="2025-11-21 10:04:18.273856593 +0000 UTC m=+1403.382999101" observedRunningTime="2025-11-21 10:04:19.19910535 +0000 UTC m=+1404.308247858" watchObservedRunningTime="2025-11-21 10:04:19.202045819 +0000 UTC m=+1404.311188317" Nov 21 10:04:19 crc kubenswrapper[4972]: I1121 10:04:19.763003 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7bdfc8db59-mmcsb" podUID="093f696e-dee6-47dd-ba6f-07e65f594e60" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Nov 21 10:04:20 crc kubenswrapper[4972]: I1121 10:04:20.191249 4972 generic.go:334] "Generic (PLEG): container finished" podID="ad055efa-60cd-4e60-952c-cb732c443d62" containerID="c4226bed7fef0e427cad20601d46d442deb53e7fb865a67e0d99d0deaaa2b853" exitCode=0 Nov 21 10:04:20 crc kubenswrapper[4972]: I1121 10:04:20.191291 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nwp8l" event={"ID":"ad055efa-60cd-4e60-952c-cb732c443d62","Type":"ContainerDied","Data":"c4226bed7fef0e427cad20601d46d442deb53e7fb865a67e0d99d0deaaa2b853"} Nov 21 10:04:23 crc kubenswrapper[4972]: I1121 10:04:23.939637 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:24 crc kubenswrapper[4972]: I1121 10:04:24.105736 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-config-data\") pod \"ad055efa-60cd-4e60-952c-cb732c443d62\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " Nov 21 10:04:24 crc kubenswrapper[4972]: I1121 10:04:24.105791 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-combined-ca-bundle\") pod \"ad055efa-60cd-4e60-952c-cb732c443d62\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " Nov 21 10:04:24 crc kubenswrapper[4972]: I1121 10:04:24.105881 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bckj\" (UniqueName: \"kubernetes.io/projected/ad055efa-60cd-4e60-952c-cb732c443d62-kube-api-access-6bckj\") pod \"ad055efa-60cd-4e60-952c-cb732c443d62\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " Nov 21 10:04:24 crc kubenswrapper[4972]: I1121 10:04:24.106050 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-scripts\") pod \"ad055efa-60cd-4e60-952c-cb732c443d62\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " Nov 21 10:04:24 crc kubenswrapper[4972]: I1121 10:04:24.106089 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-credential-keys\") pod \"ad055efa-60cd-4e60-952c-cb732c443d62\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " Nov 21 10:04:24 crc kubenswrapper[4972]: I1121 10:04:24.106111 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-fernet-keys\") pod \"ad055efa-60cd-4e60-952c-cb732c443d62\" (UID: \"ad055efa-60cd-4e60-952c-cb732c443d62\") " Nov 21 10:04:24 crc kubenswrapper[4972]: I1121 10:04:24.112963 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ad055efa-60cd-4e60-952c-cb732c443d62" (UID: "ad055efa-60cd-4e60-952c-cb732c443d62"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:24 crc kubenswrapper[4972]: I1121 10:04:24.113595 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-scripts" (OuterVolumeSpecName: "scripts") pod "ad055efa-60cd-4e60-952c-cb732c443d62" (UID: "ad055efa-60cd-4e60-952c-cb732c443d62"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:24 crc kubenswrapper[4972]: I1121 10:04:24.114058 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad055efa-60cd-4e60-952c-cb732c443d62-kube-api-access-6bckj" (OuterVolumeSpecName: "kube-api-access-6bckj") pod "ad055efa-60cd-4e60-952c-cb732c443d62" (UID: "ad055efa-60cd-4e60-952c-cb732c443d62"). InnerVolumeSpecName "kube-api-access-6bckj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:04:24 crc kubenswrapper[4972]: I1121 10:04:24.114749 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ad055efa-60cd-4e60-952c-cb732c443d62" (UID: "ad055efa-60cd-4e60-952c-cb732c443d62"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:24 crc kubenswrapper[4972]: I1121 10:04:24.133520 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-config-data" (OuterVolumeSpecName: "config-data") pod "ad055efa-60cd-4e60-952c-cb732c443d62" (UID: "ad055efa-60cd-4e60-952c-cb732c443d62"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:24 crc kubenswrapper[4972]: I1121 10:04:24.146180 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad055efa-60cd-4e60-952c-cb732c443d62" (UID: "ad055efa-60cd-4e60-952c-cb732c443d62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:24 crc kubenswrapper[4972]: I1121 10:04:24.208418 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:24 crc kubenswrapper[4972]: I1121 10:04:24.208470 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:24 crc kubenswrapper[4972]: I1121 10:04:24.208492 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bckj\" (UniqueName: \"kubernetes.io/projected/ad055efa-60cd-4e60-952c-cb732c443d62-kube-api-access-6bckj\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:24 crc kubenswrapper[4972]: I1121 10:04:24.208510 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:24 crc kubenswrapper[4972]: I1121 10:04:24.208527 4972 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:24 crc kubenswrapper[4972]: I1121 10:04:24.208544 4972 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ad055efa-60cd-4e60-952c-cb732c443d62-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:24 crc kubenswrapper[4972]: I1121 10:04:24.232994 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nwp8l" event={"ID":"ad055efa-60cd-4e60-952c-cb732c443d62","Type":"ContainerDied","Data":"d7f4055b9af7751280564666bf592d80c4e099987eb1ff132714aa0ef2d226a0"} Nov 21 10:04:24 crc kubenswrapper[4972]: I1121 10:04:24.233036 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7f4055b9af7751280564666bf592d80c4e099987eb1ff132714aa0ef2d226a0" Nov 21 10:04:24 crc kubenswrapper[4972]: I1121 10:04:24.233092 4972 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nwp8l" Nov 21 10:04:24 crc kubenswrapper[4972]: E1121 10:04:24.760942 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:8c7ecaaf282fb3dd419c02a3e017d5f190e1e0831965f1ce366b9763700b4e4a\\\"\"" pod="openstack/glance-db-sync-g84kj" podUID="f9513939-1a73-46a3-a946-db9b1008314f" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.083799 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db44576f7-2qgwb"] Nov 21 10:04:25 crc kubenswrapper[4972]: E1121 10:04:25.084464 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad055efa-60cd-4e60-952c-cb732c443d62" containerName="keystone-bootstrap" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.084487 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad055efa-60cd-4e60-952c-cb732c443d62" containerName="keystone-bootstrap" Nov 21 10:04:25 crc kubenswrapper[4972]: E1121 10:04:25.084511 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="093f696e-dee6-47dd-ba6f-07e65f594e60" containerName="init" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.084524 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="093f696e-dee6-47dd-ba6f-07e65f594e60" containerName="init" Nov 21 10:04:25 crc kubenswrapper[4972]: E1121 10:04:25.084547 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="093f696e-dee6-47dd-ba6f-07e65f594e60" containerName="dnsmasq-dns" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.084561 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="093f696e-dee6-47dd-ba6f-07e65f594e60" containerName="dnsmasq-dns" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.085572 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="093f696e-dee6-47dd-ba6f-07e65f594e60" containerName="dnsmasq-dns" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.085630 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad055efa-60cd-4e60-952c-cb732c443d62" containerName="keystone-bootstrap" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.086467 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.090444 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.090791 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.091037 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.091188 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.091824 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-485dm" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.092170 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.098979 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db44576f7-2qgwb"] Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.225705 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-internal-tls-certs\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.225771 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltvlm\" (UniqueName: \"kubernetes.io/projected/fe028de3-cf0f-4ab0-ab52-0898bd408c89-kube-api-access-ltvlm\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.225815 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-fernet-keys\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.225856 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-scripts\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.225876 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-config-data\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.225911 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-public-tls-certs\") pod \"keystone-db44576f7-2qgwb\" (UID: 
\"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.225931 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-credential-keys\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.225978 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-combined-ca-bundle\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.332031 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-public-tls-certs\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.332087 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-credential-keys\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.332157 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-combined-ca-bundle\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.332204 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-internal-tls-certs\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.332234 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltvlm\" (UniqueName: \"kubernetes.io/projected/fe028de3-cf0f-4ab0-ab52-0898bd408c89-kube-api-access-ltvlm\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.332285 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-fernet-keys\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.332317 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-scripts\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" 
Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.332341 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-config-data\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.338539 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-config-data\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.339242 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-scripts\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.340362 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-internal-tls-certs\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.343320 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-credential-keys\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.344959 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-public-tls-certs\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.346244 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-fernet-keys\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.348007 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-combined-ca-bundle\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.354977 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltvlm\" (UniqueName: \"kubernetes.io/projected/fe028de3-cf0f-4ab0-ab52-0898bd408c89-kube-api-access-ltvlm\") pod \"keystone-db44576f7-2qgwb\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.408533 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:25 crc kubenswrapper[4972]: I1121 10:04:25.992461 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db44576f7-2qgwb"] Nov 21 10:04:26 crc kubenswrapper[4972]: I1121 10:04:26.263358 4972 generic.go:334] "Generic (PLEG): container finished" podID="9359e0ad-9677-4dfd-8cc2-bb9e40144ab3" containerID="dbf761b28dcff3b4c93b8b8713f7d86b5a2245b941292e19874fd9aa1d054251" exitCode=0 Nov 21 10:04:26 crc kubenswrapper[4972]: I1121 10:04:26.263712 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7tm9c" event={"ID":"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3","Type":"ContainerDied","Data":"dbf761b28dcff3b4c93b8b8713f7d86b5a2245b941292e19874fd9aa1d054251"} Nov 21 10:04:26 crc kubenswrapper[4972]: I1121 10:04:26.267393 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f23a86fe-e939-4663-b964-454211c5d446","Type":"ContainerStarted","Data":"e14cb4aad212d1998014fdf6f5ffdb1b7c811353ca8ed380411d627dab945835"} Nov 21 10:04:26 crc kubenswrapper[4972]: I1121 10:04:26.269157 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db44576f7-2qgwb" event={"ID":"fe028de3-cf0f-4ab0-ab52-0898bd408c89","Type":"ContainerStarted","Data":"fd986699756945449cded494e1de01714d7ff00650c9222a9828300ebf637188"} Nov 21 10:04:26 crc kubenswrapper[4972]: I1121 10:04:26.269201 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db44576f7-2qgwb" event={"ID":"fe028de3-cf0f-4ab0-ab52-0898bd408c89","Type":"ContainerStarted","Data":"d5e727113c90bc7f1cbb3731746ce893b8da96ada655758dfad1ccc50e6f8bf0"} Nov 21 10:04:26 crc kubenswrapper[4972]: I1121 10:04:26.269925 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:26 crc kubenswrapper[4972]: I1121 10:04:26.272715 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerStarted","Data":"da2ba4db5685edc3025f879f7e189cf7165c184ef92f7d26d3118102cbc00186"} Nov 21 10:04:26 crc kubenswrapper[4972]: I1121 10:04:26.272740 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerStarted","Data":"56e50d004614f42f95a39a005d2e581ae7498a4ab2ace52e0c8e44e4cb64b156"} Nov 21 10:04:26 crc kubenswrapper[4972]: I1121 10:04:26.272749 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerStarted","Data":"4e7f746ee8e85533e7ed177d7195703edc2217f4d9450127a0eefddf988dd729"} Nov 21 10:04:26 crc kubenswrapper[4972]: I1121 10:04:26.301059 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db44576f7-2qgwb" podStartSLOduration=1.301044036 podStartE2EDuration="1.301044036s" podCreationTimestamp="2025-11-21 10:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:04:26.300154742 +0000 UTC m=+1411.409297250" watchObservedRunningTime="2025-11-21 10:04:26.301044036 +0000 UTC m=+1411.410186534" Nov 21 10:04:27 crc kubenswrapper[4972]: I1121 10:04:27.285999 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerStarted","Data":"ef4d5eb5bf9e2085aa31deab41d35f315b471c1a281ec7d0fdb5669055ceae7e"} Nov 21 10:04:27 crc kubenswrapper[4972]: I1121 10:04:27.288558 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2jlvt" event={"ID":"9c8d121f-c5f7-42c3-a8ce-6cbb48064e25","Type":"ContainerStarted","Data":"7051de170622f6e3d9ee0aeb88f14d4e81e53671ef41a9e5a7aa056b0f637786"} Nov 21 10:04:27 crc kubenswrapper[4972]: I1121 10:04:27.312605 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-2jlvt" podStartSLOduration=2.505895656 podStartE2EDuration="37.312586509s" podCreationTimestamp="2025-11-21 10:03:50 +0000 UTC" firstStartedPulling="2025-11-21 10:03:51.451981651 +0000 UTC m=+1376.561124149" lastFinishedPulling="2025-11-21 10:04:26.258672504 +0000 UTC m=+1411.367815002" observedRunningTime="2025-11-21 10:04:27.30776002 +0000 UTC m=+1412.416902538" watchObservedRunningTime="2025-11-21 10:04:27.312586509 +0000 UTC m=+1412.421729007" Nov 21 10:04:27 crc kubenswrapper[4972]: I1121 10:04:27.744111 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7tm9c" Nov 21 10:04:27 crc kubenswrapper[4972]: I1121 10:04:27.817076 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-logs\") pod \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\" (UID: \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\") " Nov 21 10:04:27 crc kubenswrapper[4972]: I1121 10:04:27.817165 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-config-data\") pod \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\" (UID: \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\") " Nov 21 10:04:27 crc kubenswrapper[4972]: I1121 10:04:27.817325 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slxcc\" (UniqueName: \"kubernetes.io/projected/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-kube-api-access-slxcc\") pod \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\" (UID: \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\") " Nov 21 10:04:27 crc kubenswrapper[4972]: I1121 10:04:27.817717 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-logs" (OuterVolumeSpecName: "logs") pod "9359e0ad-9677-4dfd-8cc2-bb9e40144ab3" (UID: "9359e0ad-9677-4dfd-8cc2-bb9e40144ab3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:04:27 crc kubenswrapper[4972]: I1121 10:04:27.818267 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-combined-ca-bundle\") pod \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\" (UID: \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\") " Nov 21 10:04:27 crc kubenswrapper[4972]: I1121 10:04:27.818336 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-scripts\") pod \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\" (UID: \"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3\") " Nov 21 10:04:27 crc kubenswrapper[4972]: I1121 10:04:27.818762 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:27 crc kubenswrapper[4972]: I1121 10:04:27.823597 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-kube-api-access-slxcc" (OuterVolumeSpecName: "kube-api-access-slxcc") pod "9359e0ad-9677-4dfd-8cc2-bb9e40144ab3" (UID: "9359e0ad-9677-4dfd-8cc2-bb9e40144ab3"). InnerVolumeSpecName "kube-api-access-slxcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:04:27 crc kubenswrapper[4972]: I1121 10:04:27.841080 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-scripts" (OuterVolumeSpecName: "scripts") pod "9359e0ad-9677-4dfd-8cc2-bb9e40144ab3" (UID: "9359e0ad-9677-4dfd-8cc2-bb9e40144ab3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:27 crc kubenswrapper[4972]: I1121 10:04:27.845002 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-config-data" (OuterVolumeSpecName: "config-data") pod "9359e0ad-9677-4dfd-8cc2-bb9e40144ab3" (UID: "9359e0ad-9677-4dfd-8cc2-bb9e40144ab3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:27 crc kubenswrapper[4972]: I1121 10:04:27.847636 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9359e0ad-9677-4dfd-8cc2-bb9e40144ab3" (UID: "9359e0ad-9677-4dfd-8cc2-bb9e40144ab3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:27 crc kubenswrapper[4972]: I1121 10:04:27.920600 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slxcc\" (UniqueName: \"kubernetes.io/projected/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-kube-api-access-slxcc\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:27 crc kubenswrapper[4972]: I1121 10:04:27.920866 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:27 crc kubenswrapper[4972]: I1121 10:04:27.920876 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:27 crc kubenswrapper[4972]: I1121 10:04:27.920885 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.322748 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerStarted","Data":"7e3179b2cf36ea30c1f398322b657083876aff67dca73310812bf6eda27e562d"} Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.322789 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerStarted","Data":"6b51383c400616239b3920aae870a35808849c73b781889b2d7c3fca1086fcc9"} Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.322799 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerStarted","Data":"6d2997e2bf31afa38122b707eaffd973a10f37e15af3ef380d90f5a0e46e40a2"} Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.325592 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7tm9c" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.333255 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7tm9c" event={"ID":"9359e0ad-9677-4dfd-8cc2-bb9e40144ab3","Type":"ContainerDied","Data":"d527ecaabb8673ba5d29b2a7f099e7ef56cc832b1251db01af878b73f5e785b7"} Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.333299 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d527ecaabb8673ba5d29b2a7f099e7ef56cc832b1251db01af878b73f5e785b7" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.380133 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-78d4f89dc4-2qvzl"] Nov 21 10:04:28 crc kubenswrapper[4972]: E1121 10:04:28.380474 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9359e0ad-9677-4dfd-8cc2-bb9e40144ab3" containerName="placement-db-sync" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.380491 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="9359e0ad-9677-4dfd-8cc2-bb9e40144ab3" containerName="placement-db-sync" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.380669 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="9359e0ad-9677-4dfd-8cc2-bb9e40144ab3" containerName="placement-db-sync" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.381552 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.388430 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.388618 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jlzhq" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.388703 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.388938 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.389173 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.427790 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-logs\") pod \"placement-78d4f89dc4-2qvzl\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.427856 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-public-tls-certs\") pod \"placement-78d4f89dc4-2qvzl\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.427904 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bfzw\" (UniqueName: \"kubernetes.io/projected/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-kube-api-access-4bfzw\") pod \"placement-78d4f89dc4-2qvzl\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " 
pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.428310 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-internal-tls-certs\") pod \"placement-78d4f89dc4-2qvzl\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.428441 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-config-data\") pod \"placement-78d4f89dc4-2qvzl\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.428513 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-scripts\") pod \"placement-78d4f89dc4-2qvzl\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.428615 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-combined-ca-bundle\") pod \"placement-78d4f89dc4-2qvzl\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.456084 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78d4f89dc4-2qvzl"] Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.530720 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-config-data\") pod \"placement-78d4f89dc4-2qvzl\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.531027 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-scripts\") pod \"placement-78d4f89dc4-2qvzl\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.531158 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-combined-ca-bundle\") pod \"placement-78d4f89dc4-2qvzl\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.531285 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-logs\") pod \"placement-78d4f89dc4-2qvzl\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.531391 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-public-tls-certs\") pod \"placement-78d4f89dc4-2qvzl\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.531499 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bfzw\" (UniqueName: \"kubernetes.io/projected/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-kube-api-access-4bfzw\") pod \"placement-78d4f89dc4-2qvzl\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.531662 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-internal-tls-certs\") pod \"placement-78d4f89dc4-2qvzl\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.625402 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-logs\") pod \"placement-78d4f89dc4-2qvzl\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.629300 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-internal-tls-certs\") pod \"placement-78d4f89dc4-2qvzl\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.629599 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-config-data\") pod \"placement-78d4f89dc4-2qvzl\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.630128 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-public-tls-certs\") pod \"placement-78d4f89dc4-2qvzl\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.630333 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bfzw\" (UniqueName: \"kubernetes.io/projected/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-kube-api-access-4bfzw\") pod \"placement-78d4f89dc4-2qvzl\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.635252 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-scripts\") pod \"placement-78d4f89dc4-2qvzl\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.771509 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-combined-ca-bundle\") pod \"placement-78d4f89dc4-2qvzl\" (UID: 
\"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:28 crc kubenswrapper[4972]: I1121 10:04:28.796287 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:29 crc kubenswrapper[4972]: I1121 10:04:29.258965 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78d4f89dc4-2qvzl"] Nov 21 10:04:31 crc kubenswrapper[4972]: I1121 10:04:31.362378 4972 generic.go:334] "Generic (PLEG): container finished" podID="9c8d121f-c5f7-42c3-a8ce-6cbb48064e25" containerID="7051de170622f6e3d9ee0aeb88f14d4e81e53671ef41a9e5a7aa056b0f637786" exitCode=0 Nov 21 10:04:31 crc kubenswrapper[4972]: I1121 10:04:31.362663 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2jlvt" event={"ID":"9c8d121f-c5f7-42c3-a8ce-6cbb48064e25","Type":"ContainerDied","Data":"7051de170622f6e3d9ee0aeb88f14d4e81e53671ef41a9e5a7aa056b0f637786"} Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.214918 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2jlvt" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.323226 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rztgd\" (UniqueName: \"kubernetes.io/projected/9c8d121f-c5f7-42c3-a8ce-6cbb48064e25-kube-api-access-rztgd\") pod \"9c8d121f-c5f7-42c3-a8ce-6cbb48064e25\" (UID: \"9c8d121f-c5f7-42c3-a8ce-6cbb48064e25\") " Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.324271 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9c8d121f-c5f7-42c3-a8ce-6cbb48064e25-db-sync-config-data\") pod \"9c8d121f-c5f7-42c3-a8ce-6cbb48064e25\" (UID: \"9c8d121f-c5f7-42c3-a8ce-6cbb48064e25\") " Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.324435 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8d121f-c5f7-42c3-a8ce-6cbb48064e25-combined-ca-bundle\") pod \"9c8d121f-c5f7-42c3-a8ce-6cbb48064e25\" (UID: \"9c8d121f-c5f7-42c3-a8ce-6cbb48064e25\") " Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.346966 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c8d121f-c5f7-42c3-a8ce-6cbb48064e25-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9c8d121f-c5f7-42c3-a8ce-6cbb48064e25" (UID: "9c8d121f-c5f7-42c3-a8ce-6cbb48064e25"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.347419 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c8d121f-c5f7-42c3-a8ce-6cbb48064e25-kube-api-access-rztgd" (OuterVolumeSpecName: "kube-api-access-rztgd") pod "9c8d121f-c5f7-42c3-a8ce-6cbb48064e25" (UID: "9c8d121f-c5f7-42c3-a8ce-6cbb48064e25"). InnerVolumeSpecName "kube-api-access-rztgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.422386 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c8d121f-c5f7-42c3-a8ce-6cbb48064e25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c8d121f-c5f7-42c3-a8ce-6cbb48064e25" (UID: "9c8d121f-c5f7-42c3-a8ce-6cbb48064e25"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.429156 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8d121f-c5f7-42c3-a8ce-6cbb48064e25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.429189 4972 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9c8d121f-c5f7-42c3-a8ce-6cbb48064e25-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.429199 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rztgd\" (UniqueName: \"kubernetes.io/projected/9c8d121f-c5f7-42c3-a8ce-6cbb48064e25-kube-api-access-rztgd\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.489092 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerStarted","Data":"0f6fda84aaa98d450bf8db3dd84c394bcbdd91eb2c614ce51ee1f7e2fdf05d9e"} Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.500289 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2jlvt" event={"ID":"9c8d121f-c5f7-42c3-a8ce-6cbb48064e25","Type":"ContainerDied","Data":"6e3edfd5179c9d9008d06201c382485c94ffcda89d7c3a88789a10824b3790ac"} Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.500334 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e3edfd5179c9d9008d06201c382485c94ffcda89d7c3a88789a10824b3790ac" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.500399 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2jlvt" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.525029 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78d4f89dc4-2qvzl" event={"ID":"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd","Type":"ContainerStarted","Data":"293a0ba14db09e2c1ece2b6244ac59a9b488c2d99d7824f84bddec0614abdd65"} Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.632930 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7fcd667fc5-5ctgv"] Nov 21 10:04:33 crc kubenswrapper[4972]: E1121 10:04:33.633439 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c8d121f-c5f7-42c3-a8ce-6cbb48064e25" containerName="barbican-db-sync" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.633463 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c8d121f-c5f7-42c3-a8ce-6cbb48064e25" containerName="barbican-db-sync" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.633804 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c8d121f-c5f7-42c3-a8ce-6cbb48064e25" containerName="barbican-db-sync" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.636607 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7fcd667fc5-5ctgv" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.638413 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.639100 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-7tkvw" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.639270 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.647245 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7fcd667fc5-5ctgv"] Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.724423 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-78666b77b6-ll6mt"] Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.729002 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.732723 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbaa8ec7-5499-43d1-ac80-dd8708d28643-config-data-custom\") pod \"barbican-keystone-listener-78666b77b6-ll6mt\" (UID: \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\") " pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.732759 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbaa8ec7-5499-43d1-ac80-dd8708d28643-logs\") pod \"barbican-keystone-listener-78666b77b6-ll6mt\" (UID: \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\") " pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.732782 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgnl7\" (UniqueName: \"kubernetes.io/projected/fbaa8ec7-5499-43d1-ac80-dd8708d28643-kube-api-access-bgnl7\") pod \"barbican-keystone-listener-78666b77b6-ll6mt\" (UID: \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\") " pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.732808 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1934e8d3-ef66-4d0e-8d12-bd958545270a-config-data\") pod \"barbican-worker-7fcd667fc5-5ctgv\" (UID: \"1934e8d3-ef66-4d0e-8d12-bd958545270a\") " pod="openstack/barbican-worker-7fcd667fc5-5ctgv" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.732847 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1934e8d3-ef66-4d0e-8d12-bd958545270a-combined-ca-bundle\") pod \"barbican-worker-7fcd667fc5-5ctgv\" (UID: \"1934e8d3-ef66-4d0e-8d12-bd958545270a\") " pod="openstack/barbican-worker-7fcd667fc5-5ctgv" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.732906 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1934e8d3-ef66-4d0e-8d12-bd958545270a-logs\") pod 
\"barbican-worker-7fcd667fc5-5ctgv\" (UID: \"1934e8d3-ef66-4d0e-8d12-bd958545270a\") " pod="openstack/barbican-worker-7fcd667fc5-5ctgv" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.732928 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbaa8ec7-5499-43d1-ac80-dd8708d28643-config-data\") pod \"barbican-keystone-listener-78666b77b6-ll6mt\" (UID: \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\") " pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.732951 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbaa8ec7-5499-43d1-ac80-dd8708d28643-combined-ca-bundle\") pod \"barbican-keystone-listener-78666b77b6-ll6mt\" (UID: \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\") " pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.732981 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1934e8d3-ef66-4d0e-8d12-bd958545270a-config-data-custom\") pod \"barbican-worker-7fcd667fc5-5ctgv\" (UID: \"1934e8d3-ef66-4d0e-8d12-bd958545270a\") " pod="openstack/barbican-worker-7fcd667fc5-5ctgv" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.733010 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpc5q\" (UniqueName: \"kubernetes.io/projected/1934e8d3-ef66-4d0e-8d12-bd958545270a-kube-api-access-cpc5q\") pod \"barbican-worker-7fcd667fc5-5ctgv\" (UID: \"1934e8d3-ef66-4d0e-8d12-bd958545270a\") " pod="openstack/barbican-worker-7fcd667fc5-5ctgv" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.736498 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-78666b77b6-ll6mt"] Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.738077 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.755029 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86dc678b79-vtvgx"] Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.757562 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.781027 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86dc678b79-vtvgx"] Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.835385 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1934e8d3-ef66-4d0e-8d12-bd958545270a-logs\") pod \"barbican-worker-7fcd667fc5-5ctgv\" (UID: \"1934e8d3-ef66-4d0e-8d12-bd958545270a\") " pod="openstack/barbican-worker-7fcd667fc5-5ctgv" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.835447 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbaa8ec7-5499-43d1-ac80-dd8708d28643-config-data\") pod \"barbican-keystone-listener-78666b77b6-ll6mt\" (UID: \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\") " pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.835486 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbaa8ec7-5499-43d1-ac80-dd8708d28643-combined-ca-bundle\") pod \"barbican-keystone-listener-78666b77b6-ll6mt\" (UID: \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\") " pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.835532 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1934e8d3-ef66-4d0e-8d12-bd958545270a-config-data-custom\") pod \"barbican-worker-7fcd667fc5-5ctgv\" (UID: \"1934e8d3-ef66-4d0e-8d12-bd958545270a\") " pod="openstack/barbican-worker-7fcd667fc5-5ctgv" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.835558 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpc5q\" (UniqueName: \"kubernetes.io/projected/1934e8d3-ef66-4d0e-8d12-bd958545270a-kube-api-access-cpc5q\") pod \"barbican-worker-7fcd667fc5-5ctgv\" (UID: \"1934e8d3-ef66-4d0e-8d12-bd958545270a\") " pod="openstack/barbican-worker-7fcd667fc5-5ctgv" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.835663 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbaa8ec7-5499-43d1-ac80-dd8708d28643-config-data-custom\") pod \"barbican-keystone-listener-78666b77b6-ll6mt\" (UID: \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\") " pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.835693 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbaa8ec7-5499-43d1-ac80-dd8708d28643-logs\") pod \"barbican-keystone-listener-78666b77b6-ll6mt\" (UID: \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\") " pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.835721 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgnl7\" (UniqueName: \"kubernetes.io/projected/fbaa8ec7-5499-43d1-ac80-dd8708d28643-kube-api-access-bgnl7\") pod \"barbican-keystone-listener-78666b77b6-ll6mt\" (UID: \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\") " pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" Nov 21 10:04:33 crc kubenswrapper[4972]: 
I1121 10:04:33.835757 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1934e8d3-ef66-4d0e-8d12-bd958545270a-config-data\") pod \"barbican-worker-7fcd667fc5-5ctgv\" (UID: \"1934e8d3-ef66-4d0e-8d12-bd958545270a\") " pod="openstack/barbican-worker-7fcd667fc5-5ctgv" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.835795 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1934e8d3-ef66-4d0e-8d12-bd958545270a-combined-ca-bundle\") pod \"barbican-worker-7fcd667fc5-5ctgv\" (UID: \"1934e8d3-ef66-4d0e-8d12-bd958545270a\") " pod="openstack/barbican-worker-7fcd667fc5-5ctgv" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.836311 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbaa8ec7-5499-43d1-ac80-dd8708d28643-logs\") pod \"barbican-keystone-listener-78666b77b6-ll6mt\" (UID: \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\") " pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.839211 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1934e8d3-ef66-4d0e-8d12-bd958545270a-logs\") pod \"barbican-worker-7fcd667fc5-5ctgv\" (UID: \"1934e8d3-ef66-4d0e-8d12-bd958545270a\") " pod="openstack/barbican-worker-7fcd667fc5-5ctgv" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.840862 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbaa8ec7-5499-43d1-ac80-dd8708d28643-config-data-custom\") pod \"barbican-keystone-listener-78666b77b6-ll6mt\" (UID: \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\") " pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.842175 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1934e8d3-ef66-4d0e-8d12-bd958545270a-config-data\") pod \"barbican-worker-7fcd667fc5-5ctgv\" (UID: \"1934e8d3-ef66-4d0e-8d12-bd958545270a\") " pod="openstack/barbican-worker-7fcd667fc5-5ctgv" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.843521 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbaa8ec7-5499-43d1-ac80-dd8708d28643-config-data\") pod \"barbican-keystone-listener-78666b77b6-ll6mt\" (UID: \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\") " pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.848312 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1934e8d3-ef66-4d0e-8d12-bd958545270a-combined-ca-bundle\") pod \"barbican-worker-7fcd667fc5-5ctgv\" (UID: \"1934e8d3-ef66-4d0e-8d12-bd958545270a\") " pod="openstack/barbican-worker-7fcd667fc5-5ctgv" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.848450 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbaa8ec7-5499-43d1-ac80-dd8708d28643-combined-ca-bundle\") pod \"barbican-keystone-listener-78666b77b6-ll6mt\" (UID: \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\") " pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" Nov 21 10:04:33 crc 
kubenswrapper[4972]: I1121 10:04:33.849204 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1934e8d3-ef66-4d0e-8d12-bd958545270a-config-data-custom\") pod \"barbican-worker-7fcd667fc5-5ctgv\" (UID: \"1934e8d3-ef66-4d0e-8d12-bd958545270a\") " pod="openstack/barbican-worker-7fcd667fc5-5ctgv" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.857604 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpc5q\" (UniqueName: \"kubernetes.io/projected/1934e8d3-ef66-4d0e-8d12-bd958545270a-kube-api-access-cpc5q\") pod \"barbican-worker-7fcd667fc5-5ctgv\" (UID: \"1934e8d3-ef66-4d0e-8d12-bd958545270a\") " pod="openstack/barbican-worker-7fcd667fc5-5ctgv" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.858353 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgnl7\" (UniqueName: \"kubernetes.io/projected/fbaa8ec7-5499-43d1-ac80-dd8708d28643-kube-api-access-bgnl7\") pod \"barbican-keystone-listener-78666b77b6-ll6mt\" (UID: \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\") " pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.899024 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6b7d889fc4-nj77m"] Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.900410 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.902569 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.939909 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b7d889fc4-nj77m"] Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.941028 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gss9\" (UniqueName: \"kubernetes.io/projected/e524d1c9-e84c-42f2-badb-72bf26a9c38f-kube-api-access-2gss9\") pod \"dnsmasq-dns-86dc678b79-vtvgx\" (UID: \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\") " pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.941072 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-ovsdbserver-nb\") pod \"dnsmasq-dns-86dc678b79-vtvgx\" (UID: \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\") " pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.941179 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-config\") pod \"dnsmasq-dns-86dc678b79-vtvgx\" (UID: \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\") " pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.941208 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-ovsdbserver-sb\") pod \"dnsmasq-dns-86dc678b79-vtvgx\" (UID: \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\") " pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" Nov 21 10:04:33 crc kubenswrapper[4972]: 
I1121 10:04:33.941226 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-dns-svc\") pod \"dnsmasq-dns-86dc678b79-vtvgx\" (UID: \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\") " pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" Nov 21 10:04:33 crc kubenswrapper[4972]: I1121 10:04:33.960500 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7fcd667fc5-5ctgv" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.043265 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nswbm\" (UniqueName: \"kubernetes.io/projected/107f78da-c307-41d6-9491-c4b4e237649a-kube-api-access-nswbm\") pod \"barbican-api-6b7d889fc4-nj77m\" (UID: \"107f78da-c307-41d6-9491-c4b4e237649a\") " pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.043336 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/107f78da-c307-41d6-9491-c4b4e237649a-logs\") pod \"barbican-api-6b7d889fc4-nj77m\" (UID: \"107f78da-c307-41d6-9491-c4b4e237649a\") " pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.043389 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-config\") pod \"dnsmasq-dns-86dc678b79-vtvgx\" (UID: \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\") " pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.043413 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/107f78da-c307-41d6-9491-c4b4e237649a-combined-ca-bundle\") pod \"barbican-api-6b7d889fc4-nj77m\" (UID: \"107f78da-c307-41d6-9491-c4b4e237649a\") " pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.043527 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-ovsdbserver-sb\") pod \"dnsmasq-dns-86dc678b79-vtvgx\" (UID: \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\") " pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.043590 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/107f78da-c307-41d6-9491-c4b4e237649a-config-data\") pod \"barbican-api-6b7d889fc4-nj77m\" (UID: \"107f78da-c307-41d6-9491-c4b4e237649a\") " pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.043619 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-dns-svc\") pod \"dnsmasq-dns-86dc678b79-vtvgx\" (UID: \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\") " pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.043737 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/107f78da-c307-41d6-9491-c4b4e237649a-config-data-custom\") pod \"barbican-api-6b7d889fc4-nj77m\" (UID: \"107f78da-c307-41d6-9491-c4b4e237649a\") " pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.043807 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gss9\" (UniqueName: \"kubernetes.io/projected/e524d1c9-e84c-42f2-badb-72bf26a9c38f-kube-api-access-2gss9\") pod \"dnsmasq-dns-86dc678b79-vtvgx\" (UID: \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\") " pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.043879 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-ovsdbserver-nb\") pod \"dnsmasq-dns-86dc678b79-vtvgx\" (UID: \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\") " pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.044537 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-ovsdbserver-sb\") pod \"dnsmasq-dns-86dc678b79-vtvgx\" (UID: \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\") " pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.044691 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-ovsdbserver-nb\") pod \"dnsmasq-dns-86dc678b79-vtvgx\" (UID: \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\") " pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.044688 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-config\") pod \"dnsmasq-dns-86dc678b79-vtvgx\" (UID: \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\") " pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.045050 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-dns-svc\") pod \"dnsmasq-dns-86dc678b79-vtvgx\" (UID: \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\") " pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.060823 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gss9\" (UniqueName: \"kubernetes.io/projected/e524d1c9-e84c-42f2-badb-72bf26a9c38f-kube-api-access-2gss9\") pod \"dnsmasq-dns-86dc678b79-vtvgx\" (UID: \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\") " pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.074477 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.099447 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.145442 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/107f78da-c307-41d6-9491-c4b4e237649a-combined-ca-bundle\") pod \"barbican-api-6b7d889fc4-nj77m\" (UID: \"107f78da-c307-41d6-9491-c4b4e237649a\") " pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.145484 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/107f78da-c307-41d6-9491-c4b4e237649a-config-data\") pod \"barbican-api-6b7d889fc4-nj77m\" (UID: \"107f78da-c307-41d6-9491-c4b4e237649a\") " pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.145532 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/107f78da-c307-41d6-9491-c4b4e237649a-config-data-custom\") pod \"barbican-api-6b7d889fc4-nj77m\" (UID: \"107f78da-c307-41d6-9491-c4b4e237649a\") " pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.145604 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nswbm\" (UniqueName: \"kubernetes.io/projected/107f78da-c307-41d6-9491-c4b4e237649a-kube-api-access-nswbm\") pod \"barbican-api-6b7d889fc4-nj77m\" (UID: \"107f78da-c307-41d6-9491-c4b4e237649a\") " pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.145634 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/107f78da-c307-41d6-9491-c4b4e237649a-logs\") pod \"barbican-api-6b7d889fc4-nj77m\" (UID: \"107f78da-c307-41d6-9491-c4b4e237649a\") " pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.146309 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/107f78da-c307-41d6-9491-c4b4e237649a-logs\") pod \"barbican-api-6b7d889fc4-nj77m\" (UID: \"107f78da-c307-41d6-9491-c4b4e237649a\") " pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.149201 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/107f78da-c307-41d6-9491-c4b4e237649a-config-data\") pod \"barbican-api-6b7d889fc4-nj77m\" (UID: \"107f78da-c307-41d6-9491-c4b4e237649a\") " pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.150035 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/107f78da-c307-41d6-9491-c4b4e237649a-combined-ca-bundle\") pod \"barbican-api-6b7d889fc4-nj77m\" (UID: \"107f78da-c307-41d6-9491-c4b4e237649a\") " pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.150351 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/107f78da-c307-41d6-9491-c4b4e237649a-config-data-custom\") pod \"barbican-api-6b7d889fc4-nj77m\" (UID: \"107f78da-c307-41d6-9491-c4b4e237649a\") " pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:34 crc 
kubenswrapper[4972]: I1121 10:04:34.166654 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nswbm\" (UniqueName: \"kubernetes.io/projected/107f78da-c307-41d6-9491-c4b4e237649a-kube-api-access-nswbm\") pod \"barbican-api-6b7d889fc4-nj77m\" (UID: \"107f78da-c307-41d6-9491-c4b4e237649a\") " pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:34 crc kubenswrapper[4972]: I1121 10:04:34.255220 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.110157 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86dc678b79-vtvgx"] Nov 21 10:04:36 crc kubenswrapper[4972]: W1121 10:04:36.125957 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode524d1c9_e84c_42f2_badb_72bf26a9c38f.slice/crio-d8e9598fcfbdb94de39fc6f0ff0342789a8790398858feca3891c0ec22927106 WatchSource:0}: Error finding container d8e9598fcfbdb94de39fc6f0ff0342789a8790398858feca3891c0ec22927106: Status 404 returned error can't find the container with id d8e9598fcfbdb94de39fc6f0ff0342789a8790398858feca3891c0ec22927106 Nov 21 10:04:36 crc kubenswrapper[4972]: W1121 10:04:36.239423 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1934e8d3_ef66_4d0e_8d12_bd958545270a.slice/crio-5fbd2f9d2fec5da940c5d561a6666dae5c0a5537bd48f5d1767a133a503b3413 WatchSource:0}: Error finding container 5fbd2f9d2fec5da940c5d561a6666dae5c0a5537bd48f5d1767a133a503b3413: Status 404 returned error can't find the container with id 5fbd2f9d2fec5da940c5d561a6666dae5c0a5537bd48f5d1767a133a503b3413 Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.251363 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7fcd667fc5-5ctgv"] Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.395209 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-78666b77b6-ll6mt"] Nov 21 10:04:36 crc kubenswrapper[4972]: W1121 10:04:36.399197 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbaa8ec7_5499_43d1_ac80_dd8708d28643.slice/crio-55dee782b334cdc279970f9cac4c47950b7bbe5b2f6f00f92ae492c03d54793a WatchSource:0}: Error finding container 55dee782b334cdc279970f9cac4c47950b7bbe5b2f6f00f92ae492c03d54793a: Status 404 returned error can't find the container with id 55dee782b334cdc279970f9cac4c47950b7bbe5b2f6f00f92ae492c03d54793a Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.515957 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b7d889fc4-nj77m"] Nov 21 10:04:36 crc kubenswrapper[4972]: W1121 10:04:36.536689 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod107f78da_c307_41d6_9491_c4b4e237649a.slice/crio-add8e9665eea0e1dc66e28b4d24307cbda65b2afddf490510173395c3622dad8 WatchSource:0}: Error finding container add8e9665eea0e1dc66e28b4d24307cbda65b2afddf490510173395c3622dad8: Status 404 returned error can't find the container with id add8e9665eea0e1dc66e28b4d24307cbda65b2afddf490510173395c3622dad8 Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.571694 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerStarted","Data":"d6c241802e71e9521da5b44bb300b3ed93a83b5a2a3b5384891a37d0477bcf5f"} Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.571732 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerStarted","Data":"b27dea1fedce06fdcc7b8b10bfa4e01b3977a2c1835d79507b63bffd8cd7cf4f"} Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.575484 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f23a86fe-e939-4663-b964-454211c5d446","Type":"ContainerStarted","Data":"4743a7a33e8a464b0ea411bfad83c19a075010f2dcd322939fb901f06f09722d"} Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.575630 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f23a86fe-e939-4663-b964-454211c5d446" containerName="ceilometer-central-agent" containerID="cri-o://7281a142a4c53eaf85f98f739a1bc21ac3985c85ea2af36bfcd9fa7599671dbb" gracePeriod=30 Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.575931 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.576344 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f23a86fe-e939-4663-b964-454211c5d446" containerName="proxy-httpd" containerID="cri-o://4743a7a33e8a464b0ea411bfad83c19a075010f2dcd322939fb901f06f09722d" gracePeriod=30 Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.576391 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f23a86fe-e939-4663-b964-454211c5d446" containerName="sg-core" containerID="cri-o://e14cb4aad212d1998014fdf6f5ffdb1b7c811353ca8ed380411d627dab945835" gracePeriod=30 Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.576431 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f23a86fe-e939-4663-b964-454211c5d446" containerName="ceilometer-notification-agent" containerID="cri-o://750a20df693e3ef6fdeb49daa0b334d27d70c08a01a27a3ea0406685b4a367fd" gracePeriod=30 Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.580041 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7fcd667fc5-5ctgv" event={"ID":"1934e8d3-ef66-4d0e-8d12-bd958545270a","Type":"ContainerStarted","Data":"5fbd2f9d2fec5da940c5d561a6666dae5c0a5537bd48f5d1767a133a503b3413"} Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.585713 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78d4f89dc4-2qvzl" event={"ID":"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd","Type":"ContainerStarted","Data":"8281402abc59d2d6389eba9427fb6df68e1ff2f3cf37736cf084b96b31b30e0f"} Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.585763 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78d4f89dc4-2qvzl" event={"ID":"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd","Type":"ContainerStarted","Data":"a0ef5b5653ff065d56f37e3509be4061e6cfc1eda3f19880fd2fed960a808923"} Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.585993 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.586040 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.588887 4972 generic.go:334] "Generic (PLEG): container finished" podID="e524d1c9-e84c-42f2-badb-72bf26a9c38f" containerID="c34c8cf3552330d88fb219b989a0cfec9666eb1b4e4bb38ed6dcd70f8ad7c45b" exitCode=0 Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.588985 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" event={"ID":"e524d1c9-e84c-42f2-badb-72bf26a9c38f","Type":"ContainerDied","Data":"c34c8cf3552330d88fb219b989a0cfec9666eb1b4e4bb38ed6dcd70f8ad7c45b"} Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.589014 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" event={"ID":"e524d1c9-e84c-42f2-badb-72bf26a9c38f","Type":"ContainerStarted","Data":"d8e9598fcfbdb94de39fc6f0ff0342789a8790398858feca3891c0ec22927106"} Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.596052 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b7d889fc4-nj77m" event={"ID":"107f78da-c307-41d6-9491-c4b4e237649a","Type":"ContainerStarted","Data":"add8e9665eea0e1dc66e28b4d24307cbda65b2afddf490510173395c3622dad8"} Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.605109 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" event={"ID":"fbaa8ec7-5499-43d1-ac80-dd8708d28643","Type":"ContainerStarted","Data":"55dee782b334cdc279970f9cac4c47950b7bbe5b2f6f00f92ae492c03d54793a"} Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.623814 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.206904205 podStartE2EDuration="46.623798047s" podCreationTimestamp="2025-11-21 10:03:50 +0000 UTC" firstStartedPulling="2025-11-21 10:03:51.381466077 +0000 UTC m=+1376.490608575" lastFinishedPulling="2025-11-21 10:04:35.798359919 +0000 UTC m=+1420.907502417" observedRunningTime="2025-11-21 10:04:36.601477781 +0000 UTC m=+1421.710620289" watchObservedRunningTime="2025-11-21 10:04:36.623798047 +0000 UTC m=+1421.732940545" Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.656176 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-78d4f89dc4-2qvzl" podStartSLOduration=8.656154662 podStartE2EDuration="8.656154662s" podCreationTimestamp="2025-11-21 10:04:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:04:36.654074236 +0000 UTC m=+1421.763216754" watchObservedRunningTime="2025-11-21 10:04:36.656154662 +0000 UTC m=+1421.765297160" Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.932749 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-669568d65b-4t6gp"] Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.934695 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.939367 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.939437 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 21 10:04:36 crc kubenswrapper[4972]: I1121 10:04:36.946295 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-669568d65b-4t6gp"] Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.013055 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-logs\") pod \"barbican-api-669568d65b-4t6gp\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.013124 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhht4\" (UniqueName: \"kubernetes.io/projected/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-kube-api-access-dhht4\") pod \"barbican-api-669568d65b-4t6gp\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.013150 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-public-tls-certs\") pod \"barbican-api-669568d65b-4t6gp\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.013270 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-combined-ca-bundle\") pod \"barbican-api-669568d65b-4t6gp\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.013337 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-config-data-custom\") pod \"barbican-api-669568d65b-4t6gp\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.013353 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-internal-tls-certs\") pod \"barbican-api-669568d65b-4t6gp\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.013419 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-config-data\") pod \"barbican-api-669568d65b-4t6gp\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.116932 4972 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-config-data-custom\") pod \"barbican-api-669568d65b-4t6gp\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.116972 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-internal-tls-certs\") pod \"barbican-api-669568d65b-4t6gp\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.116998 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-config-data\") pod \"barbican-api-669568d65b-4t6gp\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.117021 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-logs\") pod \"barbican-api-669568d65b-4t6gp\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.117047 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhht4\" (UniqueName: \"kubernetes.io/projected/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-kube-api-access-dhht4\") pod \"barbican-api-669568d65b-4t6gp\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.117068 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-public-tls-certs\") pod \"barbican-api-669568d65b-4t6gp\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.117128 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-combined-ca-bundle\") pod \"barbican-api-669568d65b-4t6gp\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.118882 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-logs\") pod \"barbican-api-669568d65b-4t6gp\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.123327 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-config-data-custom\") pod \"barbican-api-669568d65b-4t6gp\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.124458 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-combined-ca-bundle\") pod \"barbican-api-669568d65b-4t6gp\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.136918 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-internal-tls-certs\") pod \"barbican-api-669568d65b-4t6gp\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.137251 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhht4\" (UniqueName: \"kubernetes.io/projected/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-kube-api-access-dhht4\") pod \"barbican-api-669568d65b-4t6gp\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.140430 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-config-data\") pod \"barbican-api-669568d65b-4t6gp\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.150572 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-public-tls-certs\") pod \"barbican-api-669568d65b-4t6gp\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.263317 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.620741 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s77fk" event={"ID":"025ad09c-467a-451c-a24d-4bf686469677","Type":"ContainerStarted","Data":"6d2aa63779319b38cf4983db05e91553bbee48b755832575671149b098b5a84b"} Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.643738 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-s77fk" podStartSLOduration=2.917973009 podStartE2EDuration="47.643723995s" podCreationTimestamp="2025-11-21 10:03:50 +0000 UTC" firstStartedPulling="2025-11-21 10:03:51.070542327 +0000 UTC m=+1376.179684825" lastFinishedPulling="2025-11-21 10:04:35.796293313 +0000 UTC m=+1420.905435811" observedRunningTime="2025-11-21 10:04:37.639143742 +0000 UTC m=+1422.748286240" watchObservedRunningTime="2025-11-21 10:04:37.643723995 +0000 UTC m=+1422.752866493" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.647489 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerStarted","Data":"d4d2c9d3e605844fc00e4083833139b1121a575ad83be76839782a80b770f46a"} Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.669366 4972 generic.go:334] "Generic (PLEG): container finished" podID="f23a86fe-e939-4663-b964-454211c5d446" containerID="4743a7a33e8a464b0ea411bfad83c19a075010f2dcd322939fb901f06f09722d" exitCode=0 Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.669397 4972 generic.go:334] "Generic (PLEG): container finished" podID="f23a86fe-e939-4663-b964-454211c5d446" containerID="e14cb4aad212d1998014fdf6f5ffdb1b7c811353ca8ed380411d627dab945835" exitCode=2 Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.669405 4972 generic.go:334] "Generic (PLEG): container finished" podID="f23a86fe-e939-4663-b964-454211c5d446" containerID="7281a142a4c53eaf85f98f739a1bc21ac3985c85ea2af36bfcd9fa7599671dbb" exitCode=0 Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.669443 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f23a86fe-e939-4663-b964-454211c5d446","Type":"ContainerDied","Data":"4743a7a33e8a464b0ea411bfad83c19a075010f2dcd322939fb901f06f09722d"} Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.669468 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f23a86fe-e939-4663-b964-454211c5d446","Type":"ContainerDied","Data":"e14cb4aad212d1998014fdf6f5ffdb1b7c811353ca8ed380411d627dab945835"} Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.669481 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f23a86fe-e939-4663-b964-454211c5d446","Type":"ContainerDied","Data":"7281a142a4c53eaf85f98f739a1bc21ac3985c85ea2af36bfcd9fa7599671dbb"} Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.671658 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" event={"ID":"e524d1c9-e84c-42f2-badb-72bf26a9c38f","Type":"ContainerStarted","Data":"6ddcc25860ab0463b49e05a4805881975516242090de960ed42f9a067656838d"} Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.672420 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.674859 4972 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/barbican-api-6b7d889fc4-nj77m" event={"ID":"107f78da-c307-41d6-9491-c4b4e237649a","Type":"ContainerStarted","Data":"26204dcf3f5a71978f41f8918a75d69f23bbf4096988654b5964c1b6759ed6fa"} Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.674885 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b7d889fc4-nj77m" event={"ID":"107f78da-c307-41d6-9491-c4b4e237649a","Type":"ContainerStarted","Data":"5bb248abf415e17b3ba6bd15434eaabfb0ccc903170317dbda14124b253860ec"} Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.675410 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.675445 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.677624 4972 generic.go:334] "Generic (PLEG): container finished" podID="1d74b758-4b34-4381-bb6d-ba95a0ce1c62" containerID="bcd8fe3e44217018095632c736ff35f44419a3efd411bc546910c3270f906dfe" exitCode=0 Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.678417 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-c4vkg" event={"ID":"1d74b758-4b34-4381-bb6d-ba95a0ce1c62","Type":"ContainerDied","Data":"bcd8fe3e44217018095632c736ff35f44419a3efd411bc546910c3270f906dfe"} Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.686636 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=97.655927876 podStartE2EDuration="1m49.68660076s" podCreationTimestamp="2025-11-21 10:02:48 +0000 UTC" firstStartedPulling="2025-11-21 10:04:15.628668682 +0000 UTC m=+1400.737811180" lastFinishedPulling="2025-11-21 10:04:27.659341566 +0000 UTC m=+1412.768484064" observedRunningTime="2025-11-21 10:04:37.678101603 +0000 UTC m=+1422.787244121" watchObservedRunningTime="2025-11-21 10:04:37.68660076 +0000 UTC m=+1422.795743258" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.699121 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" podStartSLOduration=4.699101735 podStartE2EDuration="4.699101735s" podCreationTimestamp="2025-11-21 10:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:04:37.698733975 +0000 UTC m=+1422.807876483" watchObservedRunningTime="2025-11-21 10:04:37.699101735 +0000 UTC m=+1422.808244233" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.722614 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6b7d889fc4-nj77m" podStartSLOduration=4.722596212 podStartE2EDuration="4.722596212s" podCreationTimestamp="2025-11-21 10:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:04:37.716186141 +0000 UTC m=+1422.825328659" watchObservedRunningTime="2025-11-21 10:04:37.722596212 +0000 UTC m=+1422.831738710" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.930809 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86dc678b79-vtvgx"] Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.951945 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf6b66dfc-hp427"] Nov 21 10:04:37 crc 
kubenswrapper[4972]: I1121 10:04:37.953443 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.956764 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 21 10:04:37 crc kubenswrapper[4972]: I1121 10:04:37.974795 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf6b66dfc-hp427"] Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.044660 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-ovsdbserver-nb\") pod \"dnsmasq-dns-cf6b66dfc-hp427\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.044775 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-config\") pod \"dnsmasq-dns-cf6b66dfc-hp427\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.044839 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-dns-svc\") pod \"dnsmasq-dns-cf6b66dfc-hp427\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.044876 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9ftd\" (UniqueName: \"kubernetes.io/projected/00df1195-bd1c-43db-9bcf-48baf060f494-kube-api-access-r9ftd\") pod \"dnsmasq-dns-cf6b66dfc-hp427\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.044959 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-ovsdbserver-sb\") pod \"dnsmasq-dns-cf6b66dfc-hp427\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.044987 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-dns-swift-storage-0\") pod \"dnsmasq-dns-cf6b66dfc-hp427\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.149874 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-config\") pod \"dnsmasq-dns-cf6b66dfc-hp427\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.149968 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-dns-svc\") pod 
\"dnsmasq-dns-cf6b66dfc-hp427\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.150009 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9ftd\" (UniqueName: \"kubernetes.io/projected/00df1195-bd1c-43db-9bcf-48baf060f494-kube-api-access-r9ftd\") pod \"dnsmasq-dns-cf6b66dfc-hp427\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.150103 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-ovsdbserver-sb\") pod \"dnsmasq-dns-cf6b66dfc-hp427\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.150139 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-dns-swift-storage-0\") pod \"dnsmasq-dns-cf6b66dfc-hp427\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.150169 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-ovsdbserver-nb\") pod \"dnsmasq-dns-cf6b66dfc-hp427\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.151546 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-ovsdbserver-nb\") pod \"dnsmasq-dns-cf6b66dfc-hp427\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.152580 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-config\") pod \"dnsmasq-dns-cf6b66dfc-hp427\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.154779 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-dns-svc\") pod \"dnsmasq-dns-cf6b66dfc-hp427\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.155764 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-ovsdbserver-sb\") pod \"dnsmasq-dns-cf6b66dfc-hp427\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.157113 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-dns-swift-storage-0\") pod \"dnsmasq-dns-cf6b66dfc-hp427\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " 
pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.185412 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9ftd\" (UniqueName: \"kubernetes.io/projected/00df1195-bd1c-43db-9bcf-48baf060f494-kube-api-access-r9ftd\") pod \"dnsmasq-dns-cf6b66dfc-hp427\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.271541 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.548757 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-669568d65b-4t6gp"] Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.690261 4972 generic.go:334] "Generic (PLEG): container finished" podID="f23a86fe-e939-4663-b964-454211c5d446" containerID="750a20df693e3ef6fdeb49daa0b334d27d70c08a01a27a3ea0406685b4a367fd" exitCode=0 Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.690358 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f23a86fe-e939-4663-b964-454211c5d446","Type":"ContainerDied","Data":"750a20df693e3ef6fdeb49daa0b334d27d70c08a01a27a3ea0406685b4a367fd"} Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.690592 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f23a86fe-e939-4663-b964-454211c5d446","Type":"ContainerDied","Data":"74c549c9b9c773f069b3c126b02b01b3002ae413cc3fb35dc14e1e0019a31ee3"} Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.690612 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74c549c9b9c773f069b3c126b02b01b3002ae413cc3fb35dc14e1e0019a31ee3" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.692187 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7fcd667fc5-5ctgv" event={"ID":"1934e8d3-ef66-4d0e-8d12-bd958545270a","Type":"ContainerStarted","Data":"b4cd6783c1c066e41ca01043747c17250cebc9cc0aed250c754bd49748a690ad"} Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.693690 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-669568d65b-4t6gp" event={"ID":"272d9c39-ab5b-4fc1-8dbe-209fbe33e293","Type":"ContainerStarted","Data":"e24aaf7f9163025a621092e32257309ef777d6244a6377108ad0b3bf28059de4"} Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.696244 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" event={"ID":"fbaa8ec7-5499-43d1-ac80-dd8708d28643","Type":"ContainerStarted","Data":"cfd792eb202fbf7b53ee8748aadb575b4d7545be47e73ef984e2cbe95e0adcce"} Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.718652 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.849694 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf6b66dfc-hp427"] Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.866315 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-sg-core-conf-yaml\") pod \"f23a86fe-e939-4663-b964-454211c5d446\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.866440 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-combined-ca-bundle\") pod \"f23a86fe-e939-4663-b964-454211c5d446\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.866498 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-config-data\") pod \"f23a86fe-e939-4663-b964-454211c5d446\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.866528 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f23a86fe-e939-4663-b964-454211c5d446-run-httpd\") pod \"f23a86fe-e939-4663-b964-454211c5d446\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.866547 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f23a86fe-e939-4663-b964-454211c5d446-log-httpd\") pod \"f23a86fe-e939-4663-b964-454211c5d446\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.866566 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-scripts\") pod \"f23a86fe-e939-4663-b964-454211c5d446\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.866591 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfktn\" (UniqueName: \"kubernetes.io/projected/f23a86fe-e939-4663-b964-454211c5d446-kube-api-access-hfktn\") pod \"f23a86fe-e939-4663-b964-454211c5d446\" (UID: \"f23a86fe-e939-4663-b964-454211c5d446\") " Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.867736 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f23a86fe-e939-4663-b964-454211c5d446-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f23a86fe-e939-4663-b964-454211c5d446" (UID: "f23a86fe-e939-4663-b964-454211c5d446"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.868485 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f23a86fe-e939-4663-b964-454211c5d446-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f23a86fe-e939-4663-b964-454211c5d446" (UID: "f23a86fe-e939-4663-b964-454211c5d446"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:04:38 crc kubenswrapper[4972]: W1121 10:04:38.874319 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00df1195_bd1c_43db_9bcf_48baf060f494.slice/crio-f245e39f6b046663434b473cb00b3c0e1130432944e1ddd7241e65839f6129c2 WatchSource:0}: Error finding container f245e39f6b046663434b473cb00b3c0e1130432944e1ddd7241e65839f6129c2: Status 404 returned error can't find the container with id f245e39f6b046663434b473cb00b3c0e1130432944e1ddd7241e65839f6129c2 Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.880471 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f23a86fe-e939-4663-b964-454211c5d446-kube-api-access-hfktn" (OuterVolumeSpecName: "kube-api-access-hfktn") pod "f23a86fe-e939-4663-b964-454211c5d446" (UID: "f23a86fe-e939-4663-b964-454211c5d446"). InnerVolumeSpecName "kube-api-access-hfktn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.881974 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-scripts" (OuterVolumeSpecName: "scripts") pod "f23a86fe-e939-4663-b964-454211c5d446" (UID: "f23a86fe-e939-4663-b964-454211c5d446"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.968621 4972 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f23a86fe-e939-4663-b964-454211c5d446-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.968964 4972 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f23a86fe-e939-4663-b964-454211c5d446-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.968974 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:38 crc kubenswrapper[4972]: I1121 10:04:38.968983 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfktn\" (UniqueName: \"kubernetes.io/projected/f23a86fe-e939-4663-b964-454211c5d446-kube-api-access-hfktn\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.052516 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f23a86fe-e939-4663-b964-454211c5d446" (UID: "f23a86fe-e939-4663-b964-454211c5d446"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.071623 4972 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.100408 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f23a86fe-e939-4663-b964-454211c5d446" (UID: "f23a86fe-e939-4663-b964-454211c5d446"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.123765 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-config-data" (OuterVolumeSpecName: "config-data") pod "f23a86fe-e939-4663-b964-454211c5d446" (UID: "f23a86fe-e939-4663-b964-454211c5d446"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.173671 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.173705 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f23a86fe-e939-4663-b964-454211c5d446-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.203818 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-c4vkg" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.376275 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d74b758-4b34-4381-bb6d-ba95a0ce1c62-combined-ca-bundle\") pod \"1d74b758-4b34-4381-bb6d-ba95a0ce1c62\" (UID: \"1d74b758-4b34-4381-bb6d-ba95a0ce1c62\") " Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.376805 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d74b758-4b34-4381-bb6d-ba95a0ce1c62-config\") pod \"1d74b758-4b34-4381-bb6d-ba95a0ce1c62\" (UID: \"1d74b758-4b34-4381-bb6d-ba95a0ce1c62\") " Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.376883 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmsz8\" (UniqueName: \"kubernetes.io/projected/1d74b758-4b34-4381-bb6d-ba95a0ce1c62-kube-api-access-rmsz8\") pod \"1d74b758-4b34-4381-bb6d-ba95a0ce1c62\" (UID: \"1d74b758-4b34-4381-bb6d-ba95a0ce1c62\") " Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.383720 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d74b758-4b34-4381-bb6d-ba95a0ce1c62-kube-api-access-rmsz8" (OuterVolumeSpecName: "kube-api-access-rmsz8") pod "1d74b758-4b34-4381-bb6d-ba95a0ce1c62" (UID: "1d74b758-4b34-4381-bb6d-ba95a0ce1c62"). InnerVolumeSpecName "kube-api-access-rmsz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.411549 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d74b758-4b34-4381-bb6d-ba95a0ce1c62-config" (OuterVolumeSpecName: "config") pod "1d74b758-4b34-4381-bb6d-ba95a0ce1c62" (UID: "1d74b758-4b34-4381-bb6d-ba95a0ce1c62"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.417039 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d74b758-4b34-4381-bb6d-ba95a0ce1c62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d74b758-4b34-4381-bb6d-ba95a0ce1c62" (UID: "1d74b758-4b34-4381-bb6d-ba95a0ce1c62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.479441 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d74b758-4b34-4381-bb6d-ba95a0ce1c62-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.479487 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmsz8\" (UniqueName: \"kubernetes.io/projected/1d74b758-4b34-4381-bb6d-ba95a0ce1c62-kube-api-access-rmsz8\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.479502 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d74b758-4b34-4381-bb6d-ba95a0ce1c62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.732337 4972 generic.go:334] "Generic (PLEG): container finished" podID="00df1195-bd1c-43db-9bcf-48baf060f494" containerID="fd02f6843cc9a6a7631b3c6e7ce341ef4cbcd40107d68aea55b2cc466b520a7b" exitCode=0 Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.732476 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" event={"ID":"00df1195-bd1c-43db-9bcf-48baf060f494","Type":"ContainerDied","Data":"fd02f6843cc9a6a7631b3c6e7ce341ef4cbcd40107d68aea55b2cc466b520a7b"} Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.732505 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" event={"ID":"00df1195-bd1c-43db-9bcf-48baf060f494","Type":"ContainerStarted","Data":"f245e39f6b046663434b473cb00b3c0e1130432944e1ddd7241e65839f6129c2"} Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.741363 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-c4vkg" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.741678 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-c4vkg" event={"ID":"1d74b758-4b34-4381-bb6d-ba95a0ce1c62","Type":"ContainerDied","Data":"f4eb866cf8e8197d1f6a40f010a1dc61e08d8bda2d1ec6620a5d40c482bccb11"} Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.741732 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4eb866cf8e8197d1f6a40f010a1dc61e08d8bda2d1ec6620a5d40c482bccb11" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.792658 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" event={"ID":"fbaa8ec7-5499-43d1-ac80-dd8708d28643","Type":"ContainerStarted","Data":"8e1c5eaa82bd2eee5d1cc7e05fbf76fc1373742047fa91d2696d0552ca0cc505"} Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.793165 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7fcd667fc5-5ctgv" event={"ID":"1934e8d3-ef66-4d0e-8d12-bd958545270a","Type":"ContainerStarted","Data":"1490090909ceb9184fad5aa95d87536f218026b674ea5f4c01d93e9061fced2f"} Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.796385 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-669568d65b-4t6gp" event={"ID":"272d9c39-ab5b-4fc1-8dbe-209fbe33e293","Type":"ContainerStarted","Data":"c170fcfc81ca59f5bc98bc8edc442c5c3a824cf4040a9ddb3b5479628d9471b5"} Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.796426 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-669568d65b-4t6gp" event={"ID":"272d9c39-ab5b-4fc1-8dbe-209fbe33e293","Type":"ContainerStarted","Data":"3fe0c9bf4632a5a91bbedb92ac2a74a5be61932a64fbf9dec4c9fe6b9c892be9"} Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.796968 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.797110 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.817559 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" podUID="e524d1c9-e84c-42f2-badb-72bf26a9c38f" containerName="dnsmasq-dns" containerID="cri-o://6ddcc25860ab0463b49e05a4805881975516242090de960ed42f9a067656838d" gracePeriod=10 Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.818232 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-g84kj" event={"ID":"f9513939-1a73-46a3-a946-db9b1008314f","Type":"ContainerStarted","Data":"f7e978a05b49d9a8b55170b6916c286f1ba8d5d193d9fb52b446f78bd3d0ec08"} Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.825867 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.828491 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" podStartSLOduration=5.105996887 podStartE2EDuration="6.82846863s" podCreationTimestamp="2025-11-21 10:04:33 +0000 UTC" firstStartedPulling="2025-11-21 10:04:36.401524357 +0000 UTC m=+1421.510666855" lastFinishedPulling="2025-11-21 10:04:38.1239961 +0000 UTC m=+1423.233138598" observedRunningTime="2025-11-21 10:04:39.818491894 +0000 UTC m=+1424.927634392" watchObservedRunningTime="2025-11-21 10:04:39.82846863 +0000 UTC m=+1424.937611128" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.854594 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-669568d65b-4t6gp" podStartSLOduration=3.854578098 podStartE2EDuration="3.854578098s" podCreationTimestamp="2025-11-21 10:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:04:39.852211285 +0000 UTC m=+1424.961353793" watchObservedRunningTime="2025-11-21 10:04:39.854578098 +0000 UTC m=+1424.963720596" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.875993 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-g84kj" podStartSLOduration=2.329406376 podStartE2EDuration="1m17.87597836s" podCreationTimestamp="2025-11-21 10:03:22 +0000 UTC" firstStartedPulling="2025-11-21 10:03:22.833609712 +0000 UTC m=+1347.942752210" lastFinishedPulling="2025-11-21 10:04:38.380181696 +0000 UTC m=+1423.489324194" observedRunningTime="2025-11-21 10:04:39.873226507 +0000 UTC m=+1424.982369005" watchObservedRunningTime="2025-11-21 10:04:39.87597836 +0000 UTC m=+1424.985120858" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.903762 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7fcd667fc5-5ctgv" podStartSLOduration=5.125362915 podStartE2EDuration="6.903747362s" podCreationTimestamp="2025-11-21 10:04:33 +0000 UTC" firstStartedPulling="2025-11-21 10:04:36.344327368 +0000 UTC m=+1421.453469866" lastFinishedPulling="2025-11-21 10:04:38.122711815 +0000 UTC m=+1423.231854313" observedRunningTime="2025-11-21 10:04:39.896045946 +0000 UTC m=+1425.005188474" watchObservedRunningTime="2025-11-21 10:04:39.903747362 +0000 UTC m=+1425.012889860" Nov 21 10:04:39 crc kubenswrapper[4972]: I1121 10:04:39.984140 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.005319 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.016481 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf6b66dfc-hp427"] Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.027584 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:04:40 crc kubenswrapper[4972]: E1121 10:04:40.028525 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23a86fe-e939-4663-b964-454211c5d446" containerName="ceilometer-notification-agent" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.028554 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23a86fe-e939-4663-b964-454211c5d446" containerName="ceilometer-notification-agent" Nov 21 
10:04:40 crc kubenswrapper[4972]: E1121 10:04:40.028632 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d74b758-4b34-4381-bb6d-ba95a0ce1c62" containerName="neutron-db-sync" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.028641 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d74b758-4b34-4381-bb6d-ba95a0ce1c62" containerName="neutron-db-sync" Nov 21 10:04:40 crc kubenswrapper[4972]: E1121 10:04:40.028657 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23a86fe-e939-4663-b964-454211c5d446" containerName="sg-core" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.028662 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23a86fe-e939-4663-b964-454211c5d446" containerName="sg-core" Nov 21 10:04:40 crc kubenswrapper[4972]: E1121 10:04:40.028702 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23a86fe-e939-4663-b964-454211c5d446" containerName="proxy-httpd" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.028709 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23a86fe-e939-4663-b964-454211c5d446" containerName="proxy-httpd" Nov 21 10:04:40 crc kubenswrapper[4972]: E1121 10:04:40.028720 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23a86fe-e939-4663-b964-454211c5d446" containerName="ceilometer-central-agent" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.028725 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23a86fe-e939-4663-b964-454211c5d446" containerName="ceilometer-central-agent" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.028972 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f23a86fe-e939-4663-b964-454211c5d446" containerName="ceilometer-notification-agent" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.029017 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f23a86fe-e939-4663-b964-454211c5d446" containerName="sg-core" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.029045 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f23a86fe-e939-4663-b964-454211c5d446" containerName="proxy-httpd" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.029060 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d74b758-4b34-4381-bb6d-ba95a0ce1c62" containerName="neutron-db-sync" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.029097 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f23a86fe-e939-4663-b964-454211c5d446" containerName="ceilometer-central-agent" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.031206 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.036975 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.039315 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.065441 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.078012 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65757696bf-dvwq8"] Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.079657 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.093950 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blh98\" (UniqueName: \"kubernetes.io/projected/1e58da07-71c8-4739-848a-94e49b6c473c-kube-api-access-blh98\") pod \"ceilometer-0\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.094017 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-scripts\") pod \"ceilometer-0\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.094107 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-config-data\") pod \"ceilometer-0\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.094246 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.094295 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.094343 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e58da07-71c8-4739-848a-94e49b6c473c-run-httpd\") pod \"ceilometer-0\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.094494 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e58da07-71c8-4739-848a-94e49b6c473c-log-httpd\") pod \"ceilometer-0\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.114863 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65757696bf-dvwq8"] Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.166452 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-799fdbb85b-bfzq9"] Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.167845 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-799fdbb85b-bfzq9" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.173435 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.173852 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.174734 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-hnlwr" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.175027 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.191613 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-799fdbb85b-bfzq9"] Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.196863 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.196900 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-ovsdbserver-sb\") pod \"dnsmasq-dns-65757696bf-dvwq8\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.196930 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e58da07-71c8-4739-848a-94e49b6c473c-run-httpd\") pod \"ceilometer-0\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.196981 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-dns-swift-storage-0\") pod \"dnsmasq-dns-65757696bf-dvwq8\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.197005 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-config\") pod \"dnsmasq-dns-65757696bf-dvwq8\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.197021 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-ovsdbserver-nb\") pod \"dnsmasq-dns-65757696bf-dvwq8\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.197041 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e58da07-71c8-4739-848a-94e49b6c473c-log-httpd\") pod \"ceilometer-0\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " 
pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.197084 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blh98\" (UniqueName: \"kubernetes.io/projected/1e58da07-71c8-4739-848a-94e49b6c473c-kube-api-access-blh98\") pod \"ceilometer-0\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.197105 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-dns-svc\") pod \"dnsmasq-dns-65757696bf-dvwq8\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.197127 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-scripts\") pod \"ceilometer-0\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.197147 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-config-data\") pod \"ceilometer-0\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.197166 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgrvd\" (UniqueName: \"kubernetes.io/projected/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-kube-api-access-dgrvd\") pod \"dnsmasq-dns-65757696bf-dvwq8\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.197203 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.277735 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e58da07-71c8-4739-848a-94e49b6c473c-run-httpd\") pod \"ceilometer-0\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.278266 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e58da07-71c8-4739-848a-94e49b6c473c-log-httpd\") pod \"ceilometer-0\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.282171 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.282918 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.283982 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-config-data\") pod \"ceilometer-0\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.284815 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blh98\" (UniqueName: \"kubernetes.io/projected/1e58da07-71c8-4739-848a-94e49b6c473c-kube-api-access-blh98\") pod \"ceilometer-0\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.285941 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-scripts\") pod \"ceilometer-0\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.299592 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-combined-ca-bundle\") pod \"neutron-799fdbb85b-bfzq9\" (UID: \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\") " pod="openstack/neutron-799fdbb85b-bfzq9" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.299661 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw5kg\" (UniqueName: \"kubernetes.io/projected/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-kube-api-access-qw5kg\") pod \"neutron-799fdbb85b-bfzq9\" (UID: \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\") " pod="openstack/neutron-799fdbb85b-bfzq9" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.299687 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-httpd-config\") pod \"neutron-799fdbb85b-bfzq9\" (UID: \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\") " pod="openstack/neutron-799fdbb85b-bfzq9" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.299748 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-dns-swift-storage-0\") pod \"dnsmasq-dns-65757696bf-dvwq8\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.299782 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-config\") pod \"dnsmasq-dns-65757696bf-dvwq8\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.300426 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-ovsdbserver-nb\") pod \"dnsmasq-dns-65757696bf-dvwq8\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 
10:04:40.300501 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-config\") pod \"neutron-799fdbb85b-bfzq9\" (UID: \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\") " pod="openstack/neutron-799fdbb85b-bfzq9" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.300590 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-dns-svc\") pod \"dnsmasq-dns-65757696bf-dvwq8\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.300756 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgrvd\" (UniqueName: \"kubernetes.io/projected/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-kube-api-access-dgrvd\") pod \"dnsmasq-dns-65757696bf-dvwq8\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.300796 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-ovndb-tls-certs\") pod \"neutron-799fdbb85b-bfzq9\" (UID: \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\") " pod="openstack/neutron-799fdbb85b-bfzq9" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.300905 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-ovsdbserver-sb\") pod \"dnsmasq-dns-65757696bf-dvwq8\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.301046 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-dns-swift-storage-0\") pod \"dnsmasq-dns-65757696bf-dvwq8\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.301415 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-config\") pod \"dnsmasq-dns-65757696bf-dvwq8\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.302577 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-ovsdbserver-nb\") pod \"dnsmasq-dns-65757696bf-dvwq8\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.302671 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-ovsdbserver-sb\") pod \"dnsmasq-dns-65757696bf-dvwq8\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.302969 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-dns-svc\") pod \"dnsmasq-dns-65757696bf-dvwq8\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.321672 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgrvd\" (UniqueName: \"kubernetes.io/projected/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-kube-api-access-dgrvd\") pod \"dnsmasq-dns-65757696bf-dvwq8\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.346820 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.404713 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-combined-ca-bundle\") pod \"neutron-799fdbb85b-bfzq9\" (UID: \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\") " pod="openstack/neutron-799fdbb85b-bfzq9" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.404781 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw5kg\" (UniqueName: \"kubernetes.io/projected/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-kube-api-access-qw5kg\") pod \"neutron-799fdbb85b-bfzq9\" (UID: \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\") " pod="openstack/neutron-799fdbb85b-bfzq9" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.404806 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-httpd-config\") pod \"neutron-799fdbb85b-bfzq9\" (UID: \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\") " pod="openstack/neutron-799fdbb85b-bfzq9" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.404906 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-config\") pod \"neutron-799fdbb85b-bfzq9\" (UID: \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\") " pod="openstack/neutron-799fdbb85b-bfzq9" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.404983 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-ovndb-tls-certs\") pod \"neutron-799fdbb85b-bfzq9\" (UID: \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\") " pod="openstack/neutron-799fdbb85b-bfzq9" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.410930 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.412292 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-httpd-config\") pod \"neutron-799fdbb85b-bfzq9\" (UID: \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\") " pod="openstack/neutron-799fdbb85b-bfzq9" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.412390 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-config\") pod \"neutron-799fdbb85b-bfzq9\" (UID: \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\") " pod="openstack/neutron-799fdbb85b-bfzq9" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.412787 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-ovndb-tls-certs\") pod \"neutron-799fdbb85b-bfzq9\" (UID: \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\") " pod="openstack/neutron-799fdbb85b-bfzq9" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.422599 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-combined-ca-bundle\") pod \"neutron-799fdbb85b-bfzq9\" (UID: \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\") " pod="openstack/neutron-799fdbb85b-bfzq9" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.436929 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw5kg\" (UniqueName: \"kubernetes.io/projected/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-kube-api-access-qw5kg\") pod \"neutron-799fdbb85b-bfzq9\" (UID: \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\") " pod="openstack/neutron-799fdbb85b-bfzq9" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.579388 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-799fdbb85b-bfzq9" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.682462 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:04:40 crc kubenswrapper[4972]: W1121 10:04:40.703033 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e58da07_71c8_4739_848a_94e49b6c473c.slice/crio-3a1841833b6830d544c75999eef76fda9526a7505f7b724df7c3ca827dfb67a9 WatchSource:0}: Error finding container 3a1841833b6830d544c75999eef76fda9526a7505f7b724df7c3ca827dfb67a9: Status 404 returned error can't find the container with id 3a1841833b6830d544c75999eef76fda9526a7505f7b724df7c3ca827dfb67a9 Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.841752 4972 generic.go:334] "Generic (PLEG): container finished" podID="e524d1c9-e84c-42f2-badb-72bf26a9c38f" containerID="6ddcc25860ab0463b49e05a4805881975516242090de960ed42f9a067656838d" exitCode=0 Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.841817 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" event={"ID":"e524d1c9-e84c-42f2-badb-72bf26a9c38f","Type":"ContainerDied","Data":"6ddcc25860ab0463b49e05a4805881975516242090de960ed42f9a067656838d"} Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.845253 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" event={"ID":"00df1195-bd1c-43db-9bcf-48baf060f494","Type":"ContainerStarted","Data":"b05e474a7a3779315ffe78864a16856855907d61547728448ce5de7e3e7fb2d6"} Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.845378 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" podUID="00df1195-bd1c-43db-9bcf-48baf060f494" containerName="dnsmasq-dns" containerID="cri-o://b05e474a7a3779315ffe78864a16856855907d61547728448ce5de7e3e7fb2d6" gracePeriod=10 Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.845617 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.853527 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e58da07-71c8-4739-848a-94e49b6c473c","Type":"ContainerStarted","Data":"3a1841833b6830d544c75999eef76fda9526a7505f7b724df7c3ca827dfb67a9"} Nov 21 10:04:40 crc kubenswrapper[4972]: I1121 10:04:40.883254 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" podStartSLOduration=3.883237619 podStartE2EDuration="3.883237619s" podCreationTimestamp="2025-11-21 10:04:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:04:40.871002632 +0000 UTC m=+1425.980145160" watchObservedRunningTime="2025-11-21 10:04:40.883237619 +0000 UTC m=+1425.992380107" Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.229899 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.293349 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-799fdbb85b-bfzq9"] Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.306466 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65757696bf-dvwq8"] Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.324430 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-dns-svc\") pod \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\" (UID: \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\") " Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.324567 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-config\") pod \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\" (UID: \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\") " Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.324665 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-ovsdbserver-sb\") pod \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\" (UID: \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\") " Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.324758 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gss9\" (UniqueName: \"kubernetes.io/projected/e524d1c9-e84c-42f2-badb-72bf26a9c38f-kube-api-access-2gss9\") pod \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\" (UID: \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\") " Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.324888 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-ovsdbserver-nb\") pod \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\" (UID: \"e524d1c9-e84c-42f2-badb-72bf26a9c38f\") " Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.328984 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e524d1c9-e84c-42f2-badb-72bf26a9c38f-kube-api-access-2gss9" (OuterVolumeSpecName: "kube-api-access-2gss9") pod "e524d1c9-e84c-42f2-badb-72bf26a9c38f" (UID: "e524d1c9-e84c-42f2-badb-72bf26a9c38f"). InnerVolumeSpecName "kube-api-access-2gss9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.378114 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e524d1c9-e84c-42f2-badb-72bf26a9c38f" (UID: "e524d1c9-e84c-42f2-badb-72bf26a9c38f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.379033 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e524d1c9-e84c-42f2-badb-72bf26a9c38f" (UID: "e524d1c9-e84c-42f2-badb-72bf26a9c38f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.383376 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-config" (OuterVolumeSpecName: "config") pod "e524d1c9-e84c-42f2-badb-72bf26a9c38f" (UID: "e524d1c9-e84c-42f2-badb-72bf26a9c38f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.388266 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e524d1c9-e84c-42f2-badb-72bf26a9c38f" (UID: "e524d1c9-e84c-42f2-badb-72bf26a9c38f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.426757 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gss9\" (UniqueName: \"kubernetes.io/projected/e524d1c9-e84c-42f2-badb-72bf26a9c38f-kube-api-access-2gss9\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.426786 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.426796 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.426805 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.426813 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e524d1c9-e84c-42f2-badb-72bf26a9c38f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.776769 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f23a86fe-e939-4663-b964-454211c5d446" path="/var/lib/kubelet/pods/f23a86fe-e939-4663-b964-454211c5d446/volumes" Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.865995 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-799fdbb85b-bfzq9" event={"ID":"3df233db-ea36-4a96-9a2f-4f7e5be4a73c","Type":"ContainerStarted","Data":"7f27ff66420d477a2c5a023be0af54d1b40a6f54c3a980aa9339b676c2b45419"} Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.868135 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65757696bf-dvwq8" event={"ID":"99c9e220-ca0a-4830-91c2-96c2fcb4d93d","Type":"ContainerStarted","Data":"e46f546eb9d44c92689e53d9b18703ecb4e10ee9e8268cd5e616a014aa4bf537"} Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.870917 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.870914 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86dc678b79-vtvgx" event={"ID":"e524d1c9-e84c-42f2-badb-72bf26a9c38f","Type":"ContainerDied","Data":"d8e9598fcfbdb94de39fc6f0ff0342789a8790398858feca3891c0ec22927106"} Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.870997 4972 scope.go:117] "RemoveContainer" containerID="6ddcc25860ab0463b49e05a4805881975516242090de960ed42f9a067656838d" Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.903366 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86dc678b79-vtvgx"] Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.912842 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86dc678b79-vtvgx"] Nov 21 10:04:41 crc kubenswrapper[4972]: I1121 10:04:41.918074 4972 scope.go:117] "RemoveContainer" containerID="c34c8cf3552330d88fb219b989a0cfec9666eb1b4e4bb38ed6dcd70f8ad7c45b" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.537457 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.666933 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9ftd\" (UniqueName: \"kubernetes.io/projected/00df1195-bd1c-43db-9bcf-48baf060f494-kube-api-access-r9ftd\") pod \"00df1195-bd1c-43db-9bcf-48baf060f494\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.667043 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-dns-svc\") pod \"00df1195-bd1c-43db-9bcf-48baf060f494\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.667166 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-ovsdbserver-nb\") pod \"00df1195-bd1c-43db-9bcf-48baf060f494\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.667193 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-config\") pod \"00df1195-bd1c-43db-9bcf-48baf060f494\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.667253 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-dns-swift-storage-0\") pod \"00df1195-bd1c-43db-9bcf-48baf060f494\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.667341 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-ovsdbserver-sb\") pod \"00df1195-bd1c-43db-9bcf-48baf060f494\" (UID: \"00df1195-bd1c-43db-9bcf-48baf060f494\") " Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.671706 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/00df1195-bd1c-43db-9bcf-48baf060f494-kube-api-access-r9ftd" (OuterVolumeSpecName: "kube-api-access-r9ftd") pod "00df1195-bd1c-43db-9bcf-48baf060f494" (UID: "00df1195-bd1c-43db-9bcf-48baf060f494"). InnerVolumeSpecName "kube-api-access-r9ftd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.709875 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-config" (OuterVolumeSpecName: "config") pod "00df1195-bd1c-43db-9bcf-48baf060f494" (UID: "00df1195-bd1c-43db-9bcf-48baf060f494"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.713148 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "00df1195-bd1c-43db-9bcf-48baf060f494" (UID: "00df1195-bd1c-43db-9bcf-48baf060f494"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.721795 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "00df1195-bd1c-43db-9bcf-48baf060f494" (UID: "00df1195-bd1c-43db-9bcf-48baf060f494"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.725250 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "00df1195-bd1c-43db-9bcf-48baf060f494" (UID: "00df1195-bd1c-43db-9bcf-48baf060f494"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.735183 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "00df1195-bd1c-43db-9bcf-48baf060f494" (UID: "00df1195-bd1c-43db-9bcf-48baf060f494"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.769745 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.769778 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.769788 4972 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.769798 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.769807 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9ftd\" (UniqueName: \"kubernetes.io/projected/00df1195-bd1c-43db-9bcf-48baf060f494-kube-api-access-r9ftd\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.769817 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00df1195-bd1c-43db-9bcf-48baf060f494-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.880668 4972 generic.go:334] "Generic (PLEG): container finished" podID="00df1195-bd1c-43db-9bcf-48baf060f494" containerID="b05e474a7a3779315ffe78864a16856855907d61547728448ce5de7e3e7fb2d6" exitCode=0 Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.880715 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" event={"ID":"00df1195-bd1c-43db-9bcf-48baf060f494","Type":"ContainerDied","Data":"b05e474a7a3779315ffe78864a16856855907d61547728448ce5de7e3e7fb2d6"} Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.880748 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.880758 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf6b66dfc-hp427" event={"ID":"00df1195-bd1c-43db-9bcf-48baf060f494","Type":"ContainerDied","Data":"f245e39f6b046663434b473cb00b3c0e1130432944e1ddd7241e65839f6129c2"} Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.880777 4972 scope.go:117] "RemoveContainer" containerID="b05e474a7a3779315ffe78864a16856855907d61547728448ce5de7e3e7fb2d6" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.886275 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e58da07-71c8-4739-848a-94e49b6c473c","Type":"ContainerStarted","Data":"b2e7ad04d06d5cf578cf608137e619982339ee1dce176875b0863adfbcd2c5b4"} Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.892346 4972 generic.go:334] "Generic (PLEG): container finished" podID="99c9e220-ca0a-4830-91c2-96c2fcb4d93d" containerID="45839324943dfa5db8c1f7598b6cfee5f383e54c0772356dad81825e11e6c8b7" exitCode=0 Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.892506 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65757696bf-dvwq8" event={"ID":"99c9e220-ca0a-4830-91c2-96c2fcb4d93d","Type":"ContainerDied","Data":"45839324943dfa5db8c1f7598b6cfee5f383e54c0772356dad81825e11e6c8b7"} Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.897270 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-799fdbb85b-bfzq9" event={"ID":"3df233db-ea36-4a96-9a2f-4f7e5be4a73c","Type":"ContainerStarted","Data":"804834ec69c3f51d8d6ac7baf9e6f02dc3f1d86f9b43589d3e761b16c7cb0a0e"} Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.897314 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-799fdbb85b-bfzq9" event={"ID":"3df233db-ea36-4a96-9a2f-4f7e5be4a73c","Type":"ContainerStarted","Data":"205e4e594658f925dc0b429281b5275d2b2fc6c063e62b641ff6d463318934dd"} Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.897514 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-799fdbb85b-bfzq9" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.904726 4972 scope.go:117] "RemoveContainer" containerID="fd02f6843cc9a6a7631b3c6e7ce341ef4cbcd40107d68aea55b2cc466b520a7b" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.944662 4972 scope.go:117] "RemoveContainer" containerID="b05e474a7a3779315ffe78864a16856855907d61547728448ce5de7e3e7fb2d6" Nov 21 10:04:42 crc kubenswrapper[4972]: E1121 10:04:42.945667 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b05e474a7a3779315ffe78864a16856855907d61547728448ce5de7e3e7fb2d6\": container with ID starting with b05e474a7a3779315ffe78864a16856855907d61547728448ce5de7e3e7fb2d6 not found: ID does not exist" containerID="b05e474a7a3779315ffe78864a16856855907d61547728448ce5de7e3e7fb2d6" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.945712 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b05e474a7a3779315ffe78864a16856855907d61547728448ce5de7e3e7fb2d6"} err="failed to get container status \"b05e474a7a3779315ffe78864a16856855907d61547728448ce5de7e3e7fb2d6\": rpc error: code = NotFound desc = could not find container \"b05e474a7a3779315ffe78864a16856855907d61547728448ce5de7e3e7fb2d6\": container with ID starting with 
b05e474a7a3779315ffe78864a16856855907d61547728448ce5de7e3e7fb2d6 not found: ID does not exist" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.945757 4972 scope.go:117] "RemoveContainer" containerID="fd02f6843cc9a6a7631b3c6e7ce341ef4cbcd40107d68aea55b2cc466b520a7b" Nov 21 10:04:42 crc kubenswrapper[4972]: E1121 10:04:42.948499 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd02f6843cc9a6a7631b3c6e7ce341ef4cbcd40107d68aea55b2cc466b520a7b\": container with ID starting with fd02f6843cc9a6a7631b3c6e7ce341ef4cbcd40107d68aea55b2cc466b520a7b not found: ID does not exist" containerID="fd02f6843cc9a6a7631b3c6e7ce341ef4cbcd40107d68aea55b2cc466b520a7b" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.948555 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd02f6843cc9a6a7631b3c6e7ce341ef4cbcd40107d68aea55b2cc466b520a7b"} err="failed to get container status \"fd02f6843cc9a6a7631b3c6e7ce341ef4cbcd40107d68aea55b2cc466b520a7b\": rpc error: code = NotFound desc = could not find container \"fd02f6843cc9a6a7631b3c6e7ce341ef4cbcd40107d68aea55b2cc466b520a7b\": container with ID starting with fd02f6843cc9a6a7631b3c6e7ce341ef4cbcd40107d68aea55b2cc466b520a7b not found: ID does not exist" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.975607 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-799fdbb85b-bfzq9" podStartSLOduration=2.975585046 podStartE2EDuration="2.975585046s" podCreationTimestamp="2025-11-21 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:04:42.945179974 +0000 UTC m=+1428.054322482" watchObservedRunningTime="2025-11-21 10:04:42.975585046 +0000 UTC m=+1428.084727544" Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.986215 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf6b66dfc-hp427"] Nov 21 10:04:42 crc kubenswrapper[4972]: I1121 10:04:42.993501 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf6b66dfc-hp427"] Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.771305 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00df1195-bd1c-43db-9bcf-48baf060f494" path="/var/lib/kubelet/pods/00df1195-bd1c-43db-9bcf-48baf060f494/volumes" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.772503 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e524d1c9-e84c-42f2-badb-72bf26a9c38f" path="/var/lib/kubelet/pods/e524d1c9-e84c-42f2-badb-72bf26a9c38f/volumes" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.808184 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-79f8cf4757-8cflk"] Nov 21 10:04:43 crc kubenswrapper[4972]: E1121 10:04:43.808634 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00df1195-bd1c-43db-9bcf-48baf060f494" containerName="dnsmasq-dns" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.808657 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="00df1195-bd1c-43db-9bcf-48baf060f494" containerName="dnsmasq-dns" Nov 21 10:04:43 crc kubenswrapper[4972]: E1121 10:04:43.808677 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00df1195-bd1c-43db-9bcf-48baf060f494" containerName="init" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.808688 4972 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="00df1195-bd1c-43db-9bcf-48baf060f494" containerName="init" Nov 21 10:04:43 crc kubenswrapper[4972]: E1121 10:04:43.808704 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e524d1c9-e84c-42f2-badb-72bf26a9c38f" containerName="dnsmasq-dns" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.808712 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="e524d1c9-e84c-42f2-badb-72bf26a9c38f" containerName="dnsmasq-dns" Nov 21 10:04:43 crc kubenswrapper[4972]: E1121 10:04:43.808725 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e524d1c9-e84c-42f2-badb-72bf26a9c38f" containerName="init" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.808733 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="e524d1c9-e84c-42f2-badb-72bf26a9c38f" containerName="init" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.808973 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="00df1195-bd1c-43db-9bcf-48baf060f494" containerName="dnsmasq-dns" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.809005 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="e524d1c9-e84c-42f2-badb-72bf26a9c38f" containerName="dnsmasq-dns" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.810218 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.812972 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.813172 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.823146 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-79f8cf4757-8cflk"] Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.890818 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-config\") pod \"neutron-79f8cf4757-8cflk\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.890889 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-public-tls-certs\") pod \"neutron-79f8cf4757-8cflk\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.890986 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-ovndb-tls-certs\") pod \"neutron-79f8cf4757-8cflk\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.891006 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-internal-tls-certs\") pod \"neutron-79f8cf4757-8cflk\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:43 crc 
kubenswrapper[4972]: I1121 10:04:43.891034 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-httpd-config\") pod \"neutron-79f8cf4757-8cflk\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.891064 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-combined-ca-bundle\") pod \"neutron-79f8cf4757-8cflk\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.891086 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k56wn\" (UniqueName: \"kubernetes.io/projected/56aac81e-b855-4419-b8a5-8f1fc099b5e6-kube-api-access-k56wn\") pod \"neutron-79f8cf4757-8cflk\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.908392 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e58da07-71c8-4739-848a-94e49b6c473c","Type":"ContainerStarted","Data":"62f86be96e036f2ac23fd13150fbf6bacfb4b8ce5f2ba708160bbc54de2e0910"} Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.910361 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65757696bf-dvwq8" event={"ID":"99c9e220-ca0a-4830-91c2-96c2fcb4d93d","Type":"ContainerStarted","Data":"2d3e781327bf9ec8c2b2f2d0f110a02b3da34b80df7bb5e3f09d66e3f19fe1f3"} Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.911538 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.944972 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65757696bf-dvwq8" podStartSLOduration=3.944951202 podStartE2EDuration="3.944951202s" podCreationTimestamp="2025-11-21 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:04:43.941439258 +0000 UTC m=+1429.050581776" watchObservedRunningTime="2025-11-21 10:04:43.944951202 +0000 UTC m=+1429.054093710" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.992210 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-config\") pod \"neutron-79f8cf4757-8cflk\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.992266 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-public-tls-certs\") pod \"neutron-79f8cf4757-8cflk\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.992363 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-ovndb-tls-certs\") 
pod \"neutron-79f8cf4757-8cflk\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.992399 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-internal-tls-certs\") pod \"neutron-79f8cf4757-8cflk\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.992426 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-httpd-config\") pod \"neutron-79f8cf4757-8cflk\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.992456 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-combined-ca-bundle\") pod \"neutron-79f8cf4757-8cflk\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.992490 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k56wn\" (UniqueName: \"kubernetes.io/projected/56aac81e-b855-4419-b8a5-8f1fc099b5e6-kube-api-access-k56wn\") pod \"neutron-79f8cf4757-8cflk\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.997587 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-combined-ca-bundle\") pod \"neutron-79f8cf4757-8cflk\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.997603 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-config\") pod \"neutron-79f8cf4757-8cflk\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.997698 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-ovndb-tls-certs\") pod \"neutron-79f8cf4757-8cflk\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.997721 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-public-tls-certs\") pod \"neutron-79f8cf4757-8cflk\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:43 crc kubenswrapper[4972]: I1121 10:04:43.998192 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-httpd-config\") pod \"neutron-79f8cf4757-8cflk\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:44 crc kubenswrapper[4972]: 
I1121 10:04:44.004399 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-internal-tls-certs\") pod \"neutron-79f8cf4757-8cflk\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:44 crc kubenswrapper[4972]: I1121 10:04:44.012450 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k56wn\" (UniqueName: \"kubernetes.io/projected/56aac81e-b855-4419-b8a5-8f1fc099b5e6-kube-api-access-k56wn\") pod \"neutron-79f8cf4757-8cflk\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:44 crc kubenswrapper[4972]: I1121 10:04:44.129333 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:44 crc kubenswrapper[4972]: I1121 10:04:44.682394 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-79f8cf4757-8cflk"] Nov 21 10:04:44 crc kubenswrapper[4972]: I1121 10:04:44.927701 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e58da07-71c8-4739-848a-94e49b6c473c","Type":"ContainerStarted","Data":"93de2e8b696fe5a07f80ebff1526da274e75d0fbfd512cadffc83d5b337356aa"} Nov 21 10:04:44 crc kubenswrapper[4972]: I1121 10:04:44.930972 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79f8cf4757-8cflk" event={"ID":"56aac81e-b855-4419-b8a5-8f1fc099b5e6","Type":"ContainerStarted","Data":"cd60b1b2d6420183cc66e449c1678dedc4fe565967fabc882fd0a4a7eca66999"} Nov 21 10:04:45 crc kubenswrapper[4972]: I1121 10:04:45.940515 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e58da07-71c8-4739-848a-94e49b6c473c","Type":"ContainerStarted","Data":"aeab841033fd01e3f4e3ea8935c42be4a459c6ac89c4166b63e1de3e9f14cdbd"} Nov 21 10:04:45 crc kubenswrapper[4972]: I1121 10:04:45.941303 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 10:04:45 crc kubenswrapper[4972]: I1121 10:04:45.945247 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79f8cf4757-8cflk" event={"ID":"56aac81e-b855-4419-b8a5-8f1fc099b5e6","Type":"ContainerStarted","Data":"4d77ecd5438c1e9b16f7c8d4f0e5a8b33983d1efefc68af6391bbc8b9f26e966"} Nov 21 10:04:45 crc kubenswrapper[4972]: I1121 10:04:45.945274 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:04:45 crc kubenswrapper[4972]: I1121 10:04:45.945285 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79f8cf4757-8cflk" event={"ID":"56aac81e-b855-4419-b8a5-8f1fc099b5e6","Type":"ContainerStarted","Data":"a84df8a5c99a95c300cc9bc766b529621a802a107975b46bcdb8f96199772bb6"} Nov 21 10:04:45 crc kubenswrapper[4972]: I1121 10:04:45.961477 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.534406621 podStartE2EDuration="6.961461152s" podCreationTimestamp="2025-11-21 10:04:39 +0000 UTC" firstStartedPulling="2025-11-21 10:04:40.71452994 +0000 UTC m=+1425.823672448" lastFinishedPulling="2025-11-21 10:04:45.141584481 +0000 UTC m=+1430.250726979" observedRunningTime="2025-11-21 10:04:45.960432715 +0000 UTC m=+1431.069575233" watchObservedRunningTime="2025-11-21 10:04:45.961461152 +0000 UTC m=+1431.070603650" Nov 21 10:04:45 
crc kubenswrapper[4972]: I1121 10:04:45.979338 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-79f8cf4757-8cflk" podStartSLOduration=2.97931893 podStartE2EDuration="2.97931893s" podCreationTimestamp="2025-11-21 10:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:04:45.976382131 +0000 UTC m=+1431.085524639" watchObservedRunningTime="2025-11-21 10:04:45.97931893 +0000 UTC m=+1431.088461428" Nov 21 10:04:46 crc kubenswrapper[4972]: I1121 10:04:46.083298 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:46 crc kubenswrapper[4972]: I1121 10:04:46.303010 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:46 crc kubenswrapper[4972]: I1121 10:04:46.954966 4972 generic.go:334] "Generic (PLEG): container finished" podID="025ad09c-467a-451c-a24d-4bf686469677" containerID="6d2aa63779319b38cf4983db05e91553bbee48b755832575671149b098b5a84b" exitCode=0 Nov 21 10:04:46 crc kubenswrapper[4972]: I1121 10:04:46.955143 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s77fk" event={"ID":"025ad09c-467a-451c-a24d-4bf686469677","Type":"ContainerDied","Data":"6d2aa63779319b38cf4983db05e91553bbee48b755832575671149b098b5a84b"} Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.376444 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-s77fk" Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.485561 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-db-sync-config-data\") pod \"025ad09c-467a-451c-a24d-4bf686469677\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.485652 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-config-data\") pod \"025ad09c-467a-451c-a24d-4bf686469677\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.485708 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzmg9\" (UniqueName: \"kubernetes.io/projected/025ad09c-467a-451c-a24d-4bf686469677-kube-api-access-wzmg9\") pod \"025ad09c-467a-451c-a24d-4bf686469677\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.485792 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-combined-ca-bundle\") pod \"025ad09c-467a-451c-a24d-4bf686469677\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.485918 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-scripts\") pod \"025ad09c-467a-451c-a24d-4bf686469677\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.485968 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/025ad09c-467a-451c-a24d-4bf686469677-etc-machine-id\") pod \"025ad09c-467a-451c-a24d-4bf686469677\" (UID: \"025ad09c-467a-451c-a24d-4bf686469677\") " Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.486127 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/025ad09c-467a-451c-a24d-4bf686469677-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "025ad09c-467a-451c-a24d-4bf686469677" (UID: "025ad09c-467a-451c-a24d-4bf686469677"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.486622 4972 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/025ad09c-467a-451c-a24d-4bf686469677-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.491971 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-scripts" (OuterVolumeSpecName: "scripts") pod "025ad09c-467a-451c-a24d-4bf686469677" (UID: "025ad09c-467a-451c-a24d-4bf686469677"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.500004 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/025ad09c-467a-451c-a24d-4bf686469677-kube-api-access-wzmg9" (OuterVolumeSpecName: "kube-api-access-wzmg9") pod "025ad09c-467a-451c-a24d-4bf686469677" (UID: "025ad09c-467a-451c-a24d-4bf686469677"). InnerVolumeSpecName "kube-api-access-wzmg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.500080 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "025ad09c-467a-451c-a24d-4bf686469677" (UID: "025ad09c-467a-451c-a24d-4bf686469677"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.526909 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "025ad09c-467a-451c-a24d-4bf686469677" (UID: "025ad09c-467a-451c-a24d-4bf686469677"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.557911 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-config-data" (OuterVolumeSpecName: "config-data") pod "025ad09c-467a-451c-a24d-4bf686469677" (UID: "025ad09c-467a-451c-a24d-4bf686469677"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.588203 4972 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.588236 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.588249 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzmg9\" (UniqueName: \"kubernetes.io/projected/025ad09c-467a-451c-a24d-4bf686469677-kube-api-access-wzmg9\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.588262 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.588273 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/025ad09c-467a-451c-a24d-4bf686469677-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.827541 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.972742 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s77fk" event={"ID":"025ad09c-467a-451c-a24d-4bf686469677","Type":"ContainerDied","Data":"94f34478dfd43ed647500fe745f03f6a8493142aec0aef905ae82fa49f7e90fe"} Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.972779 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94f34478dfd43ed647500fe745f03f6a8493142aec0aef905ae82fa49f7e90fe" Nov 21 10:04:48 crc kubenswrapper[4972]: I1121 10:04:48.972794 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-s77fk" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.060854 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.142801 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b7d889fc4-nj77m"] Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.143484 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b7d889fc4-nj77m" podUID="107f78da-c307-41d6-9491-c4b4e237649a" containerName="barbican-api-log" containerID="cri-o://5bb248abf415e17b3ba6bd15434eaabfb0ccc903170317dbda14124b253860ec" gracePeriod=30 Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.143957 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b7d889fc4-nj77m" podUID="107f78da-c307-41d6-9491-c4b4e237649a" containerName="barbican-api" containerID="cri-o://26204dcf3f5a71978f41f8918a75d69f23bbf4096988654b5964c1b6759ed6fa" gracePeriod=30 Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.327256 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 10:04:49 crc kubenswrapper[4972]: E1121 10:04:49.327594 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025ad09c-467a-451c-a24d-4bf686469677" containerName="cinder-db-sync" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.327610 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="025ad09c-467a-451c-a24d-4bf686469677" containerName="cinder-db-sync" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.327782 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="025ad09c-467a-451c-a24d-4bf686469677" containerName="cinder-db-sync" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.328724 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.337336 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hb4zz" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.337615 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.337943 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.338085 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.355287 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.405034 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-scripts\") pod \"cinder-scheduler-0\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " pod="openstack/cinder-scheduler-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.405110 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " pod="openstack/cinder-scheduler-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.405140 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " pod="openstack/cinder-scheduler-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.405164 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q7kq\" (UniqueName: \"kubernetes.io/projected/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-kube-api-access-5q7kq\") pod \"cinder-scheduler-0\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " pod="openstack/cinder-scheduler-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.405191 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " pod="openstack/cinder-scheduler-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.405256 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-config-data\") pod \"cinder-scheduler-0\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " pod="openstack/cinder-scheduler-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.433115 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65757696bf-dvwq8"] Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.433521 4972 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-65757696bf-dvwq8" podUID="99c9e220-ca0a-4830-91c2-96c2fcb4d93d" containerName="dnsmasq-dns" containerID="cri-o://2d3e781327bf9ec8c2b2f2d0f110a02b3da34b80df7bb5e3f09d66e3f19fe1f3" gracePeriod=10 Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.438487 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.487625 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586689c4f9-dx6rb"] Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.490553 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.499337 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586689c4f9-dx6rb"] Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.506739 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " pod="openstack/cinder-scheduler-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.506961 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " pod="openstack/cinder-scheduler-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.507043 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q7kq\" (UniqueName: \"kubernetes.io/projected/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-kube-api-access-5q7kq\") pod \"cinder-scheduler-0\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " pod="openstack/cinder-scheduler-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.507123 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " pod="openstack/cinder-scheduler-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.507244 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-config-data\") pod \"cinder-scheduler-0\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " pod="openstack/cinder-scheduler-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.507353 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-scripts\") pod \"cinder-scheduler-0\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " pod="openstack/cinder-scheduler-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.507055 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " pod="openstack/cinder-scheduler-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.526609 4972 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-scripts\") pod \"cinder-scheduler-0\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " pod="openstack/cinder-scheduler-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.526850 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " pod="openstack/cinder-scheduler-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.527019 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-config-data\") pod \"cinder-scheduler-0\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " pod="openstack/cinder-scheduler-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.527123 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " pod="openstack/cinder-scheduler-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.570670 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q7kq\" (UniqueName: \"kubernetes.io/projected/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-kube-api-access-5q7kq\") pod \"cinder-scheduler-0\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " pod="openstack/cinder-scheduler-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.608985 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdkz4\" (UniqueName: \"kubernetes.io/projected/06293605-a389-4f8d-a217-7007fd7c6ade-kube-api-access-fdkz4\") pod \"dnsmasq-dns-586689c4f9-dx6rb\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.609106 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-ovsdbserver-nb\") pod \"dnsmasq-dns-586689c4f9-dx6rb\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.609134 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-dns-svc\") pod \"dnsmasq-dns-586689c4f9-dx6rb\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.609177 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-ovsdbserver-sb\") pod \"dnsmasq-dns-586689c4f9-dx6rb\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.609210 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-config\") pod \"dnsmasq-dns-586689c4f9-dx6rb\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.609226 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-dns-swift-storage-0\") pod \"dnsmasq-dns-586689c4f9-dx6rb\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.612787 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.617110 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.621181 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.651187 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.672134 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.713146 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-ovsdbserver-sb\") pod \"dnsmasq-dns-586689c4f9-dx6rb\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.714051 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-ovsdbserver-sb\") pod \"dnsmasq-dns-586689c4f9-dx6rb\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.714132 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-config\") pod \"dnsmasq-dns-586689c4f9-dx6rb\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.714150 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-dns-swift-storage-0\") pod \"dnsmasq-dns-586689c4f9-dx6rb\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.714178 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9ngf\" (UniqueName: \"kubernetes.io/projected/306fe7eb-e7e7-45c6-be73-29c38afc1f06-kube-api-access-s9ngf\") pod \"cinder-api-0\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.714200 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-scripts\") pod \"cinder-api-0\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.714215 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/306fe7eb-e7e7-45c6-be73-29c38afc1f06-logs\") pod \"cinder-api-0\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.714250 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdkz4\" (UniqueName: \"kubernetes.io/projected/06293605-a389-4f8d-a217-7007fd7c6ade-kube-api-access-fdkz4\") pod \"dnsmasq-dns-586689c4f9-dx6rb\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.714283 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/306fe7eb-e7e7-45c6-be73-29c38afc1f06-etc-machine-id\") pod \"cinder-api-0\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.714299 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-config-data\") pod \"cinder-api-0\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.714349 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.714376 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-ovsdbserver-nb\") pod \"dnsmasq-dns-586689c4f9-dx6rb\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.714396 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-dns-svc\") pod \"dnsmasq-dns-586689c4f9-dx6rb\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.714417 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-config-data-custom\") pod \"cinder-api-0\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.715019 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-config\") pod \"dnsmasq-dns-586689c4f9-dx6rb\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " 
pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.715524 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-dns-swift-storage-0\") pod \"dnsmasq-dns-586689c4f9-dx6rb\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.718521 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-dns-svc\") pod \"dnsmasq-dns-586689c4f9-dx6rb\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.720496 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-ovsdbserver-nb\") pod \"dnsmasq-dns-586689c4f9-dx6rb\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.736566 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdkz4\" (UniqueName: \"kubernetes.io/projected/06293605-a389-4f8d-a217-7007fd7c6ade-kube-api-access-fdkz4\") pod \"dnsmasq-dns-586689c4f9-dx6rb\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.815998 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9ngf\" (UniqueName: \"kubernetes.io/projected/306fe7eb-e7e7-45c6-be73-29c38afc1f06-kube-api-access-s9ngf\") pod \"cinder-api-0\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.816062 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-scripts\") pod \"cinder-api-0\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.816084 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/306fe7eb-e7e7-45c6-be73-29c38afc1f06-logs\") pod \"cinder-api-0\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.816145 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/306fe7eb-e7e7-45c6-be73-29c38afc1f06-etc-machine-id\") pod \"cinder-api-0\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.816170 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-config-data\") pod \"cinder-api-0\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.816227 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.816285 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-config-data-custom\") pod \"cinder-api-0\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.820377 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/306fe7eb-e7e7-45c6-be73-29c38afc1f06-logs\") pod \"cinder-api-0\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.820449 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/306fe7eb-e7e7-45c6-be73-29c38afc1f06-etc-machine-id\") pod \"cinder-api-0\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.822289 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-scripts\") pod \"cinder-api-0\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.829788 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-config-data-custom\") pod \"cinder-api-0\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.835668 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.844790 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-config-data\") pod \"cinder-api-0\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.866391 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9ngf\" (UniqueName: \"kubernetes.io/projected/306fe7eb-e7e7-45c6-be73-29c38afc1f06-kube-api-access-s9ngf\") pod \"cinder-api-0\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " pod="openstack/cinder-api-0" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.936501 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:49 crc kubenswrapper[4972]: I1121 10:04:49.947706 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.083090 4972 generic.go:334] "Generic (PLEG): container finished" podID="99c9e220-ca0a-4830-91c2-96c2fcb4d93d" containerID="2d3e781327bf9ec8c2b2f2d0f110a02b3da34b80df7bb5e3f09d66e3f19fe1f3" exitCode=0 Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.083281 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65757696bf-dvwq8" event={"ID":"99c9e220-ca0a-4830-91c2-96c2fcb4d93d","Type":"ContainerDied","Data":"2d3e781327bf9ec8c2b2f2d0f110a02b3da34b80df7bb5e3f09d66e3f19fe1f3"} Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.117729 4972 generic.go:334] "Generic (PLEG): container finished" podID="107f78da-c307-41d6-9491-c4b4e237649a" containerID="5bb248abf415e17b3ba6bd15434eaabfb0ccc903170317dbda14124b253860ec" exitCode=143 Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.117782 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b7d889fc4-nj77m" event={"ID":"107f78da-c307-41d6-9491-c4b4e237649a","Type":"ContainerDied","Data":"5bb248abf415e17b3ba6bd15434eaabfb0ccc903170317dbda14124b253860ec"} Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.172620 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.236430 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-config\") pod \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.236539 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-dns-swift-storage-0\") pod \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.236591 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-dns-svc\") pod \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.236717 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgrvd\" (UniqueName: \"kubernetes.io/projected/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-kube-api-access-dgrvd\") pod \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.236806 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-ovsdbserver-sb\") pod \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.236857 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-ovsdbserver-nb\") pod \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\" (UID: \"99c9e220-ca0a-4830-91c2-96c2fcb4d93d\") " Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 
10:04:50.250820 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-kube-api-access-dgrvd" (OuterVolumeSpecName: "kube-api-access-dgrvd") pod "99c9e220-ca0a-4830-91c2-96c2fcb4d93d" (UID: "99c9e220-ca0a-4830-91c2-96c2fcb4d93d"). InnerVolumeSpecName "kube-api-access-dgrvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.340472 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgrvd\" (UniqueName: \"kubernetes.io/projected/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-kube-api-access-dgrvd\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.362525 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.398361 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "99c9e220-ca0a-4830-91c2-96c2fcb4d93d" (UID: "99c9e220-ca0a-4830-91c2-96c2fcb4d93d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.410264 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "99c9e220-ca0a-4830-91c2-96c2fcb4d93d" (UID: "99c9e220-ca0a-4830-91c2-96c2fcb4d93d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.463169 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.463212 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.463328 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "99c9e220-ca0a-4830-91c2-96c2fcb4d93d" (UID: "99c9e220-ca0a-4830-91c2-96c2fcb4d93d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.470662 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "99c9e220-ca0a-4830-91c2-96c2fcb4d93d" (UID: "99c9e220-ca0a-4830-91c2-96c2fcb4d93d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.475379 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-config" (OuterVolumeSpecName: "config") pod "99c9e220-ca0a-4830-91c2-96c2fcb4d93d" (UID: "99c9e220-ca0a-4830-91c2-96c2fcb4d93d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.566109 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.566698 4972 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.566817 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99c9e220-ca0a-4830-91c2-96c2fcb4d93d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.622060 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 21 10:04:50 crc kubenswrapper[4972]: I1121 10:04:50.646963 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586689c4f9-dx6rb"] Nov 21 10:04:51 crc kubenswrapper[4972]: I1121 10:04:51.136985 4972 generic.go:334] "Generic (PLEG): container finished" podID="06293605-a389-4f8d-a217-7007fd7c6ade" containerID="bab8c2730c4d93cb8829d31f9bb2123b52b26288d3ff5218adfa2edf501f3037" exitCode=0 Nov 21 10:04:51 crc kubenswrapper[4972]: I1121 10:04:51.137044 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" event={"ID":"06293605-a389-4f8d-a217-7007fd7c6ade","Type":"ContainerDied","Data":"bab8c2730c4d93cb8829d31f9bb2123b52b26288d3ff5218adfa2edf501f3037"} Nov 21 10:04:51 crc kubenswrapper[4972]: I1121 10:04:51.137068 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" event={"ID":"06293605-a389-4f8d-a217-7007fd7c6ade","Type":"ContainerStarted","Data":"a993f8bc920bdb5ffec88aa20245575c78195b487a3c18b3da3c9ea3b93cb6b4"} Nov 21 10:04:51 crc kubenswrapper[4972]: I1121 10:04:51.154571 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65757696bf-dvwq8" event={"ID":"99c9e220-ca0a-4830-91c2-96c2fcb4d93d","Type":"ContainerDied","Data":"e46f546eb9d44c92689e53d9b18703ecb4e10ee9e8268cd5e616a014aa4bf537"} Nov 21 10:04:51 crc kubenswrapper[4972]: I1121 10:04:51.154621 4972 scope.go:117] "RemoveContainer" containerID="2d3e781327bf9ec8c2b2f2d0f110a02b3da34b80df7bb5e3f09d66e3f19fe1f3" Nov 21 10:04:51 crc kubenswrapper[4972]: I1121 10:04:51.154753 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65757696bf-dvwq8" Nov 21 10:04:51 crc kubenswrapper[4972]: I1121 10:04:51.174818 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9","Type":"ContainerStarted","Data":"aa40782bb440e98a8608c9e90c51347663a084e81657745eba993da011bbd071"} Nov 21 10:04:51 crc kubenswrapper[4972]: I1121 10:04:51.182885 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"306fe7eb-e7e7-45c6-be73-29c38afc1f06","Type":"ContainerStarted","Data":"956ecc6452ce5783531ad1200b5acefe0016bec0de3d42a8958c4aeef75d19ba"} Nov 21 10:04:51 crc kubenswrapper[4972]: I1121 10:04:51.213968 4972 scope.go:117] "RemoveContainer" containerID="45839324943dfa5db8c1f7598b6cfee5f383e54c0772356dad81825e11e6c8b7" Nov 21 10:04:51 crc kubenswrapper[4972]: I1121 10:04:51.214812 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65757696bf-dvwq8"] Nov 21 10:04:51 crc kubenswrapper[4972]: I1121 10:04:51.227734 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65757696bf-dvwq8"] Nov 21 10:04:51 crc kubenswrapper[4972]: I1121 10:04:51.788966 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99c9e220-ca0a-4830-91c2-96c2fcb4d93d" path="/var/lib/kubelet/pods/99c9e220-ca0a-4830-91c2-96c2fcb4d93d/volumes" Nov 21 10:04:52 crc kubenswrapper[4972]: I1121 10:04:52.204904 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 21 10:04:52 crc kubenswrapper[4972]: I1121 10:04:52.221015 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9","Type":"ContainerStarted","Data":"4e680d6cdc146edd49c7480a589768447776d27d89950c1f345f0a8f44c5bb69"} Nov 21 10:04:52 crc kubenswrapper[4972]: I1121 10:04:52.222867 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"306fe7eb-e7e7-45c6-be73-29c38afc1f06","Type":"ContainerStarted","Data":"996b24ea8c6099e973bb60758f9d8afc650ff3a4879ea678a02fb5ffd0631d28"} Nov 21 10:04:52 crc kubenswrapper[4972]: I1121 10:04:52.228994 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" event={"ID":"06293605-a389-4f8d-a217-7007fd7c6ade","Type":"ContainerStarted","Data":"413b760fe413b5991acf15f38783e9894aa4c315734c0b78b5a77c6e03726693"} Nov 21 10:04:52 crc kubenswrapper[4972]: I1121 10:04:52.230125 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:52 crc kubenswrapper[4972]: I1121 10:04:52.254278 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" podStartSLOduration=3.254173512 podStartE2EDuration="3.254173512s" podCreationTimestamp="2025-11-21 10:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:04:52.248658095 +0000 UTC m=+1437.357800613" watchObservedRunningTime="2025-11-21 10:04:52.254173512 +0000 UTC m=+1437.363316010" Nov 21 10:04:52 crc kubenswrapper[4972]: I1121 10:04:52.631679 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b7d889fc4-nj77m" podUID="107f78da-c307-41d6-9491-c4b4e237649a" containerName="barbican-api" probeResult="failure" output="Get 
\"http://10.217.0.146:9311/healthcheck\": read tcp 10.217.0.2:48782->10.217.0.146:9311: read: connection reset by peer" Nov 21 10:04:52 crc kubenswrapper[4972]: I1121 10:04:52.631749 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b7d889fc4-nj77m" podUID="107f78da-c307-41d6-9491-c4b4e237649a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.146:9311/healthcheck\": read tcp 10.217.0.2:48778->10.217.0.146:9311: read: connection reset by peer" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.057796 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.125767 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/107f78da-c307-41d6-9491-c4b4e237649a-logs\") pod \"107f78da-c307-41d6-9491-c4b4e237649a\" (UID: \"107f78da-c307-41d6-9491-c4b4e237649a\") " Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.125876 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nswbm\" (UniqueName: \"kubernetes.io/projected/107f78da-c307-41d6-9491-c4b4e237649a-kube-api-access-nswbm\") pod \"107f78da-c307-41d6-9491-c4b4e237649a\" (UID: \"107f78da-c307-41d6-9491-c4b4e237649a\") " Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.125988 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/107f78da-c307-41d6-9491-c4b4e237649a-config-data\") pod \"107f78da-c307-41d6-9491-c4b4e237649a\" (UID: \"107f78da-c307-41d6-9491-c4b4e237649a\") " Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.126063 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/107f78da-c307-41d6-9491-c4b4e237649a-config-data-custom\") pod \"107f78da-c307-41d6-9491-c4b4e237649a\" (UID: \"107f78da-c307-41d6-9491-c4b4e237649a\") " Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.126108 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/107f78da-c307-41d6-9491-c4b4e237649a-combined-ca-bundle\") pod \"107f78da-c307-41d6-9491-c4b4e237649a\" (UID: \"107f78da-c307-41d6-9491-c4b4e237649a\") " Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.126420 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/107f78da-c307-41d6-9491-c4b4e237649a-logs" (OuterVolumeSpecName: "logs") pod "107f78da-c307-41d6-9491-c4b4e237649a" (UID: "107f78da-c307-41d6-9491-c4b4e237649a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.127292 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/107f78da-c307-41d6-9491-c4b4e237649a-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.144986 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/107f78da-c307-41d6-9491-c4b4e237649a-kube-api-access-nswbm" (OuterVolumeSpecName: "kube-api-access-nswbm") pod "107f78da-c307-41d6-9491-c4b4e237649a" (UID: "107f78da-c307-41d6-9491-c4b4e237649a"). InnerVolumeSpecName "kube-api-access-nswbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.158018 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/107f78da-c307-41d6-9491-c4b4e237649a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "107f78da-c307-41d6-9491-c4b4e237649a" (UID: "107f78da-c307-41d6-9491-c4b4e237649a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.178016 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/107f78da-c307-41d6-9491-c4b4e237649a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "107f78da-c307-41d6-9491-c4b4e237649a" (UID: "107f78da-c307-41d6-9491-c4b4e237649a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.229024 4972 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/107f78da-c307-41d6-9491-c4b4e237649a-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.229058 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/107f78da-c307-41d6-9491-c4b4e237649a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.229068 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nswbm\" (UniqueName: \"kubernetes.io/projected/107f78da-c307-41d6-9491-c4b4e237649a-kube-api-access-nswbm\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.229502 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/107f78da-c307-41d6-9491-c4b4e237649a-config-data" (OuterVolumeSpecName: "config-data") pod "107f78da-c307-41d6-9491-c4b4e237649a" (UID: "107f78da-c307-41d6-9491-c4b4e237649a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.240595 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9","Type":"ContainerStarted","Data":"30c1c69d07ce7356438bfb3dbd1483f98dec1c2a4432b1db1e595f58ceda5ecf"} Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.243671 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"306fe7eb-e7e7-45c6-be73-29c38afc1f06","Type":"ContainerStarted","Data":"b1b93fd01b02eea710fd9b8d918d758f074ef0058653280e85bc9501058950a8"} Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.243876 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="306fe7eb-e7e7-45c6-be73-29c38afc1f06" containerName="cinder-api-log" containerID="cri-o://996b24ea8c6099e973bb60758f9d8afc650ff3a4879ea678a02fb5ffd0631d28" gracePeriod=30 Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.244222 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.244275 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="306fe7eb-e7e7-45c6-be73-29c38afc1f06" containerName="cinder-api" containerID="cri-o://b1b93fd01b02eea710fd9b8d918d758f074ef0058653280e85bc9501058950a8" gracePeriod=30 Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.251468 4972 generic.go:334] "Generic (PLEG): container finished" podID="107f78da-c307-41d6-9491-c4b4e237649a" containerID="26204dcf3f5a71978f41f8918a75d69f23bbf4096988654b5964c1b6759ed6fa" exitCode=0 Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.252545 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b7d889fc4-nj77m" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.255012 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b7d889fc4-nj77m" event={"ID":"107f78da-c307-41d6-9491-c4b4e237649a","Type":"ContainerDied","Data":"26204dcf3f5a71978f41f8918a75d69f23bbf4096988654b5964c1b6759ed6fa"} Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.255083 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b7d889fc4-nj77m" event={"ID":"107f78da-c307-41d6-9491-c4b4e237649a","Type":"ContainerDied","Data":"add8e9665eea0e1dc66e28b4d24307cbda65b2afddf490510173395c3622dad8"} Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.255118 4972 scope.go:117] "RemoveContainer" containerID="26204dcf3f5a71978f41f8918a75d69f23bbf4096988654b5964c1b6759ed6fa" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.284688 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.507578246 podStartE2EDuration="4.284672002s" podCreationTimestamp="2025-11-21 10:04:49 +0000 UTC" firstStartedPulling="2025-11-21 10:04:50.323035734 +0000 UTC m=+1435.432178232" lastFinishedPulling="2025-11-21 10:04:51.10012949 +0000 UTC m=+1436.209271988" observedRunningTime="2025-11-21 10:04:53.283192742 +0000 UTC m=+1438.392335230" watchObservedRunningTime="2025-11-21 10:04:53.284672002 +0000 UTC m=+1438.393814500" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.311280 4972 scope.go:117] "RemoveContainer" containerID="5bb248abf415e17b3ba6bd15434eaabfb0ccc903170317dbda14124b253860ec" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.330444 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/107f78da-c307-41d6-9491-c4b4e237649a-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.334623 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.334610456 podStartE2EDuration="4.334610456s" podCreationTimestamp="2025-11-21 10:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:04:53.329298694 +0000 UTC m=+1438.438441212" watchObservedRunningTime="2025-11-21 10:04:53.334610456 +0000 UTC m=+1438.443752954" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.356041 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b7d889fc4-nj77m"] Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.366225 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6b7d889fc4-nj77m"] Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.445641 4972 scope.go:117] "RemoveContainer" containerID="26204dcf3f5a71978f41f8918a75d69f23bbf4096988654b5964c1b6759ed6fa" Nov 21 10:04:53 crc kubenswrapper[4972]: E1121 10:04:53.446130 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26204dcf3f5a71978f41f8918a75d69f23bbf4096988654b5964c1b6759ed6fa\": container with ID starting with 26204dcf3f5a71978f41f8918a75d69f23bbf4096988654b5964c1b6759ed6fa not found: ID does not exist" containerID="26204dcf3f5a71978f41f8918a75d69f23bbf4096988654b5964c1b6759ed6fa" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.446214 4972 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26204dcf3f5a71978f41f8918a75d69f23bbf4096988654b5964c1b6759ed6fa"} err="failed to get container status \"26204dcf3f5a71978f41f8918a75d69f23bbf4096988654b5964c1b6759ed6fa\": rpc error: code = NotFound desc = could not find container \"26204dcf3f5a71978f41f8918a75d69f23bbf4096988654b5964c1b6759ed6fa\": container with ID starting with 26204dcf3f5a71978f41f8918a75d69f23bbf4096988654b5964c1b6759ed6fa not found: ID does not exist" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.446294 4972 scope.go:117] "RemoveContainer" containerID="5bb248abf415e17b3ba6bd15434eaabfb0ccc903170317dbda14124b253860ec" Nov 21 10:04:53 crc kubenswrapper[4972]: E1121 10:04:53.446730 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bb248abf415e17b3ba6bd15434eaabfb0ccc903170317dbda14124b253860ec\": container with ID starting with 5bb248abf415e17b3ba6bd15434eaabfb0ccc903170317dbda14124b253860ec not found: ID does not exist" containerID="5bb248abf415e17b3ba6bd15434eaabfb0ccc903170317dbda14124b253860ec" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.446811 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bb248abf415e17b3ba6bd15434eaabfb0ccc903170317dbda14124b253860ec"} err="failed to get container status \"5bb248abf415e17b3ba6bd15434eaabfb0ccc903170317dbda14124b253860ec\": rpc error: code = NotFound desc = could not find container \"5bb248abf415e17b3ba6bd15434eaabfb0ccc903170317dbda14124b253860ec\": container with ID starting with 5bb248abf415e17b3ba6bd15434eaabfb0ccc903170317dbda14124b253860ec not found: ID does not exist" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.787513 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="107f78da-c307-41d6-9491-c4b4e237649a" path="/var/lib/kubelet/pods/107f78da-c307-41d6-9491-c4b4e237649a/volumes" Nov 21 10:04:53 crc kubenswrapper[4972]: I1121 10:04:53.937853 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.042963 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/306fe7eb-e7e7-45c6-be73-29c38afc1f06-etc-machine-id\") pod \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.043065 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-scripts\") pod \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.043105 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/306fe7eb-e7e7-45c6-be73-29c38afc1f06-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "306fe7eb-e7e7-45c6-be73-29c38afc1f06" (UID: "306fe7eb-e7e7-45c6-be73-29c38afc1f06"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.043137 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-config-data\") pod \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.043194 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-config-data-custom\") pod \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.043262 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-combined-ca-bundle\") pod \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.043331 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/306fe7eb-e7e7-45c6-be73-29c38afc1f06-logs\") pod \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.043364 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9ngf\" (UniqueName: \"kubernetes.io/projected/306fe7eb-e7e7-45c6-be73-29c38afc1f06-kube-api-access-s9ngf\") pod \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\" (UID: \"306fe7eb-e7e7-45c6-be73-29c38afc1f06\") " Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.043774 4972 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/306fe7eb-e7e7-45c6-be73-29c38afc1f06-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.043859 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/306fe7eb-e7e7-45c6-be73-29c38afc1f06-logs" (OuterVolumeSpecName: "logs") pod "306fe7eb-e7e7-45c6-be73-29c38afc1f06" (UID: "306fe7eb-e7e7-45c6-be73-29c38afc1f06"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.048239 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "306fe7eb-e7e7-45c6-be73-29c38afc1f06" (UID: "306fe7eb-e7e7-45c6-be73-29c38afc1f06"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.048344 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/306fe7eb-e7e7-45c6-be73-29c38afc1f06-kube-api-access-s9ngf" (OuterVolumeSpecName: "kube-api-access-s9ngf") pod "306fe7eb-e7e7-45c6-be73-29c38afc1f06" (UID: "306fe7eb-e7e7-45c6-be73-29c38afc1f06"). InnerVolumeSpecName "kube-api-access-s9ngf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.052433 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-scripts" (OuterVolumeSpecName: "scripts") pod "306fe7eb-e7e7-45c6-be73-29c38afc1f06" (UID: "306fe7eb-e7e7-45c6-be73-29c38afc1f06"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.079109 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "306fe7eb-e7e7-45c6-be73-29c38afc1f06" (UID: "306fe7eb-e7e7-45c6-be73-29c38afc1f06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.096274 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-config-data" (OuterVolumeSpecName: "config-data") pod "306fe7eb-e7e7-45c6-be73-29c38afc1f06" (UID: "306fe7eb-e7e7-45c6-be73-29c38afc1f06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.145019 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.145053 4972 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.145065 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.145074 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/306fe7eb-e7e7-45c6-be73-29c38afc1f06-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.145082 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9ngf\" (UniqueName: \"kubernetes.io/projected/306fe7eb-e7e7-45c6-be73-29c38afc1f06-kube-api-access-s9ngf\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.145092 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/306fe7eb-e7e7-45c6-be73-29c38afc1f06-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.261283 4972 generic.go:334] "Generic (PLEG): container finished" podID="306fe7eb-e7e7-45c6-be73-29c38afc1f06" containerID="b1b93fd01b02eea710fd9b8d918d758f074ef0058653280e85bc9501058950a8" exitCode=0 Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.261319 4972 generic.go:334] "Generic (PLEG): container finished" podID="306fe7eb-e7e7-45c6-be73-29c38afc1f06" containerID="996b24ea8c6099e973bb60758f9d8afc650ff3a4879ea678a02fb5ffd0631d28" exitCode=143 Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.261345 4972 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.261344 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"306fe7eb-e7e7-45c6-be73-29c38afc1f06","Type":"ContainerDied","Data":"b1b93fd01b02eea710fd9b8d918d758f074ef0058653280e85bc9501058950a8"} Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.261471 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"306fe7eb-e7e7-45c6-be73-29c38afc1f06","Type":"ContainerDied","Data":"996b24ea8c6099e973bb60758f9d8afc650ff3a4879ea678a02fb5ffd0631d28"} Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.261485 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"306fe7eb-e7e7-45c6-be73-29c38afc1f06","Type":"ContainerDied","Data":"956ecc6452ce5783531ad1200b5acefe0016bec0de3d42a8958c4aeef75d19ba"} Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.261505 4972 scope.go:117] "RemoveContainer" containerID="b1b93fd01b02eea710fd9b8d918d758f074ef0058653280e85bc9501058950a8" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.264154 4972 generic.go:334] "Generic (PLEG): container finished" podID="f9513939-1a73-46a3-a946-db9b1008314f" containerID="f7e978a05b49d9a8b55170b6916c286f1ba8d5d193d9fb52b446f78bd3d0ec08" exitCode=0 Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.264210 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-g84kj" event={"ID":"f9513939-1a73-46a3-a946-db9b1008314f","Type":"ContainerDied","Data":"f7e978a05b49d9a8b55170b6916c286f1ba8d5d193d9fb52b446f78bd3d0ec08"} Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.300445 4972 scope.go:117] "RemoveContainer" containerID="996b24ea8c6099e973bb60758f9d8afc650ff3a4879ea678a02fb5ffd0631d28" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.325971 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.338444 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.342033 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 21 10:04:54 crc kubenswrapper[4972]: E1121 10:04:54.342450 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="306fe7eb-e7e7-45c6-be73-29c38afc1f06" containerName="cinder-api-log" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.342472 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="306fe7eb-e7e7-45c6-be73-29c38afc1f06" containerName="cinder-api-log" Nov 21 10:04:54 crc kubenswrapper[4972]: E1121 10:04:54.342494 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c9e220-ca0a-4830-91c2-96c2fcb4d93d" containerName="init" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.342500 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c9e220-ca0a-4830-91c2-96c2fcb4d93d" containerName="init" Nov 21 10:04:54 crc kubenswrapper[4972]: E1121 10:04:54.342513 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c9e220-ca0a-4830-91c2-96c2fcb4d93d" containerName="dnsmasq-dns" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.342520 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c9e220-ca0a-4830-91c2-96c2fcb4d93d" containerName="dnsmasq-dns" Nov 21 10:04:54 crc kubenswrapper[4972]: E1121 10:04:54.342538 4972 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="107f78da-c307-41d6-9491-c4b4e237649a" containerName="barbican-api-log" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.342543 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="107f78da-c307-41d6-9491-c4b4e237649a" containerName="barbican-api-log" Nov 21 10:04:54 crc kubenswrapper[4972]: E1121 10:04:54.342553 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="306fe7eb-e7e7-45c6-be73-29c38afc1f06" containerName="cinder-api" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.342559 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="306fe7eb-e7e7-45c6-be73-29c38afc1f06" containerName="cinder-api" Nov 21 10:04:54 crc kubenswrapper[4972]: E1121 10:04:54.342575 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="107f78da-c307-41d6-9491-c4b4e237649a" containerName="barbican-api" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.342583 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="107f78da-c307-41d6-9491-c4b4e237649a" containerName="barbican-api" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.342738 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="99c9e220-ca0a-4830-91c2-96c2fcb4d93d" containerName="dnsmasq-dns" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.342759 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="306fe7eb-e7e7-45c6-be73-29c38afc1f06" containerName="cinder-api-log" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.342771 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="107f78da-c307-41d6-9491-c4b4e237649a" containerName="barbican-api-log" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.342784 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="107f78da-c307-41d6-9491-c4b4e237649a" containerName="barbican-api" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.342812 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="306fe7eb-e7e7-45c6-be73-29c38afc1f06" containerName="cinder-api" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.344208 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.345984 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.346255 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.346492 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.350811 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.357421 4972 scope.go:117] "RemoveContainer" containerID="b1b93fd01b02eea710fd9b8d918d758f074ef0058653280e85bc9501058950a8" Nov 21 10:04:54 crc kubenswrapper[4972]: E1121 10:04:54.359027 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1b93fd01b02eea710fd9b8d918d758f074ef0058653280e85bc9501058950a8\": container with ID starting with b1b93fd01b02eea710fd9b8d918d758f074ef0058653280e85bc9501058950a8 not found: ID does not exist" containerID="b1b93fd01b02eea710fd9b8d918d758f074ef0058653280e85bc9501058950a8" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.359070 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1b93fd01b02eea710fd9b8d918d758f074ef0058653280e85bc9501058950a8"} err="failed to get container status \"b1b93fd01b02eea710fd9b8d918d758f074ef0058653280e85bc9501058950a8\": rpc error: code = NotFound desc = could not find container \"b1b93fd01b02eea710fd9b8d918d758f074ef0058653280e85bc9501058950a8\": container with ID starting with b1b93fd01b02eea710fd9b8d918d758f074ef0058653280e85bc9501058950a8 not found: ID does not exist" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.359100 4972 scope.go:117] "RemoveContainer" containerID="996b24ea8c6099e973bb60758f9d8afc650ff3a4879ea678a02fb5ffd0631d28" Nov 21 10:04:54 crc kubenswrapper[4972]: E1121 10:04:54.372145 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"996b24ea8c6099e973bb60758f9d8afc650ff3a4879ea678a02fb5ffd0631d28\": container with ID starting with 996b24ea8c6099e973bb60758f9d8afc650ff3a4879ea678a02fb5ffd0631d28 not found: ID does not exist" containerID="996b24ea8c6099e973bb60758f9d8afc650ff3a4879ea678a02fb5ffd0631d28" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.372200 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"996b24ea8c6099e973bb60758f9d8afc650ff3a4879ea678a02fb5ffd0631d28"} err="failed to get container status \"996b24ea8c6099e973bb60758f9d8afc650ff3a4879ea678a02fb5ffd0631d28\": rpc error: code = NotFound desc = could not find container \"996b24ea8c6099e973bb60758f9d8afc650ff3a4879ea678a02fb5ffd0631d28\": container with ID starting with 996b24ea8c6099e973bb60758f9d8afc650ff3a4879ea678a02fb5ffd0631d28 not found: ID does not exist" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.372233 4972 scope.go:117] "RemoveContainer" containerID="b1b93fd01b02eea710fd9b8d918d758f074ef0058653280e85bc9501058950a8" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.372580 4972 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b1b93fd01b02eea710fd9b8d918d758f074ef0058653280e85bc9501058950a8"} err="failed to get container status \"b1b93fd01b02eea710fd9b8d918d758f074ef0058653280e85bc9501058950a8\": rpc error: code = NotFound desc = could not find container \"b1b93fd01b02eea710fd9b8d918d758f074ef0058653280e85bc9501058950a8\": container with ID starting with b1b93fd01b02eea710fd9b8d918d758f074ef0058653280e85bc9501058950a8 not found: ID does not exist" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.372613 4972 scope.go:117] "RemoveContainer" containerID="996b24ea8c6099e973bb60758f9d8afc650ff3a4879ea678a02fb5ffd0631d28" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.372875 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"996b24ea8c6099e973bb60758f9d8afc650ff3a4879ea678a02fb5ffd0631d28"} err="failed to get container status \"996b24ea8c6099e973bb60758f9d8afc650ff3a4879ea678a02fb5ffd0631d28\": rpc error: code = NotFound desc = could not find container \"996b24ea8c6099e973bb60758f9d8afc650ff3a4879ea678a02fb5ffd0631d28\": container with ID starting with 996b24ea8c6099e973bb60758f9d8afc650ff3a4879ea678a02fb5ffd0631d28 not found: ID does not exist" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.449609 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2zk4\" (UniqueName: \"kubernetes.io/projected/dc57ffef-2527-4b16-b281-9139b6a0f1a1-kube-api-access-t2zk4\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.449844 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc57ffef-2527-4b16-b281-9139b6a0f1a1-logs\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.449975 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.450103 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-scripts\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.450190 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-config-data-custom\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.450287 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-config-data\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.450371 4972 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc57ffef-2527-4b16-b281-9139b6a0f1a1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.450471 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.450563 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.552742 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-config-data\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.553239 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc57ffef-2527-4b16-b281-9139b6a0f1a1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.553363 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc57ffef-2527-4b16-b281-9139b6a0f1a1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.553486 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.553621 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.553898 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2zk4\" (UniqueName: \"kubernetes.io/projected/dc57ffef-2527-4b16-b281-9139b6a0f1a1-kube-api-access-t2zk4\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.554407 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc57ffef-2527-4b16-b281-9139b6a0f1a1-logs\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 
10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.554662 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.555250 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc57ffef-2527-4b16-b281-9139b6a0f1a1-logs\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.555378 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-scripts\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.555555 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-config-data-custom\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.558239 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.558524 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-config-data\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.560252 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.560724 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.563380 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-scripts\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.578072 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-config-data-custom\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.580254 4972 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2zk4\" (UniqueName: \"kubernetes.io/projected/dc57ffef-2527-4b16-b281-9139b6a0f1a1-kube-api-access-t2zk4\") pod \"cinder-api-0\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " pod="openstack/cinder-api-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.653782 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 21 10:04:54 crc kubenswrapper[4972]: I1121 10:04:54.667491 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 21 10:04:55 crc kubenswrapper[4972]: W1121 10:04:55.087492 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc57ffef_2527_4b16_b281_9139b6a0f1a1.slice/crio-1af864544aa808fbae91493338a5ae16d87eccc8d300a4b2025420de1726dea1 WatchSource:0}: Error finding container 1af864544aa808fbae91493338a5ae16d87eccc8d300a4b2025420de1726dea1: Status 404 returned error can't find the container with id 1af864544aa808fbae91493338a5ae16d87eccc8d300a4b2025420de1726dea1 Nov 21 10:04:55 crc kubenswrapper[4972]: I1121 10:04:55.089951 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 21 10:04:55 crc kubenswrapper[4972]: I1121 10:04:55.280082 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"dc57ffef-2527-4b16-b281-9139b6a0f1a1","Type":"ContainerStarted","Data":"1af864544aa808fbae91493338a5ae16d87eccc8d300a4b2025420de1726dea1"} Nov 21 10:04:55 crc kubenswrapper[4972]: I1121 10:04:55.772551 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="306fe7eb-e7e7-45c6-be73-29c38afc1f06" path="/var/lib/kubelet/pods/306fe7eb-e7e7-45c6-be73-29c38afc1f06/volumes" Nov 21 10:04:55 crc kubenswrapper[4972]: I1121 10:04:55.876207 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-g84kj" Nov 21 10:04:55 crc kubenswrapper[4972]: I1121 10:04:55.894718 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9513939-1a73-46a3-a946-db9b1008314f-db-sync-config-data\") pod \"f9513939-1a73-46a3-a946-db9b1008314f\" (UID: \"f9513939-1a73-46a3-a946-db9b1008314f\") " Nov 21 10:04:55 crc kubenswrapper[4972]: I1121 10:04:55.894796 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9513939-1a73-46a3-a946-db9b1008314f-config-data\") pod \"f9513939-1a73-46a3-a946-db9b1008314f\" (UID: \"f9513939-1a73-46a3-a946-db9b1008314f\") " Nov 21 10:04:55 crc kubenswrapper[4972]: I1121 10:04:55.894840 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9513939-1a73-46a3-a946-db9b1008314f-combined-ca-bundle\") pod \"f9513939-1a73-46a3-a946-db9b1008314f\" (UID: \"f9513939-1a73-46a3-a946-db9b1008314f\") " Nov 21 10:04:55 crc kubenswrapper[4972]: I1121 10:04:55.894860 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9j8v\" (UniqueName: \"kubernetes.io/projected/f9513939-1a73-46a3-a946-db9b1008314f-kube-api-access-k9j8v\") pod \"f9513939-1a73-46a3-a946-db9b1008314f\" (UID: \"f9513939-1a73-46a3-a946-db9b1008314f\") " Nov 21 10:04:55 crc kubenswrapper[4972]: I1121 10:04:55.901705 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9513939-1a73-46a3-a946-db9b1008314f-kube-api-access-k9j8v" (OuterVolumeSpecName: "kube-api-access-k9j8v") pod "f9513939-1a73-46a3-a946-db9b1008314f" (UID: "f9513939-1a73-46a3-a946-db9b1008314f"). InnerVolumeSpecName "kube-api-access-k9j8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:04:55 crc kubenswrapper[4972]: I1121 10:04:55.909223 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9513939-1a73-46a3-a946-db9b1008314f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f9513939-1a73-46a3-a946-db9b1008314f" (UID: "f9513939-1a73-46a3-a946-db9b1008314f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:55 crc kubenswrapper[4972]: I1121 10:04:55.921741 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9513939-1a73-46a3-a946-db9b1008314f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9513939-1a73-46a3-a946-db9b1008314f" (UID: "f9513939-1a73-46a3-a946-db9b1008314f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:55 crc kubenswrapper[4972]: I1121 10:04:55.960137 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9513939-1a73-46a3-a946-db9b1008314f-config-data" (OuterVolumeSpecName: "config-data") pod "f9513939-1a73-46a3-a946-db9b1008314f" (UID: "f9513939-1a73-46a3-a946-db9b1008314f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:04:55 crc kubenswrapper[4972]: I1121 10:04:55.996552 4972 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9513939-1a73-46a3-a946-db9b1008314f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:55 crc kubenswrapper[4972]: I1121 10:04:55.996971 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9513939-1a73-46a3-a946-db9b1008314f-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:55 crc kubenswrapper[4972]: I1121 10:04:55.997108 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9513939-1a73-46a3-a946-db9b1008314f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:55 crc kubenswrapper[4972]: I1121 10:04:55.997184 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9j8v\" (UniqueName: \"kubernetes.io/projected/f9513939-1a73-46a3-a946-db9b1008314f-kube-api-access-k9j8v\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:56 crc kubenswrapper[4972]: I1121 10:04:56.294250 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"dc57ffef-2527-4b16-b281-9139b6a0f1a1","Type":"ContainerStarted","Data":"ff4509a52935ca39544f656f7b2bbdfab72d26e1ceca8275ec6a319273e973ad"} Nov 21 10:04:56 crc kubenswrapper[4972]: I1121 10:04:56.310995 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-g84kj" event={"ID":"f9513939-1a73-46a3-a946-db9b1008314f","Type":"ContainerDied","Data":"77df3e1714bd52ece3dd29b5953f4b06a864d04998a3188d3ab6b54a1f15df93"} Nov 21 10:04:56 crc kubenswrapper[4972]: I1121 10:04:56.311394 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77df3e1714bd52ece3dd29b5953f4b06a864d04998a3188d3ab6b54a1f15df93" Nov 21 10:04:56 crc kubenswrapper[4972]: I1121 10:04:56.312056 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-g84kj" Nov 21 10:04:56 crc kubenswrapper[4972]: I1121 10:04:56.810543 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586689c4f9-dx6rb"] Nov 21 10:04:56 crc kubenswrapper[4972]: I1121 10:04:56.811101 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" podUID="06293605-a389-4f8d-a217-7007fd7c6ade" containerName="dnsmasq-dns" containerID="cri-o://413b760fe413b5991acf15f38783e9894aa4c315734c0b78b5a77c6e03726693" gracePeriod=10 Nov 21 10:04:56 crc kubenswrapper[4972]: I1121 10:04:56.814034 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:56 crc kubenswrapper[4972]: I1121 10:04:56.931896 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7445585cd9-xnz5q"] Nov 21 10:04:56 crc kubenswrapper[4972]: E1121 10:04:56.932371 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9513939-1a73-46a3-a946-db9b1008314f" containerName="glance-db-sync" Nov 21 10:04:56 crc kubenswrapper[4972]: I1121 10:04:56.932383 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9513939-1a73-46a3-a946-db9b1008314f" containerName="glance-db-sync" Nov 21 10:04:56 crc kubenswrapper[4972]: I1121 10:04:56.932610 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9513939-1a73-46a3-a946-db9b1008314f" containerName="glance-db-sync" Nov 21 10:04:56 crc kubenswrapper[4972]: I1121 10:04:56.933731 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:04:56 crc kubenswrapper[4972]: I1121 10:04:56.969507 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7445585cd9-xnz5q"] Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.123434 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-dns-svc\") pod \"dnsmasq-dns-7445585cd9-xnz5q\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.123497 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-ovsdbserver-nb\") pod \"dnsmasq-dns-7445585cd9-xnz5q\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.123520 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pj8d\" (UniqueName: \"kubernetes.io/projected/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-kube-api-access-9pj8d\") pod \"dnsmasq-dns-7445585cd9-xnz5q\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.123582 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-config\") pod \"dnsmasq-dns-7445585cd9-xnz5q\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.123609 
4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-dns-swift-storage-0\") pod \"dnsmasq-dns-7445585cd9-xnz5q\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.123642 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-ovsdbserver-sb\") pod \"dnsmasq-dns-7445585cd9-xnz5q\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.224974 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-dns-svc\") pod \"dnsmasq-dns-7445585cd9-xnz5q\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.225380 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-ovsdbserver-nb\") pod \"dnsmasq-dns-7445585cd9-xnz5q\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.225455 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pj8d\" (UniqueName: \"kubernetes.io/projected/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-kube-api-access-9pj8d\") pod \"dnsmasq-dns-7445585cd9-xnz5q\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.225774 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-config\") pod \"dnsmasq-dns-7445585cd9-xnz5q\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.225926 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-dns-swift-storage-0\") pod \"dnsmasq-dns-7445585cd9-xnz5q\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.225972 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-ovsdbserver-sb\") pod \"dnsmasq-dns-7445585cd9-xnz5q\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.226974 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-dns-swift-storage-0\") pod \"dnsmasq-dns-7445585cd9-xnz5q\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.227029 4972 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-config\") pod \"dnsmasq-dns-7445585cd9-xnz5q\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.227091 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-dns-svc\") pod \"dnsmasq-dns-7445585cd9-xnz5q\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.227321 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-ovsdbserver-sb\") pod \"dnsmasq-dns-7445585cd9-xnz5q\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.227855 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-ovsdbserver-nb\") pod \"dnsmasq-dns-7445585cd9-xnz5q\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.251587 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pj8d\" (UniqueName: \"kubernetes.io/projected/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-kube-api-access-9pj8d\") pod \"dnsmasq-dns-7445585cd9-xnz5q\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.352637 4972 generic.go:334] "Generic (PLEG): container finished" podID="06293605-a389-4f8d-a217-7007fd7c6ade" containerID="413b760fe413b5991acf15f38783e9894aa4c315734c0b78b5a77c6e03726693" exitCode=0 Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.352711 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" event={"ID":"06293605-a389-4f8d-a217-7007fd7c6ade","Type":"ContainerDied","Data":"413b760fe413b5991acf15f38783e9894aa4c315734c0b78b5a77c6e03726693"} Nov 21 10:04:57 crc kubenswrapper[4972]: E1121 10:04:57.366761 4972 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06293605_a389_4f8d_a217_7007fd7c6ade.slice/crio-conmon-413b760fe413b5991acf15f38783e9894aa4c315734c0b78b5a77c6e03726693.scope\": RecentStats: unable to find data in memory cache]" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.368586 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"dc57ffef-2527-4b16-b281-9139b6a0f1a1","Type":"ContainerStarted","Data":"49d40728e956c86ac5f50feea6dabb003f5b7e12b18b22782321e4bdfa6a4d07"} Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.369776 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.405451 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.405431617 podStartE2EDuration="3.405431617s" podCreationTimestamp="2025-11-21 10:04:54 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:04:57.404156023 +0000 UTC m=+1442.513298531" watchObservedRunningTime="2025-11-21 10:04:57.405431617 +0000 UTC m=+1442.514574115" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.452488 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.585801 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.644754 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-config\") pod \"06293605-a389-4f8d-a217-7007fd7c6ade\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.644797 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-ovsdbserver-nb\") pod \"06293605-a389-4f8d-a217-7007fd7c6ade\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.644818 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-dns-svc\") pod \"06293605-a389-4f8d-a217-7007fd7c6ade\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.644858 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdkz4\" (UniqueName: \"kubernetes.io/projected/06293605-a389-4f8d-a217-7007fd7c6ade-kube-api-access-fdkz4\") pod \"06293605-a389-4f8d-a217-7007fd7c6ade\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.644913 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-dns-swift-storage-0\") pod \"06293605-a389-4f8d-a217-7007fd7c6ade\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.644954 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-ovsdbserver-sb\") pod \"06293605-a389-4f8d-a217-7007fd7c6ade\" (UID: \"06293605-a389-4f8d-a217-7007fd7c6ade\") " Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.668351 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 10:04:57 crc kubenswrapper[4972]: E1121 10:04:57.669334 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06293605-a389-4f8d-a217-7007fd7c6ade" containerName="dnsmasq-dns" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.669350 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="06293605-a389-4f8d-a217-7007fd7c6ade" containerName="dnsmasq-dns" Nov 21 10:04:57 crc kubenswrapper[4972]: E1121 10:04:57.669410 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06293605-a389-4f8d-a217-7007fd7c6ade" containerName="init" Nov 21 10:04:57 crc kubenswrapper[4972]: 
I1121 10:04:57.669419 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="06293605-a389-4f8d-a217-7007fd7c6ade" containerName="init" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.669800 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="06293605-a389-4f8d-a217-7007fd7c6ade" containerName="dnsmasq-dns" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.670455 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06293605-a389-4f8d-a217-7007fd7c6ade-kube-api-access-fdkz4" (OuterVolumeSpecName: "kube-api-access-fdkz4") pod "06293605-a389-4f8d-a217-7007fd7c6ade" (UID: "06293605-a389-4f8d-a217-7007fd7c6ade"). InnerVolumeSpecName "kube-api-access-fdkz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.671035 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.680508 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.685203 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-56jfv" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.685474 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.685582 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.746032 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-config" (OuterVolumeSpecName: "config") pod "06293605-a389-4f8d-a217-7007fd7c6ade" (UID: "06293605-a389-4f8d-a217-7007fd7c6ade"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.747350 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.747366 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdkz4\" (UniqueName: \"kubernetes.io/projected/06293605-a389-4f8d-a217-7007fd7c6ade-kube-api-access-fdkz4\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.770810 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "06293605-a389-4f8d-a217-7007fd7c6ade" (UID: "06293605-a389-4f8d-a217-7007fd7c6ade"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.782972 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "06293605-a389-4f8d-a217-7007fd7c6ade" (UID: "06293605-a389-4f8d-a217-7007fd7c6ade"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.788972 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "06293605-a389-4f8d-a217-7007fd7c6ade" (UID: "06293605-a389-4f8d-a217-7007fd7c6ade"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.793726 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "06293605-a389-4f8d-a217-7007fd7c6ade" (UID: "06293605-a389-4f8d-a217-7007fd7c6ade"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.825228 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.849037 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/824d6f85-3763-4c81-a329-2f9e68fd8cda-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.849109 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/824d6f85-3763-4c81-a329-2f9e68fd8cda-scripts\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.849166 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/824d6f85-3763-4c81-a329-2f9e68fd8cda-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.849222 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlpks\" (UniqueName: \"kubernetes.io/projected/824d6f85-3763-4c81-a329-2f9e68fd8cda-kube-api-access-dlpks\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.849266 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/824d6f85-3763-4c81-a329-2f9e68fd8cda-config-data\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.849300 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/824d6f85-3763-4c81-a329-2f9e68fd8cda-logs\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc 
kubenswrapper[4972]: I1121 10:04:57.849421 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.849489 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.849507 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.849521 4972 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.849533 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06293605-a389-4f8d-a217-7007fd7c6ade-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.950736 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.950819 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/824d6f85-3763-4c81-a329-2f9e68fd8cda-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.950940 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/824d6f85-3763-4c81-a329-2f9e68fd8cda-scripts\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.951109 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/824d6f85-3763-4c81-a329-2f9e68fd8cda-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.951196 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlpks\" (UniqueName: \"kubernetes.io/projected/824d6f85-3763-4c81-a329-2f9e68fd8cda-kube-api-access-dlpks\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.951205 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.952790 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/824d6f85-3763-4c81-a329-2f9e68fd8cda-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.952948 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/824d6f85-3763-4c81-a329-2f9e68fd8cda-config-data\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.953005 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/824d6f85-3763-4c81-a329-2f9e68fd8cda-logs\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.953572 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/824d6f85-3763-4c81-a329-2f9e68fd8cda-logs\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.955070 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/824d6f85-3763-4c81-a329-2f9e68fd8cda-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.961084 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/824d6f85-3763-4c81-a329-2f9e68fd8cda-scripts\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.962275 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/824d6f85-3763-4c81-a329-2f9e68fd8cda-config-data\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.983565 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlpks\" (UniqueName: \"kubernetes.io/projected/824d6f85-3763-4c81-a329-2f9e68fd8cda-kube-api-access-dlpks\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " pod="openstack/glance-default-external-api-0" Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.991177 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7445585cd9-xnz5q"] Nov 21 10:04:57 crc kubenswrapper[4972]: I1121 10:04:57.995991 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " pod="openstack/glance-default-external-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: W1121 10:04:58.001033 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42fc5873_eac2_406f_b1a0_0cc24e3a0a4d.slice/crio-cf264e1e86e03cbcbfbd14a27614dab0c99711155388af5b1e96725188c34847 WatchSource:0}: Error finding container cf264e1e86e03cbcbfbd14a27614dab0c99711155388af5b1e96725188c34847: Status 404 returned error can't find the container with id cf264e1e86e03cbcbfbd14a27614dab0c99711155388af5b1e96725188c34847 Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.042953 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.044383 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.047361 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.107474 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.140334 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.156703 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.156788 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aee9ed59-e2a7-4d33-be50-85088b8195c4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.156862 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee9ed59-e2a7-4d33-be50-85088b8195c4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.156912 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aee9ed59-e2a7-4d33-be50-85088b8195c4-logs\") pod \"glance-default-internal-api-0\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.156936 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aee9ed59-e2a7-4d33-be50-85088b8195c4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.156982 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aee9ed59-e2a7-4d33-be50-85088b8195c4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.157015 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdxfk\" (UniqueName: \"kubernetes.io/projected/aee9ed59-e2a7-4d33-be50-85088b8195c4-kube-api-access-sdxfk\") pod \"glance-default-internal-api-0\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.259092 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.259149 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aee9ed59-e2a7-4d33-be50-85088b8195c4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.259187 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee9ed59-e2a7-4d33-be50-85088b8195c4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.259216 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aee9ed59-e2a7-4d33-be50-85088b8195c4-logs\") pod \"glance-default-internal-api-0\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.259237 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aee9ed59-e2a7-4d33-be50-85088b8195c4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.259273 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aee9ed59-e2a7-4d33-be50-85088b8195c4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.259296 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdxfk\" (UniqueName: \"kubernetes.io/projected/aee9ed59-e2a7-4d33-be50-85088b8195c4-kube-api-access-sdxfk\") pod \"glance-default-internal-api-0\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " 
pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.260041 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.261443 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aee9ed59-e2a7-4d33-be50-85088b8195c4-logs\") pod \"glance-default-internal-api-0\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.261568 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aee9ed59-e2a7-4d33-be50-85088b8195c4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.265373 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aee9ed59-e2a7-4d33-be50-85088b8195c4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.270543 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aee9ed59-e2a7-4d33-be50-85088b8195c4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.277571 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee9ed59-e2a7-4d33-be50-85088b8195c4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.280032 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdxfk\" (UniqueName: \"kubernetes.io/projected/aee9ed59-e2a7-4d33-be50-85088b8195c4-kube-api-access-sdxfk\") pod \"glance-default-internal-api-0\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.301066 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.380746 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" event={"ID":"06293605-a389-4f8d-a217-7007fd7c6ade","Type":"ContainerDied","Data":"a993f8bc920bdb5ffec88aa20245575c78195b487a3c18b3da3c9ea3b93cb6b4"} Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.380796 4972 scope.go:117] "RemoveContainer" 
containerID="413b760fe413b5991acf15f38783e9894aa4c315734c0b78b5a77c6e03726693" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.380857 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586689c4f9-dx6rb" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.383086 4972 generic.go:334] "Generic (PLEG): container finished" podID="42fc5873-eac2-406f-b1a0-0cc24e3a0a4d" containerID="9d3710f07760bd2cee3b96643dd67f5dbb849b665129b2c9e4bc027efb560d5e" exitCode=0 Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.385293 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" event={"ID":"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d","Type":"ContainerDied","Data":"9d3710f07760bd2cee3b96643dd67f5dbb849b665129b2c9e4bc027efb560d5e"} Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.385332 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" event={"ID":"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d","Type":"ContainerStarted","Data":"cf264e1e86e03cbcbfbd14a27614dab0c99711155388af5b1e96725188c34847"} Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.413437 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.429871 4972 scope.go:117] "RemoveContainer" containerID="bab8c2730c4d93cb8829d31f9bb2123b52b26288d3ff5218adfa2edf501f3037" Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.441745 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586689c4f9-dx6rb"] Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.454615 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586689c4f9-dx6rb"] Nov 21 10:04:58 crc kubenswrapper[4972]: I1121 10:04:58.722556 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 10:04:59 crc kubenswrapper[4972]: I1121 10:04:59.092231 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 10:04:59 crc kubenswrapper[4972]: W1121 10:04:59.130678 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaee9ed59_e2a7_4d33_be50_85088b8195c4.slice/crio-d88103a7e27c4927484f1e67450612d3e43295ac5fb6d59a27eb63db02d19de3 WatchSource:0}: Error finding container d88103a7e27c4927484f1e67450612d3e43295ac5fb6d59a27eb63db02d19de3: Status 404 returned error can't find the container with id d88103a7e27c4927484f1e67450612d3e43295ac5fb6d59a27eb63db02d19de3 Nov 21 10:04:59 crc kubenswrapper[4972]: I1121 10:04:59.400043 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"824d6f85-3763-4c81-a329-2f9e68fd8cda","Type":"ContainerStarted","Data":"43b1330d284ae0743402c1177dffcd16d9ff3ee67e7ce4265b6105e3aed2db95"} Nov 21 10:04:59 crc kubenswrapper[4972]: I1121 10:04:59.400304 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"824d6f85-3763-4c81-a329-2f9e68fd8cda","Type":"ContainerStarted","Data":"47fab3421c04703e18b037256d53b4d9c012bdf0bc7760897dd8e7113f8b8dad"} Nov 21 10:04:59 crc kubenswrapper[4972]: I1121 10:04:59.405624 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" 
event={"ID":"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d","Type":"ContainerStarted","Data":"cc5149ceea467def3db6c97647fb317342320fd556fac1c46df2f3900d6be325"} Nov 21 10:04:59 crc kubenswrapper[4972]: I1121 10:04:59.406690 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:04:59 crc kubenswrapper[4972]: I1121 10:04:59.412414 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aee9ed59-e2a7-4d33-be50-85088b8195c4","Type":"ContainerStarted","Data":"d88103a7e27c4927484f1e67450612d3e43295ac5fb6d59a27eb63db02d19de3"} Nov 21 10:04:59 crc kubenswrapper[4972]: I1121 10:04:59.427805 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" podStartSLOduration=3.427768553 podStartE2EDuration="3.427768553s" podCreationTimestamp="2025-11-21 10:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:04:59.422658806 +0000 UTC m=+1444.531801314" watchObservedRunningTime="2025-11-21 10:04:59.427768553 +0000 UTC m=+1444.536911071" Nov 21 10:04:59 crc kubenswrapper[4972]: I1121 10:04:59.771597 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06293605-a389-4f8d-a217-7007fd7c6ade" path="/var/lib/kubelet/pods/06293605-a389-4f8d-a217-7007fd7c6ade/volumes" Nov 21 10:04:59 crc kubenswrapper[4972]: I1121 10:04:59.934188 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 21 10:04:59 crc kubenswrapper[4972]: I1121 10:04:59.981786 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 10:05:00 crc kubenswrapper[4972]: I1121 10:05:00.095066 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 10:05:00 crc kubenswrapper[4972]: I1121 10:05:00.158655 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 10:05:00 crc kubenswrapper[4972]: I1121 10:05:00.219460 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:05:00 crc kubenswrapper[4972]: I1121 10:05:00.230505 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:05:00 crc kubenswrapper[4972]: I1121 10:05:00.425804 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aee9ed59-e2a7-4d33-be50-85088b8195c4","Type":"ContainerStarted","Data":"8d0ab55a2a4e0ca82655f59932725a058d963f39f21f77859cc6ae5eed166d3f"} Nov 21 10:05:00 crc kubenswrapper[4972]: I1121 10:05:00.427909 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"824d6f85-3763-4c81-a329-2f9e68fd8cda","Type":"ContainerStarted","Data":"f4cb519261605160c507524b208fff203fba30c13bd933725b8e2b73fdb258c2"} Nov 21 10:05:00 crc kubenswrapper[4972]: I1121 10:05:00.428124 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="824d6f85-3763-4c81-a329-2f9e68fd8cda" containerName="glance-log" containerID="cri-o://43b1330d284ae0743402c1177dffcd16d9ff3ee67e7ce4265b6105e3aed2db95" gracePeriod=30 Nov 21 10:05:00 crc kubenswrapper[4972]: I1121 10:05:00.428752 4972 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="824d6f85-3763-4c81-a329-2f9e68fd8cda" containerName="glance-httpd" containerID="cri-o://f4cb519261605160c507524b208fff203fba30c13bd933725b8e2b73fdb258c2" gracePeriod=30 Nov 21 10:05:00 crc kubenswrapper[4972]: I1121 10:05:00.428939 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="31088e5c-be9d-4551-8ed2-6c4fc1c71ee9" containerName="cinder-scheduler" containerID="cri-o://4e680d6cdc146edd49c7480a589768447776d27d89950c1f345f0a8f44c5bb69" gracePeriod=30 Nov 21 10:05:00 crc kubenswrapper[4972]: I1121 10:05:00.430024 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="31088e5c-be9d-4551-8ed2-6c4fc1c71ee9" containerName="probe" containerID="cri-o://30c1c69d07ce7356438bfb3dbd1483f98dec1c2a4432b1db1e595f58ceda5ecf" gracePeriod=30 Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.352925 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.435010 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/824d6f85-3763-4c81-a329-2f9e68fd8cda-combined-ca-bundle\") pod \"824d6f85-3763-4c81-a329-2f9e68fd8cda\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.435063 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlpks\" (UniqueName: \"kubernetes.io/projected/824d6f85-3763-4c81-a329-2f9e68fd8cda-kube-api-access-dlpks\") pod \"824d6f85-3763-4c81-a329-2f9e68fd8cda\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.435133 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/824d6f85-3763-4c81-a329-2f9e68fd8cda-logs\") pod \"824d6f85-3763-4c81-a329-2f9e68fd8cda\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.435222 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/824d6f85-3763-4c81-a329-2f9e68fd8cda-config-data\") pod \"824d6f85-3763-4c81-a329-2f9e68fd8cda\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.435250 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"824d6f85-3763-4c81-a329-2f9e68fd8cda\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.435298 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/824d6f85-3763-4c81-a329-2f9e68fd8cda-httpd-run\") pod \"824d6f85-3763-4c81-a329-2f9e68fd8cda\" (UID: \"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.435368 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/824d6f85-3763-4c81-a329-2f9e68fd8cda-scripts\") pod \"824d6f85-3763-4c81-a329-2f9e68fd8cda\" (UID: 
\"824d6f85-3763-4c81-a329-2f9e68fd8cda\") " Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.435767 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/824d6f85-3763-4c81-a329-2f9e68fd8cda-logs" (OuterVolumeSpecName: "logs") pod "824d6f85-3763-4c81-a329-2f9e68fd8cda" (UID: "824d6f85-3763-4c81-a329-2f9e68fd8cda"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.436071 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/824d6f85-3763-4c81-a329-2f9e68fd8cda-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.436562 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/824d6f85-3763-4c81-a329-2f9e68fd8cda-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "824d6f85-3763-4c81-a329-2f9e68fd8cda" (UID: "824d6f85-3763-4c81-a329-2f9e68fd8cda"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.438388 4972 generic.go:334] "Generic (PLEG): container finished" podID="31088e5c-be9d-4551-8ed2-6c4fc1c71ee9" containerID="30c1c69d07ce7356438bfb3dbd1483f98dec1c2a4432b1db1e595f58ceda5ecf" exitCode=0 Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.438544 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9","Type":"ContainerDied","Data":"30c1c69d07ce7356438bfb3dbd1483f98dec1c2a4432b1db1e595f58ceda5ecf"} Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.440412 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "824d6f85-3763-4c81-a329-2f9e68fd8cda" (UID: "824d6f85-3763-4c81-a329-2f9e68fd8cda"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.440896 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/824d6f85-3763-4c81-a329-2f9e68fd8cda-kube-api-access-dlpks" (OuterVolumeSpecName: "kube-api-access-dlpks") pod "824d6f85-3763-4c81-a329-2f9e68fd8cda" (UID: "824d6f85-3763-4c81-a329-2f9e68fd8cda"). InnerVolumeSpecName "kube-api-access-dlpks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.441181 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aee9ed59-e2a7-4d33-be50-85088b8195c4","Type":"ContainerStarted","Data":"8610589840c704f4f708faf03e781d43f892b6636640af57f14e35648ccb8ce3"} Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.441376 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="aee9ed59-e2a7-4d33-be50-85088b8195c4" containerName="glance-log" containerID="cri-o://8d0ab55a2a4e0ca82655f59932725a058d963f39f21f77859cc6ae5eed166d3f" gracePeriod=30 Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.441454 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/824d6f85-3763-4c81-a329-2f9e68fd8cda-scripts" (OuterVolumeSpecName: "scripts") pod "824d6f85-3763-4c81-a329-2f9e68fd8cda" (UID: "824d6f85-3763-4c81-a329-2f9e68fd8cda"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.441578 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="aee9ed59-e2a7-4d33-be50-85088b8195c4" containerName="glance-httpd" containerID="cri-o://8610589840c704f4f708faf03e781d43f892b6636640af57f14e35648ccb8ce3" gracePeriod=30 Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.445415 4972 generic.go:334] "Generic (PLEG): container finished" podID="824d6f85-3763-4c81-a329-2f9e68fd8cda" containerID="f4cb519261605160c507524b208fff203fba30c13bd933725b8e2b73fdb258c2" exitCode=0 Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.445474 4972 generic.go:334] "Generic (PLEG): container finished" podID="824d6f85-3763-4c81-a329-2f9e68fd8cda" containerID="43b1330d284ae0743402c1177dffcd16d9ff3ee67e7ce4265b6105e3aed2db95" exitCode=143 Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.446524 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.446717 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"824d6f85-3763-4c81-a329-2f9e68fd8cda","Type":"ContainerDied","Data":"f4cb519261605160c507524b208fff203fba30c13bd933725b8e2b73fdb258c2"} Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.446749 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"824d6f85-3763-4c81-a329-2f9e68fd8cda","Type":"ContainerDied","Data":"43b1330d284ae0743402c1177dffcd16d9ff3ee67e7ce4265b6105e3aed2db95"} Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.446765 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"824d6f85-3763-4c81-a329-2f9e68fd8cda","Type":"ContainerDied","Data":"47fab3421c04703e18b037256d53b4d9c012bdf0bc7760897dd8e7113f8b8dad"} Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.446782 4972 scope.go:117] "RemoveContainer" containerID="f4cb519261605160c507524b208fff203fba30c13bd933725b8e2b73fdb258c2" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.466513 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/824d6f85-3763-4c81-a329-2f9e68fd8cda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "824d6f85-3763-4c81-a329-2f9e68fd8cda" (UID: "824d6f85-3763-4c81-a329-2f9e68fd8cda"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.469325 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.469301552 podStartE2EDuration="4.469301552s" podCreationTimestamp="2025-11-21 10:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:05:01.463690652 +0000 UTC m=+1446.572833160" watchObservedRunningTime="2025-11-21 10:05:01.469301552 +0000 UTC m=+1446.578444060" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.500254 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/824d6f85-3763-4c81-a329-2f9e68fd8cda-config-data" (OuterVolumeSpecName: "config-data") pod "824d6f85-3763-4c81-a329-2f9e68fd8cda" (UID: "824d6f85-3763-4c81-a329-2f9e68fd8cda"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.512153 4972 scope.go:117] "RemoveContainer" containerID="43b1330d284ae0743402c1177dffcd16d9ff3ee67e7ce4265b6105e3aed2db95" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.530510 4972 scope.go:117] "RemoveContainer" containerID="f4cb519261605160c507524b208fff203fba30c13bd933725b8e2b73fdb258c2" Nov 21 10:05:01 crc kubenswrapper[4972]: E1121 10:05:01.533309 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4cb519261605160c507524b208fff203fba30c13bd933725b8e2b73fdb258c2\": container with ID starting with f4cb519261605160c507524b208fff203fba30c13bd933725b8e2b73fdb258c2 not found: ID does not exist" containerID="f4cb519261605160c507524b208fff203fba30c13bd933725b8e2b73fdb258c2" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.533367 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4cb519261605160c507524b208fff203fba30c13bd933725b8e2b73fdb258c2"} err="failed to get container status \"f4cb519261605160c507524b208fff203fba30c13bd933725b8e2b73fdb258c2\": rpc error: code = NotFound desc = could not find container \"f4cb519261605160c507524b208fff203fba30c13bd933725b8e2b73fdb258c2\": container with ID starting with f4cb519261605160c507524b208fff203fba30c13bd933725b8e2b73fdb258c2 not found: ID does not exist" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.533401 4972 scope.go:117] "RemoveContainer" containerID="43b1330d284ae0743402c1177dffcd16d9ff3ee67e7ce4265b6105e3aed2db95" Nov 21 10:05:01 crc kubenswrapper[4972]: E1121 10:05:01.533736 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43b1330d284ae0743402c1177dffcd16d9ff3ee67e7ce4265b6105e3aed2db95\": container with ID starting with 43b1330d284ae0743402c1177dffcd16d9ff3ee67e7ce4265b6105e3aed2db95 not found: ID does not exist" containerID="43b1330d284ae0743402c1177dffcd16d9ff3ee67e7ce4265b6105e3aed2db95" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.533784 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43b1330d284ae0743402c1177dffcd16d9ff3ee67e7ce4265b6105e3aed2db95"} err="failed to get container status \"43b1330d284ae0743402c1177dffcd16d9ff3ee67e7ce4265b6105e3aed2db95\": rpc error: code = NotFound desc = could not find container \"43b1330d284ae0743402c1177dffcd16d9ff3ee67e7ce4265b6105e3aed2db95\": container with ID starting with 43b1330d284ae0743402c1177dffcd16d9ff3ee67e7ce4265b6105e3aed2db95 not found: ID does not exist" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.533811 4972 scope.go:117] "RemoveContainer" containerID="f4cb519261605160c507524b208fff203fba30c13bd933725b8e2b73fdb258c2" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.534106 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4cb519261605160c507524b208fff203fba30c13bd933725b8e2b73fdb258c2"} err="failed to get container status \"f4cb519261605160c507524b208fff203fba30c13bd933725b8e2b73fdb258c2\": rpc error: code = NotFound desc = could not find container \"f4cb519261605160c507524b208fff203fba30c13bd933725b8e2b73fdb258c2\": container with ID starting with f4cb519261605160c507524b208fff203fba30c13bd933725b8e2b73fdb258c2 not found: ID does not exist" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.534131 4972 
scope.go:117] "RemoveContainer" containerID="43b1330d284ae0743402c1177dffcd16d9ff3ee67e7ce4265b6105e3aed2db95" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.534343 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43b1330d284ae0743402c1177dffcd16d9ff3ee67e7ce4265b6105e3aed2db95"} err="failed to get container status \"43b1330d284ae0743402c1177dffcd16d9ff3ee67e7ce4265b6105e3aed2db95\": rpc error: code = NotFound desc = could not find container \"43b1330d284ae0743402c1177dffcd16d9ff3ee67e7ce4265b6105e3aed2db95\": container with ID starting with 43b1330d284ae0743402c1177dffcd16d9ff3ee67e7ce4265b6105e3aed2db95 not found: ID does not exist" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.537649 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/824d6f85-3763-4c81-a329-2f9e68fd8cda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.537670 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlpks\" (UniqueName: \"kubernetes.io/projected/824d6f85-3763-4c81-a329-2f9e68fd8cda-kube-api-access-dlpks\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.537679 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/824d6f85-3763-4c81-a329-2f9e68fd8cda-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.537697 4972 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.537707 4972 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/824d6f85-3763-4c81-a329-2f9e68fd8cda-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.537715 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/824d6f85-3763-4c81-a329-2f9e68fd8cda-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.565357 4972 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.639685 4972 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.790472 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.799663 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.808319 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 10:05:01 crc kubenswrapper[4972]: E1121 10:05:01.808941 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="824d6f85-3763-4c81-a329-2f9e68fd8cda" containerName="glance-log" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.809063 4972 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="824d6f85-3763-4c81-a329-2f9e68fd8cda" containerName="glance-log" Nov 21 10:05:01 crc kubenswrapper[4972]: E1121 10:05:01.809167 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="824d6f85-3763-4c81-a329-2f9e68fd8cda" containerName="glance-httpd" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.809243 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="824d6f85-3763-4c81-a329-2f9e68fd8cda" containerName="glance-httpd" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.809531 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="824d6f85-3763-4c81-a329-2f9e68fd8cda" containerName="glance-log" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.809625 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="824d6f85-3763-4c81-a329-2f9e68fd8cda" containerName="glance-httpd" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.812641 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.816940 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.820312 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.822470 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.944380 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-config-data\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.944449 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.944504 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-scripts\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.944539 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ceb104d-0967-4ef1-87d7-23149492461f-logs\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.944573 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ceb104d-0967-4ef1-87d7-23149492461f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " 
pod="openstack/glance-default-external-api-0" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.944612 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.944674 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp74n\" (UniqueName: \"kubernetes.io/projected/4ceb104d-0967-4ef1-87d7-23149492461f-kube-api-access-gp74n\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:01 crc kubenswrapper[4972]: I1121 10:05:01.944705 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.050532 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-scripts\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.050661 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ceb104d-0967-4ef1-87d7-23149492461f-logs\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.050804 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ceb104d-0967-4ef1-87d7-23149492461f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.050912 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.051084 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp74n\" (UniqueName: \"kubernetes.io/projected/4ceb104d-0967-4ef1-87d7-23149492461f-kube-api-access-gp74n\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.051170 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " 
pod="openstack/glance-default-external-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.051391 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-config-data\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.051467 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.053050 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.054076 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ceb104d-0967-4ef1-87d7-23149492461f-logs\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.054887 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ceb104d-0967-4ef1-87d7-23149492461f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.063958 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.064495 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-scripts\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.065148 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.066183 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-config-data\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.073613 4972 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp74n\" (UniqueName: \"kubernetes.io/projected/4ceb104d-0967-4ef1-87d7-23149492461f-kube-api-access-gp74n\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.094763 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.135967 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.192449 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.254051 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aee9ed59-e2a7-4d33-be50-85088b8195c4-config-data\") pod \"aee9ed59-e2a7-4d33-be50-85088b8195c4\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.254113 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aee9ed59-e2a7-4d33-be50-85088b8195c4-scripts\") pod \"aee9ed59-e2a7-4d33-be50-85088b8195c4\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.254138 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee9ed59-e2a7-4d33-be50-85088b8195c4-combined-ca-bundle\") pod \"aee9ed59-e2a7-4d33-be50-85088b8195c4\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.254202 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aee9ed59-e2a7-4d33-be50-85088b8195c4-httpd-run\") pod \"aee9ed59-e2a7-4d33-be50-85088b8195c4\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.254231 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aee9ed59-e2a7-4d33-be50-85088b8195c4-logs\") pod \"aee9ed59-e2a7-4d33-be50-85088b8195c4\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.254289 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"aee9ed59-e2a7-4d33-be50-85088b8195c4\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.254319 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdxfk\" (UniqueName: \"kubernetes.io/projected/aee9ed59-e2a7-4d33-be50-85088b8195c4-kube-api-access-sdxfk\") pod \"aee9ed59-e2a7-4d33-be50-85088b8195c4\" (UID: \"aee9ed59-e2a7-4d33-be50-85088b8195c4\") " Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.257460 4972 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aee9ed59-e2a7-4d33-be50-85088b8195c4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "aee9ed59-e2a7-4d33-be50-85088b8195c4" (UID: "aee9ed59-e2a7-4d33-be50-85088b8195c4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.257470 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aee9ed59-e2a7-4d33-be50-85088b8195c4-logs" (OuterVolumeSpecName: "logs") pod "aee9ed59-e2a7-4d33-be50-85088b8195c4" (UID: "aee9ed59-e2a7-4d33-be50-85088b8195c4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.258140 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aee9ed59-e2a7-4d33-be50-85088b8195c4-scripts" (OuterVolumeSpecName: "scripts") pod "aee9ed59-e2a7-4d33-be50-85088b8195c4" (UID: "aee9ed59-e2a7-4d33-be50-85088b8195c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.260897 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "aee9ed59-e2a7-4d33-be50-85088b8195c4" (UID: "aee9ed59-e2a7-4d33-be50-85088b8195c4"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.260947 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aee9ed59-e2a7-4d33-be50-85088b8195c4-kube-api-access-sdxfk" (OuterVolumeSpecName: "kube-api-access-sdxfk") pod "aee9ed59-e2a7-4d33-be50-85088b8195c4" (UID: "aee9ed59-e2a7-4d33-be50-85088b8195c4"). InnerVolumeSpecName "kube-api-access-sdxfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.318145 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aee9ed59-e2a7-4d33-be50-85088b8195c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aee9ed59-e2a7-4d33-be50-85088b8195c4" (UID: "aee9ed59-e2a7-4d33-be50-85088b8195c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.318347 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aee9ed59-e2a7-4d33-be50-85088b8195c4-config-data" (OuterVolumeSpecName: "config-data") pod "aee9ed59-e2a7-4d33-be50-85088b8195c4" (UID: "aee9ed59-e2a7-4d33-be50-85088b8195c4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.357964 4972 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aee9ed59-e2a7-4d33-be50-85088b8195c4-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.358926 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aee9ed59-e2a7-4d33-be50-85088b8195c4-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.358964 4972 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.358976 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdxfk\" (UniqueName: \"kubernetes.io/projected/aee9ed59-e2a7-4d33-be50-85088b8195c4-kube-api-access-sdxfk\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.358989 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aee9ed59-e2a7-4d33-be50-85088b8195c4-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.359010 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aee9ed59-e2a7-4d33-be50-85088b8195c4-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.359020 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee9ed59-e2a7-4d33-be50-85088b8195c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.377592 4972 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.461153 4972 generic.go:334] "Generic (PLEG): container finished" podID="aee9ed59-e2a7-4d33-be50-85088b8195c4" containerID="8610589840c704f4f708faf03e781d43f892b6636640af57f14e35648ccb8ce3" exitCode=0 Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.461194 4972 generic.go:334] "Generic (PLEG): container finished" podID="aee9ed59-e2a7-4d33-be50-85088b8195c4" containerID="8d0ab55a2a4e0ca82655f59932725a058d963f39f21f77859cc6ae5eed166d3f" exitCode=143 Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.461220 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.461264 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aee9ed59-e2a7-4d33-be50-85088b8195c4","Type":"ContainerDied","Data":"8610589840c704f4f708faf03e781d43f892b6636640af57f14e35648ccb8ce3"} Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.461333 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aee9ed59-e2a7-4d33-be50-85088b8195c4","Type":"ContainerDied","Data":"8d0ab55a2a4e0ca82655f59932725a058d963f39f21f77859cc6ae5eed166d3f"} Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.461348 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aee9ed59-e2a7-4d33-be50-85088b8195c4","Type":"ContainerDied","Data":"d88103a7e27c4927484f1e67450612d3e43295ac5fb6d59a27eb63db02d19de3"} Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.461368 4972 scope.go:117] "RemoveContainer" containerID="8610589840c704f4f708faf03e781d43f892b6636640af57f14e35648ccb8ce3" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.462260 4972 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.502262 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.510281 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.513389 4972 scope.go:117] "RemoveContainer" containerID="8d0ab55a2a4e0ca82655f59932725a058d963f39f21f77859cc6ae5eed166d3f" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.546563 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 10:05:02 crc kubenswrapper[4972]: E1121 10:05:02.546947 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee9ed59-e2a7-4d33-be50-85088b8195c4" containerName="glance-log" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.546960 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee9ed59-e2a7-4d33-be50-85088b8195c4" containerName="glance-log" Nov 21 10:05:02 crc kubenswrapper[4972]: E1121 10:05:02.547004 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee9ed59-e2a7-4d33-be50-85088b8195c4" containerName="glance-httpd" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.547011 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee9ed59-e2a7-4d33-be50-85088b8195c4" containerName="glance-httpd" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.547173 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="aee9ed59-e2a7-4d33-be50-85088b8195c4" containerName="glance-log" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.547198 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="aee9ed59-e2a7-4d33-be50-85088b8195c4" containerName="glance-httpd" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.548135 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.551152 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.552846 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.560059 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.581050 4972 scope.go:117] "RemoveContainer" containerID="8610589840c704f4f708faf03e781d43f892b6636640af57f14e35648ccb8ce3" Nov 21 10:05:02 crc kubenswrapper[4972]: E1121 10:05:02.581816 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8610589840c704f4f708faf03e781d43f892b6636640af57f14e35648ccb8ce3\": container with ID starting with 8610589840c704f4f708faf03e781d43f892b6636640af57f14e35648ccb8ce3 not found: ID does not exist" containerID="8610589840c704f4f708faf03e781d43f892b6636640af57f14e35648ccb8ce3" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.581863 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8610589840c704f4f708faf03e781d43f892b6636640af57f14e35648ccb8ce3"} err="failed to get container status \"8610589840c704f4f708faf03e781d43f892b6636640af57f14e35648ccb8ce3\": rpc error: code = NotFound desc = could not find container \"8610589840c704f4f708faf03e781d43f892b6636640af57f14e35648ccb8ce3\": container with ID starting with 8610589840c704f4f708faf03e781d43f892b6636640af57f14e35648ccb8ce3 not found: ID does not exist" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.581883 4972 scope.go:117] "RemoveContainer" containerID="8d0ab55a2a4e0ca82655f59932725a058d963f39f21f77859cc6ae5eed166d3f" Nov 21 10:05:02 crc kubenswrapper[4972]: E1121 10:05:02.582470 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0ab55a2a4e0ca82655f59932725a058d963f39f21f77859cc6ae5eed166d3f\": container with ID starting with 8d0ab55a2a4e0ca82655f59932725a058d963f39f21f77859cc6ae5eed166d3f not found: ID does not exist" containerID="8d0ab55a2a4e0ca82655f59932725a058d963f39f21f77859cc6ae5eed166d3f" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.582523 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0ab55a2a4e0ca82655f59932725a058d963f39f21f77859cc6ae5eed166d3f"} err="failed to get container status \"8d0ab55a2a4e0ca82655f59932725a058d963f39f21f77859cc6ae5eed166d3f\": rpc error: code = NotFound desc = could not find container \"8d0ab55a2a4e0ca82655f59932725a058d963f39f21f77859cc6ae5eed166d3f\": container with ID starting with 8d0ab55a2a4e0ca82655f59932725a058d963f39f21f77859cc6ae5eed166d3f not found: ID does not exist" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.582557 4972 scope.go:117] "RemoveContainer" containerID="8610589840c704f4f708faf03e781d43f892b6636640af57f14e35648ccb8ce3" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.582866 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8610589840c704f4f708faf03e781d43f892b6636640af57f14e35648ccb8ce3"} err="failed to get container status 
\"8610589840c704f4f708faf03e781d43f892b6636640af57f14e35648ccb8ce3\": rpc error: code = NotFound desc = could not find container \"8610589840c704f4f708faf03e781d43f892b6636640af57f14e35648ccb8ce3\": container with ID starting with 8610589840c704f4f708faf03e781d43f892b6636640af57f14e35648ccb8ce3 not found: ID does not exist" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.582887 4972 scope.go:117] "RemoveContainer" containerID="8d0ab55a2a4e0ca82655f59932725a058d963f39f21f77859cc6ae5eed166d3f" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.583353 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0ab55a2a4e0ca82655f59932725a058d963f39f21f77859cc6ae5eed166d3f"} err="failed to get container status \"8d0ab55a2a4e0ca82655f59932725a058d963f39f21f77859cc6ae5eed166d3f\": rpc error: code = NotFound desc = could not find container \"8d0ab55a2a4e0ca82655f59932725a058d963f39f21f77859cc6ae5eed166d3f\": container with ID starting with 8d0ab55a2a4e0ca82655f59932725a058d963f39f21f77859cc6ae5eed166d3f not found: ID does not exist" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.669690 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-logs\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.669764 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.669810 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.669970 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.670056 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.670154 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 
10:05:02.670233 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.670293 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fccfs\" (UniqueName: \"kubernetes.io/projected/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-kube-api-access-fccfs\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.677867 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.771430 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fccfs\" (UniqueName: \"kubernetes.io/projected/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-kube-api-access-fccfs\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.771749 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-logs\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.772159 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-logs\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.772223 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.772711 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.772750 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.772806 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.772875 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.773108 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.773139 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.773386 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.780382 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.780943 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.782129 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.795541 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fccfs\" (UniqueName: \"kubernetes.io/projected/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-kube-api-access-fccfs\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.808373 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 
crc kubenswrapper[4972]: I1121 10:05:02.832632 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.927237 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.975319 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.976578 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.978622 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.979491 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-lc6c7" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.980288 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 21 10:05:02 crc kubenswrapper[4972]: I1121 10:05:02.989031 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.077181 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6krh\" (UniqueName: \"kubernetes.io/projected/88c81504-7f14-498f-bd8d-4fa74aebf2d2-kube-api-access-b6krh\") pod \"openstackclient\" (UID: \"88c81504-7f14-498f-bd8d-4fa74aebf2d2\") " pod="openstack/openstackclient" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.077250 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/88c81504-7f14-498f-bd8d-4fa74aebf2d2-openstack-config\") pod \"openstackclient\" (UID: \"88c81504-7f14-498f-bd8d-4fa74aebf2d2\") " pod="openstack/openstackclient" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.077276 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/88c81504-7f14-498f-bd8d-4fa74aebf2d2-openstack-config-secret\") pod \"openstackclient\" (UID: \"88c81504-7f14-498f-bd8d-4fa74aebf2d2\") " pod="openstack/openstackclient" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.077319 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c81504-7f14-498f-bd8d-4fa74aebf2d2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"88c81504-7f14-498f-bd8d-4fa74aebf2d2\") " pod="openstack/openstackclient" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.179472 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6krh\" (UniqueName: \"kubernetes.io/projected/88c81504-7f14-498f-bd8d-4fa74aebf2d2-kube-api-access-b6krh\") pod \"openstackclient\" (UID: \"88c81504-7f14-498f-bd8d-4fa74aebf2d2\") " pod="openstack/openstackclient" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.179542 4972 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/88c81504-7f14-498f-bd8d-4fa74aebf2d2-openstack-config\") pod \"openstackclient\" (UID: \"88c81504-7f14-498f-bd8d-4fa74aebf2d2\") " pod="openstack/openstackclient" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.179575 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/88c81504-7f14-498f-bd8d-4fa74aebf2d2-openstack-config-secret\") pod \"openstackclient\" (UID: \"88c81504-7f14-498f-bd8d-4fa74aebf2d2\") " pod="openstack/openstackclient" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.179621 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c81504-7f14-498f-bd8d-4fa74aebf2d2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"88c81504-7f14-498f-bd8d-4fa74aebf2d2\") " pod="openstack/openstackclient" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.181299 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/88c81504-7f14-498f-bd8d-4fa74aebf2d2-openstack-config\") pod \"openstackclient\" (UID: \"88c81504-7f14-498f-bd8d-4fa74aebf2d2\") " pod="openstack/openstackclient" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.185885 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c81504-7f14-498f-bd8d-4fa74aebf2d2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"88c81504-7f14-498f-bd8d-4fa74aebf2d2\") " pod="openstack/openstackclient" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.185908 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/88c81504-7f14-498f-bd8d-4fa74aebf2d2-openstack-config-secret\") pod \"openstackclient\" (UID: \"88c81504-7f14-498f-bd8d-4fa74aebf2d2\") " pod="openstack/openstackclient" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.201394 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6krh\" (UniqueName: \"kubernetes.io/projected/88c81504-7f14-498f-bd8d-4fa74aebf2d2-kube-api-access-b6krh\") pod \"openstackclient\" (UID: \"88c81504-7f14-498f-bd8d-4fa74aebf2d2\") " pod="openstack/openstackclient" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.299337 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.481973 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.511174 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4ceb104d-0967-4ef1-87d7-23149492461f","Type":"ContainerStarted","Data":"970b2cf0e7c0d0678a5e91fb11daab6d9b02b23cb875fda14cfe0c82fc707282"} Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.511207 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4ceb104d-0967-4ef1-87d7-23149492461f","Type":"ContainerStarted","Data":"e0f16971b3c58d62c6551395c6b441fdea9a4e83d9c108bed44398a93b94acbf"} Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.516854 4972 generic.go:334] "Generic (PLEG): container finished" podID="31088e5c-be9d-4551-8ed2-6c4fc1c71ee9" containerID="4e680d6cdc146edd49c7480a589768447776d27d89950c1f345f0a8f44c5bb69" exitCode=0 Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.516912 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9","Type":"ContainerDied","Data":"4e680d6cdc146edd49c7480a589768447776d27d89950c1f345f0a8f44c5bb69"} Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.774021 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="824d6f85-3763-4c81-a329-2f9e68fd8cda" path="/var/lib/kubelet/pods/824d6f85-3763-4c81-a329-2f9e68fd8cda/volumes" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.775572 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aee9ed59-e2a7-4d33-be50-85088b8195c4" path="/var/lib/kubelet/pods/aee9ed59-e2a7-4d33-be50-85088b8195c4/volumes" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.776764 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.829350 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-748ccf64d9-7vqzf"] Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.835530 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.838460 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.838472 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.840770 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.850240 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-748ccf64d9-7vqzf"] Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.896310 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f69d7d80-dc29-4483-917c-c25921b56e9c-etc-swift\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.896421 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-combined-ca-bundle\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.896465 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdlww\" (UniqueName: \"kubernetes.io/projected/f69d7d80-dc29-4483-917c-c25921b56e9c-kube-api-access-bdlww\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.896486 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f69d7d80-dc29-4483-917c-c25921b56e9c-run-httpd\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.896554 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-public-tls-certs\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.896723 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f69d7d80-dc29-4483-917c-c25921b56e9c-log-httpd\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.896758 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-internal-tls-certs\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " 
pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.896822 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-config-data\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.922759 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.932387 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.932736 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e58da07-71c8-4739-848a-94e49b6c473c" containerName="ceilometer-central-agent" containerID="cri-o://b2e7ad04d06d5cf578cf608137e619982339ee1dce176875b0863adfbcd2c5b4" gracePeriod=30 Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.932897 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e58da07-71c8-4739-848a-94e49b6c473c" containerName="sg-core" containerID="cri-o://93de2e8b696fe5a07f80ebff1526da274e75d0fbfd512cadffc83d5b337356aa" gracePeriod=30 Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.932854 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e58da07-71c8-4739-848a-94e49b6c473c" containerName="proxy-httpd" containerID="cri-o://aeab841033fd01e3f4e3ea8935c42be4a459c6ac89c4166b63e1de3e9f14cdbd" gracePeriod=30 Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.932978 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e58da07-71c8-4739-848a-94e49b6c473c" containerName="ceilometer-notification-agent" containerID="cri-o://62f86be96e036f2ac23fd13150fbf6bacfb4b8ce5f2ba708160bbc54de2e0910" gracePeriod=30 Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.997581 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-config-data-custom\") pod \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.997972 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q7kq\" (UniqueName: \"kubernetes.io/projected/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-kube-api-access-5q7kq\") pod \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.998074 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-etc-machine-id\") pod \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.998129 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-config-data\") pod \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\" (UID: 
\"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.998191 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-scripts\") pod \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.998211 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-combined-ca-bundle\") pod \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\" (UID: \"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9\") " Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.998468 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-combined-ca-bundle\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.998523 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdlww\" (UniqueName: \"kubernetes.io/projected/f69d7d80-dc29-4483-917c-c25921b56e9c-kube-api-access-bdlww\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.998540 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f69d7d80-dc29-4483-917c-c25921b56e9c-run-httpd\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.998560 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-public-tls-certs\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.998612 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f69d7d80-dc29-4483-917c-c25921b56e9c-log-httpd\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.998627 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-internal-tls-certs\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.998659 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-config-data\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:03 crc kubenswrapper[4972]: I1121 10:05:03.998711 4972 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f69d7d80-dc29-4483-917c-c25921b56e9c-etc-swift\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.000744 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f69d7d80-dc29-4483-917c-c25921b56e9c-run-httpd\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.001218 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "31088e5c-be9d-4551-8ed2-6c4fc1c71ee9" (UID: "31088e5c-be9d-4551-8ed2-6c4fc1c71ee9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.001623 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f69d7d80-dc29-4483-917c-c25921b56e9c-log-httpd\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.005512 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "31088e5c-be9d-4551-8ed2-6c4fc1c71ee9" (UID: "31088e5c-be9d-4551-8ed2-6c4fc1c71ee9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.021532 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-kube-api-access-5q7kq" (OuterVolumeSpecName: "kube-api-access-5q7kq") pod "31088e5c-be9d-4551-8ed2-6c4fc1c71ee9" (UID: "31088e5c-be9d-4551-8ed2-6c4fc1c71ee9"). InnerVolumeSpecName "kube-api-access-5q7kq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.022171 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-internal-tls-certs\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.026992 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-scripts" (OuterVolumeSpecName: "scripts") pod "31088e5c-be9d-4551-8ed2-6c4fc1c71ee9" (UID: "31088e5c-be9d-4551-8ed2-6c4fc1c71ee9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.028596 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-combined-ca-bundle\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.028621 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-public-tls-certs\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.028623 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdlww\" (UniqueName: \"kubernetes.io/projected/f69d7d80-dc29-4483-917c-c25921b56e9c-kube-api-access-bdlww\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.029229 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-config-data\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.029841 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f69d7d80-dc29-4483-917c-c25921b56e9c-etc-swift\") pod \"swift-proxy-748ccf64d9-7vqzf\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.045976 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="1e58da07-71c8-4739-848a-94e49b6c473c" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.149:3000/\": read tcp 10.217.0.2:45578->10.217.0.149:3000: read: connection reset by peer" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.101668 4972 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.101706 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q7kq\" (UniqueName: \"kubernetes.io/projected/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-kube-api-access-5q7kq\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.101723 4972 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.101734 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.109938 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31088e5c-be9d-4551-8ed2-6c4fc1c71ee9" (UID: "31088e5c-be9d-4551-8ed2-6c4fc1c71ee9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.187976 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-config-data" (OuterVolumeSpecName: "config-data") pod "31088e5c-be9d-4551-8ed2-6c4fc1c71ee9" (UID: "31088e5c-be9d-4551-8ed2-6c4fc1c71ee9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.204763 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.204791 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.256253 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.535699 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.535816 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31088e5c-be9d-4551-8ed2-6c4fc1c71ee9","Type":"ContainerDied","Data":"aa40782bb440e98a8608c9e90c51347663a084e81657745eba993da011bbd071"} Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.536501 4972 scope.go:117] "RemoveContainer" containerID="30c1c69d07ce7356438bfb3dbd1483f98dec1c2a4432b1db1e595f58ceda5ecf" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.559264 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"88c81504-7f14-498f-bd8d-4fa74aebf2d2","Type":"ContainerStarted","Data":"413ac08851ca4e3658cef7ee53ed3622b15402e028c67d647bb01718521e6fc0"} Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.564136 4972 generic.go:334] "Generic (PLEG): container finished" podID="1e58da07-71c8-4739-848a-94e49b6c473c" containerID="aeab841033fd01e3f4e3ea8935c42be4a459c6ac89c4166b63e1de3e9f14cdbd" exitCode=0 Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.564188 4972 generic.go:334] "Generic (PLEG): container finished" podID="1e58da07-71c8-4739-848a-94e49b6c473c" containerID="93de2e8b696fe5a07f80ebff1526da274e75d0fbfd512cadffc83d5b337356aa" exitCode=2 Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.564198 4972 generic.go:334] "Generic (PLEG): container finished" podID="1e58da07-71c8-4739-848a-94e49b6c473c" containerID="b2e7ad04d06d5cf578cf608137e619982339ee1dce176875b0863adfbcd2c5b4" exitCode=0 Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.564254 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e58da07-71c8-4739-848a-94e49b6c473c","Type":"ContainerDied","Data":"aeab841033fd01e3f4e3ea8935c42be4a459c6ac89c4166b63e1de3e9f14cdbd"} Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.564278 4972 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e58da07-71c8-4739-848a-94e49b6c473c","Type":"ContainerDied","Data":"93de2e8b696fe5a07f80ebff1526da274e75d0fbfd512cadffc83d5b337356aa"} Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.564288 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e58da07-71c8-4739-848a-94e49b6c473c","Type":"ContainerDied","Data":"b2e7ad04d06d5cf578cf608137e619982339ee1dce176875b0863adfbcd2c5b4"} Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.570195 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4ceb104d-0967-4ef1-87d7-23149492461f","Type":"ContainerStarted","Data":"2921c78fe04ce5e118035b984bf13a134833efbd0278281daf335ac8e8cdab45"} Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.581491 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c9dee05-0b10-4cbf-af72-70d6928f8c8e","Type":"ContainerStarted","Data":"605471666edc328b26f0c0dc8dde00104fb13dac6ed8ffee30de91c762be8e79"} Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.581844 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c9dee05-0b10-4cbf-af72-70d6928f8c8e","Type":"ContainerStarted","Data":"4a147f29470dcb7f7901b5814d58d7c1a1dc665518d850c6fe6b92836c92fc70"} Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.608974 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.608955778 podStartE2EDuration="3.608955778s" podCreationTimestamp="2025-11-21 10:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:05:04.592218611 +0000 UTC m=+1449.701361109" watchObservedRunningTime="2025-11-21 10:05:04.608955778 +0000 UTC m=+1449.718098276" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.668538 4972 scope.go:117] "RemoveContainer" containerID="4e680d6cdc146edd49c7480a589768447776d27d89950c1f345f0a8f44c5bb69" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.687125 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.711150 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.730390 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 10:05:04 crc kubenswrapper[4972]: E1121 10:05:04.730793 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31088e5c-be9d-4551-8ed2-6c4fc1c71ee9" containerName="cinder-scheduler" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.730810 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="31088e5c-be9d-4551-8ed2-6c4fc1c71ee9" containerName="cinder-scheduler" Nov 21 10:05:04 crc kubenswrapper[4972]: E1121 10:05:04.730850 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31088e5c-be9d-4551-8ed2-6c4fc1c71ee9" containerName="probe" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.730856 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="31088e5c-be9d-4551-8ed2-6c4fc1c71ee9" containerName="probe" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.731017 4972 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="31088e5c-be9d-4551-8ed2-6c4fc1c71ee9" containerName="cinder-scheduler" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.731051 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="31088e5c-be9d-4551-8ed2-6c4fc1c71ee9" containerName="probe" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.732001 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.739704 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.756998 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.761295 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-5wd2j"] Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.763553 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5wd2j" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.797329 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-5wd2j"] Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.819807 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/befdbf4d-7d20-40ca-9985-8309a0295dad-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " pod="openstack/cinder-scheduler-0" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.819885 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnt7v\" (UniqueName: \"kubernetes.io/projected/befdbf4d-7d20-40ca-9985-8309a0295dad-kube-api-access-fnt7v\") pod \"cinder-scheduler-0\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " pod="openstack/cinder-scheduler-0" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.819940 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlkm2\" (UniqueName: \"kubernetes.io/projected/e938b27a-060f-4c56-af67-7c971a877d64-kube-api-access-vlkm2\") pod \"nova-api-db-create-5wd2j\" (UID: \"e938b27a-060f-4c56-af67-7c971a877d64\") " pod="openstack/nova-api-db-create-5wd2j" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.819973 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-scripts\") pod \"cinder-scheduler-0\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " pod="openstack/cinder-scheduler-0" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.820032 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " pod="openstack/cinder-scheduler-0" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.820073 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-combined-ca-bundle\") 
pod \"cinder-scheduler-0\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " pod="openstack/cinder-scheduler-0" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.820122 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-config-data\") pod \"cinder-scheduler-0\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " pod="openstack/cinder-scheduler-0" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.820139 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e938b27a-060f-4c56-af67-7c971a877d64-operator-scripts\") pod \"nova-api-db-create-5wd2j\" (UID: \"e938b27a-060f-4c56-af67-7c971a877d64\") " pod="openstack/nova-api-db-create-5wd2j" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.855947 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-5jd8b"] Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.857027 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5jd8b" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.881317 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5jd8b"] Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.896726 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-748ccf64d9-7vqzf"] Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.921935 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-config-data\") pod \"cinder-scheduler-0\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " pod="openstack/cinder-scheduler-0" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.922025 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e938b27a-060f-4c56-af67-7c971a877d64-operator-scripts\") pod \"nova-api-db-create-5wd2j\" (UID: \"e938b27a-060f-4c56-af67-7c971a877d64\") " pod="openstack/nova-api-db-create-5wd2j" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.922052 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/befdbf4d-7d20-40ca-9985-8309a0295dad-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " pod="openstack/cinder-scheduler-0" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.922084 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnt7v\" (UniqueName: \"kubernetes.io/projected/befdbf4d-7d20-40ca-9985-8309a0295dad-kube-api-access-fnt7v\") pod \"cinder-scheduler-0\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " pod="openstack/cinder-scheduler-0" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.922134 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlkm2\" (UniqueName: \"kubernetes.io/projected/e938b27a-060f-4c56-af67-7c971a877d64-kube-api-access-vlkm2\") pod \"nova-api-db-create-5wd2j\" (UID: \"e938b27a-060f-4c56-af67-7c971a877d64\") " pod="openstack/nova-api-db-create-5wd2j" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.922171 4972 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-scripts\") pod \"cinder-scheduler-0\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " pod="openstack/cinder-scheduler-0" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.922196 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2grl\" (UniqueName: \"kubernetes.io/projected/43333bc2-9532-4de9-ada0-761a687b1640-kube-api-access-n2grl\") pod \"nova-cell0-db-create-5jd8b\" (UID: \"43333bc2-9532-4de9-ada0-761a687b1640\") " pod="openstack/nova-cell0-db-create-5jd8b" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.922224 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43333bc2-9532-4de9-ada0-761a687b1640-operator-scripts\") pod \"nova-cell0-db-create-5jd8b\" (UID: \"43333bc2-9532-4de9-ada0-761a687b1640\") " pod="openstack/nova-cell0-db-create-5jd8b" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.922277 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " pod="openstack/cinder-scheduler-0" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.922323 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " pod="openstack/cinder-scheduler-0" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.923070 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e938b27a-060f-4c56-af67-7c971a877d64-operator-scripts\") pod \"nova-api-db-create-5wd2j\" (UID: \"e938b27a-060f-4c56-af67-7c971a877d64\") " pod="openstack/nova-api-db-create-5wd2j" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.923377 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/befdbf4d-7d20-40ca-9985-8309a0295dad-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " pod="openstack/cinder-scheduler-0" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.941802 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-scripts\") pod \"cinder-scheduler-0\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " pod="openstack/cinder-scheduler-0" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.942361 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-config-data\") pod \"cinder-scheduler-0\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " pod="openstack/cinder-scheduler-0" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.946943 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"befdbf4d-7d20-40ca-9985-8309a0295dad\") " pod="openstack/cinder-scheduler-0" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.953920 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " pod="openstack/cinder-scheduler-0" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.958645 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlkm2\" (UniqueName: \"kubernetes.io/projected/e938b27a-060f-4c56-af67-7c971a877d64-kube-api-access-vlkm2\") pod \"nova-api-db-create-5wd2j\" (UID: \"e938b27a-060f-4c56-af67-7c971a877d64\") " pod="openstack/nova-api-db-create-5wd2j" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.962349 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-4vsj6"] Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.973634 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4vsj6" Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.985035 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4vsj6"] Nov 21 10:05:04 crc kubenswrapper[4972]: I1121 10:05:04.996474 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnt7v\" (UniqueName: \"kubernetes.io/projected/befdbf4d-7d20-40ca-9985-8309a0295dad-kube-api-access-fnt7v\") pod \"cinder-scheduler-0\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " pod="openstack/cinder-scheduler-0" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.005156 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-501f-account-create-g2sgs"] Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.006486 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-501f-account-create-g2sgs" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.008477 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.015916 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-501f-account-create-g2sgs"] Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.024603 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5sf6\" (UniqueName: \"kubernetes.io/projected/86c343dd-b3e5-4822-b5c0-f12c8a7530bb-kube-api-access-b5sf6\") pod \"nova-cell1-db-create-4vsj6\" (UID: \"86c343dd-b3e5-4822-b5c0-f12c8a7530bb\") " pod="openstack/nova-cell1-db-create-4vsj6" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.024669 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86c343dd-b3e5-4822-b5c0-f12c8a7530bb-operator-scripts\") pod \"nova-cell1-db-create-4vsj6\" (UID: \"86c343dd-b3e5-4822-b5c0-f12c8a7530bb\") " pod="openstack/nova-cell1-db-create-4vsj6" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.024728 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2grl\" (UniqueName: \"kubernetes.io/projected/43333bc2-9532-4de9-ada0-761a687b1640-kube-api-access-n2grl\") pod \"nova-cell0-db-create-5jd8b\" (UID: \"43333bc2-9532-4de9-ada0-761a687b1640\") " pod="openstack/nova-cell0-db-create-5jd8b" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.024748 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43333bc2-9532-4de9-ada0-761a687b1640-operator-scripts\") pod \"nova-cell0-db-create-5jd8b\" (UID: \"43333bc2-9532-4de9-ada0-761a687b1640\") " pod="openstack/nova-cell0-db-create-5jd8b" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.025325 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43333bc2-9532-4de9-ada0-761a687b1640-operator-scripts\") pod \"nova-cell0-db-create-5jd8b\" (UID: \"43333bc2-9532-4de9-ada0-761a687b1640\") " pod="openstack/nova-cell0-db-create-5jd8b" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.054651 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2grl\" (UniqueName: \"kubernetes.io/projected/43333bc2-9532-4de9-ada0-761a687b1640-kube-api-access-n2grl\") pod \"nova-cell0-db-create-5jd8b\" (UID: \"43333bc2-9532-4de9-ada0-761a687b1640\") " pod="openstack/nova-cell0-db-create-5jd8b" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.063819 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.100315 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-5wd2j" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.126202 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48b94453-0fa8-42b2-9093-b233661916af-operator-scripts\") pod \"nova-api-501f-account-create-g2sgs\" (UID: \"48b94453-0fa8-42b2-9093-b233661916af\") " pod="openstack/nova-api-501f-account-create-g2sgs" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.126291 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5sf6\" (UniqueName: \"kubernetes.io/projected/86c343dd-b3e5-4822-b5c0-f12c8a7530bb-kube-api-access-b5sf6\") pod \"nova-cell1-db-create-4vsj6\" (UID: \"86c343dd-b3e5-4822-b5c0-f12c8a7530bb\") " pod="openstack/nova-cell1-db-create-4vsj6" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.126339 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zqb2\" (UniqueName: \"kubernetes.io/projected/48b94453-0fa8-42b2-9093-b233661916af-kube-api-access-4zqb2\") pod \"nova-api-501f-account-create-g2sgs\" (UID: \"48b94453-0fa8-42b2-9093-b233661916af\") " pod="openstack/nova-api-501f-account-create-g2sgs" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.126363 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86c343dd-b3e5-4822-b5c0-f12c8a7530bb-operator-scripts\") pod \"nova-cell1-db-create-4vsj6\" (UID: \"86c343dd-b3e5-4822-b5c0-f12c8a7530bb\") " pod="openstack/nova-cell1-db-create-4vsj6" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.126993 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86c343dd-b3e5-4822-b5c0-f12c8a7530bb-operator-scripts\") pod \"nova-cell1-db-create-4vsj6\" (UID: \"86c343dd-b3e5-4822-b5c0-f12c8a7530bb\") " pod="openstack/nova-cell1-db-create-4vsj6" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.157311 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5sf6\" (UniqueName: \"kubernetes.io/projected/86c343dd-b3e5-4822-b5c0-f12c8a7530bb-kube-api-access-b5sf6\") pod \"nova-cell1-db-create-4vsj6\" (UID: \"86c343dd-b3e5-4822-b5c0-f12c8a7530bb\") " pod="openstack/nova-cell1-db-create-4vsj6" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.171702 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5330-account-create-v8qd8"] Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.173284 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5330-account-create-v8qd8" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.174030 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5330-account-create-v8qd8"] Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.175309 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-5jd8b" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.175892 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.230308 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zqb2\" (UniqueName: \"kubernetes.io/projected/48b94453-0fa8-42b2-9093-b233661916af-kube-api-access-4zqb2\") pod \"nova-api-501f-account-create-g2sgs\" (UID: \"48b94453-0fa8-42b2-9093-b233661916af\") " pod="openstack/nova-api-501f-account-create-g2sgs" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.230386 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klt46\" (UniqueName: \"kubernetes.io/projected/62ab9b95-78b8-49ca-ad65-2b63990c55a9-kube-api-access-klt46\") pod \"nova-cell0-5330-account-create-v8qd8\" (UID: \"62ab9b95-78b8-49ca-ad65-2b63990c55a9\") " pod="openstack/nova-cell0-5330-account-create-v8qd8" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.230434 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62ab9b95-78b8-49ca-ad65-2b63990c55a9-operator-scripts\") pod \"nova-cell0-5330-account-create-v8qd8\" (UID: \"62ab9b95-78b8-49ca-ad65-2b63990c55a9\") " pod="openstack/nova-cell0-5330-account-create-v8qd8" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.230527 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48b94453-0fa8-42b2-9093-b233661916af-operator-scripts\") pod \"nova-api-501f-account-create-g2sgs\" (UID: \"48b94453-0fa8-42b2-9093-b233661916af\") " pod="openstack/nova-api-501f-account-create-g2sgs" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.231347 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48b94453-0fa8-42b2-9093-b233661916af-operator-scripts\") pod \"nova-api-501f-account-create-g2sgs\" (UID: \"48b94453-0fa8-42b2-9093-b233661916af\") " pod="openstack/nova-api-501f-account-create-g2sgs" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.252823 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zqb2\" (UniqueName: \"kubernetes.io/projected/48b94453-0fa8-42b2-9093-b233661916af-kube-api-access-4zqb2\") pod \"nova-api-501f-account-create-g2sgs\" (UID: \"48b94453-0fa8-42b2-9093-b233661916af\") " pod="openstack/nova-api-501f-account-create-g2sgs" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.333434 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62ab9b95-78b8-49ca-ad65-2b63990c55a9-operator-scripts\") pod \"nova-cell0-5330-account-create-v8qd8\" (UID: \"62ab9b95-78b8-49ca-ad65-2b63990c55a9\") " pod="openstack/nova-cell0-5330-account-create-v8qd8" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.333618 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klt46\" (UniqueName: \"kubernetes.io/projected/62ab9b95-78b8-49ca-ad65-2b63990c55a9-kube-api-access-klt46\") pod \"nova-cell0-5330-account-create-v8qd8\" (UID: \"62ab9b95-78b8-49ca-ad65-2b63990c55a9\") " pod="openstack/nova-cell0-5330-account-create-v8qd8" Nov 
21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.334565 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62ab9b95-78b8-49ca-ad65-2b63990c55a9-operator-scripts\") pod \"nova-cell0-5330-account-create-v8qd8\" (UID: \"62ab9b95-78b8-49ca-ad65-2b63990c55a9\") " pod="openstack/nova-cell0-5330-account-create-v8qd8" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.361250 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4vsj6" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.363897 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klt46\" (UniqueName: \"kubernetes.io/projected/62ab9b95-78b8-49ca-ad65-2b63990c55a9-kube-api-access-klt46\") pod \"nova-cell0-5330-account-create-v8qd8\" (UID: \"62ab9b95-78b8-49ca-ad65-2b63990c55a9\") " pod="openstack/nova-cell0-5330-account-create-v8qd8" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.363993 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ac8a-account-create-nnzz2"] Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.365157 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ac8a-account-create-nnzz2" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.366156 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-501f-account-create-g2sgs" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.370706 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.376253 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ac8a-account-create-nnzz2"] Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.435412 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/448f2d01-d067-4694-9545-6771e700e52b-operator-scripts\") pod \"nova-cell1-ac8a-account-create-nnzz2\" (UID: \"448f2d01-d067-4694-9545-6771e700e52b\") " pod="openstack/nova-cell1-ac8a-account-create-nnzz2" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.435470 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw6jq\" (UniqueName: \"kubernetes.io/projected/448f2d01-d067-4694-9545-6771e700e52b-kube-api-access-rw6jq\") pod \"nova-cell1-ac8a-account-create-nnzz2\" (UID: \"448f2d01-d067-4694-9545-6771e700e52b\") " pod="openstack/nova-cell1-ac8a-account-create-nnzz2" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.510316 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5330-account-create-v8qd8" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.537574 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/448f2d01-d067-4694-9545-6771e700e52b-operator-scripts\") pod \"nova-cell1-ac8a-account-create-nnzz2\" (UID: \"448f2d01-d067-4694-9545-6771e700e52b\") " pod="openstack/nova-cell1-ac8a-account-create-nnzz2" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.537610 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw6jq\" (UniqueName: \"kubernetes.io/projected/448f2d01-d067-4694-9545-6771e700e52b-kube-api-access-rw6jq\") pod \"nova-cell1-ac8a-account-create-nnzz2\" (UID: \"448f2d01-d067-4694-9545-6771e700e52b\") " pod="openstack/nova-cell1-ac8a-account-create-nnzz2" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.538557 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/448f2d01-d067-4694-9545-6771e700e52b-operator-scripts\") pod \"nova-cell1-ac8a-account-create-nnzz2\" (UID: \"448f2d01-d067-4694-9545-6771e700e52b\") " pod="openstack/nova-cell1-ac8a-account-create-nnzz2" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.599313 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw6jq\" (UniqueName: \"kubernetes.io/projected/448f2d01-d067-4694-9545-6771e700e52b-kube-api-access-rw6jq\") pod \"nova-cell1-ac8a-account-create-nnzz2\" (UID: \"448f2d01-d067-4694-9545-6771e700e52b\") " pod="openstack/nova-cell1-ac8a-account-create-nnzz2" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.673737 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c9dee05-0b10-4cbf-af72-70d6928f8c8e","Type":"ContainerStarted","Data":"d0e92126c7e0ca6eb4ffd02145cba1acb9cd470a1558240b8d70c2cb9312bd30"} Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.708418 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ac8a-account-create-nnzz2" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.710741 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.723411 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.723395501 podStartE2EDuration="3.723395501s" podCreationTimestamp="2025-11-21 10:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:05:05.720307688 +0000 UTC m=+1450.829450186" watchObservedRunningTime="2025-11-21 10:05:05.723395501 +0000 UTC m=+1450.832537999" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.730132 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-748ccf64d9-7vqzf" event={"ID":"f69d7d80-dc29-4483-917c-c25921b56e9c","Type":"ContainerStarted","Data":"8926429e6adc873b354d1ee81d691befb13ab0ab948a58cf412a0056811cc98c"} Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.730170 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-748ccf64d9-7vqzf" event={"ID":"f69d7d80-dc29-4483-917c-c25921b56e9c","Type":"ContainerStarted","Data":"aa0614f91fc32679c8f0eada54c29c340b17b21d5a6f8cace15db38baa847d5a"} Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.804441 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31088e5c-be9d-4551-8ed2-6c4fc1c71ee9" path="/var/lib/kubelet/pods/31088e5c-be9d-4551-8ed2-6c4fc1c71ee9/volumes" Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.805560 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-5wd2j"] Nov 21 10:05:05 crc kubenswrapper[4972]: I1121 10:05:05.936077 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5jd8b"] Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.182697 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-501f-account-create-g2sgs"] Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.199986 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4vsj6"] Nov 21 10:05:06 crc kubenswrapper[4972]: W1121 10:05:06.195128 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86c343dd_b3e5_4822_b5c0_f12c8a7530bb.slice/crio-d6c8189ab18a49d64b4cfad28f08bb5e49c54d4353cf007fa023813203b5ac4f WatchSource:0}: Error finding container d6c8189ab18a49d64b4cfad28f08bb5e49c54d4353cf007fa023813203b5ac4f: Status 404 returned error can't find the container with id d6c8189ab18a49d64b4cfad28f08bb5e49c54d4353cf007fa023813203b5ac4f Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.336361 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5330-account-create-v8qd8"] Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.459037 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ac8a-account-create-nnzz2"] Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.746403 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-501f-account-create-g2sgs" event={"ID":"48b94453-0fa8-42b2-9093-b233661916af","Type":"ContainerStarted","Data":"e72281d879c70916014fd08fab0961f2ce60d1723ec476c4e8eaac55838d50c5"} Nov 21 
10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.746802 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-501f-account-create-g2sgs" event={"ID":"48b94453-0fa8-42b2-9093-b233661916af","Type":"ContainerStarted","Data":"5e9376229aa0cc1058c575c452a9fc7f0b2d78521c77556e7409140671f8729b"} Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.755080 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"befdbf4d-7d20-40ca-9985-8309a0295dad","Type":"ContainerStarted","Data":"9b6463b15ecb9b8779ca71cddabf1926d3ac4ddaaf5846829a356fc2d428be5e"} Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.766302 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-748ccf64d9-7vqzf" event={"ID":"f69d7d80-dc29-4483-917c-c25921b56e9c","Type":"ContainerStarted","Data":"b2643e338f0bda35e4096054b2e9e135f60dcfd58f21d3cfab34ec25fee2e932"} Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.766350 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.766321 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-501f-account-create-g2sgs" podStartSLOduration=2.7662974719999998 podStartE2EDuration="2.766297472s" podCreationTimestamp="2025-11-21 10:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:05:06.762147151 +0000 UTC m=+1451.871289659" watchObservedRunningTime="2025-11-21 10:05:06.766297472 +0000 UTC m=+1451.875439970" Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.766361 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.781365 4972 generic.go:334] "Generic (PLEG): container finished" podID="e938b27a-060f-4c56-af67-7c971a877d64" containerID="de1b1ead1be9bb4700eff0120e321dcdcc207643851d89741743ca89c0feb9be" exitCode=0 Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.781530 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5wd2j" event={"ID":"e938b27a-060f-4c56-af67-7c971a877d64","Type":"ContainerDied","Data":"de1b1ead1be9bb4700eff0120e321dcdcc207643851d89741743ca89c0feb9be"} Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.781553 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5wd2j" event={"ID":"e938b27a-060f-4c56-af67-7c971a877d64","Type":"ContainerStarted","Data":"e358d1875634ddb2099dc5bbfab97cf92b6adc5a23c83c2a5fc3ac98aa642e06"} Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.786709 4972 generic.go:334] "Generic (PLEG): container finished" podID="43333bc2-9532-4de9-ada0-761a687b1640" containerID="3d2cc2221d9f2b7335dabef232673e1b90f3e68b118b80db724d2b99225db57e" exitCode=0 Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.786811 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5jd8b" event={"ID":"43333bc2-9532-4de9-ada0-761a687b1640","Type":"ContainerDied","Data":"3d2cc2221d9f2b7335dabef232673e1b90f3e68b118b80db724d2b99225db57e"} Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.786862 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5jd8b" 
event={"ID":"43333bc2-9532-4de9-ada0-761a687b1640","Type":"ContainerStarted","Data":"ea8e1be5ef47f5e907a3c236a0a86ee6c1701fe8ed75d44e5e34ee8ff1e65435"} Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.796286 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-748ccf64d9-7vqzf" podStartSLOduration=3.796267143 podStartE2EDuration="3.796267143s" podCreationTimestamp="2025-11-21 10:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:05:06.786572854 +0000 UTC m=+1451.895715362" watchObservedRunningTime="2025-11-21 10:05:06.796267143 +0000 UTC m=+1451.905409641" Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.797120 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5330-account-create-v8qd8" event={"ID":"62ab9b95-78b8-49ca-ad65-2b63990c55a9","Type":"ContainerStarted","Data":"f5f7c37dc9ae815f57a02f21d1296f31ab5066f826a28262e76dc7b2ea449e3c"} Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.797164 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5330-account-create-v8qd8" event={"ID":"62ab9b95-78b8-49ca-ad65-2b63990c55a9","Type":"ContainerStarted","Data":"cbfb2624d8d62a73979099826a8af5acf2cff3b4c7f083d0f66432e85b29c07f"} Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.807903 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ac8a-account-create-nnzz2" event={"ID":"448f2d01-d067-4694-9545-6771e700e52b","Type":"ContainerStarted","Data":"eb63228ca804311e8bc116ce226f0d7c71b1341aec906d9cd9ca9031277f6654"} Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.810986 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4vsj6" event={"ID":"86c343dd-b3e5-4822-b5c0-f12c8a7530bb","Type":"ContainerStarted","Data":"b74ee39cc4ce9125ba478173dc5e89ad053a7a0c316ad87b411b9c18475c318b"} Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.811032 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4vsj6" event={"ID":"86c343dd-b3e5-4822-b5c0-f12c8a7530bb","Type":"ContainerStarted","Data":"d6c8189ab18a49d64b4cfad28f08bb5e49c54d4353cf007fa023813203b5ac4f"} Nov 21 10:05:06 crc kubenswrapper[4972]: I1121 10:05:06.817364 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-5330-account-create-v8qd8" podStartSLOduration=1.817349597 podStartE2EDuration="1.817349597s" podCreationTimestamp="2025-11-21 10:05:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:05:06.815417285 +0000 UTC m=+1451.924559783" watchObservedRunningTime="2025-11-21 10:05:06.817349597 +0000 UTC m=+1451.926492095" Nov 21 10:05:07 crc kubenswrapper[4972]: I1121 10:05:07.478983 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:05:07 crc kubenswrapper[4972]: I1121 10:05:07.563266 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 21 10:05:07 crc kubenswrapper[4972]: I1121 10:05:07.678693 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-558565fb6f-25dzm"] Nov 21 10:05:07 crc kubenswrapper[4972]: I1121 10:05:07.678934 4972 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-558565fb6f-25dzm" podUID="74c175b7-4f32-4315-ad7b-927f17c4bf1e" containerName="dnsmasq-dns" containerID="cri-o://50db32f5bf6f606b782276822a09d015f11ad1d042607ca873d9e6783ac3ce78" gracePeriod=10 Nov 21 10:05:07 crc kubenswrapper[4972]: I1121 10:05:07.878466 4972 generic.go:334] "Generic (PLEG): container finished" podID="448f2d01-d067-4694-9545-6771e700e52b" containerID="be888e4c1ff80facebd8b83ac71955557b3ee41fc26a589d45f4417d3e5dd817" exitCode=0 Nov 21 10:05:07 crc kubenswrapper[4972]: I1121 10:05:07.878935 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ac8a-account-create-nnzz2" event={"ID":"448f2d01-d067-4694-9545-6771e700e52b","Type":"ContainerDied","Data":"be888e4c1ff80facebd8b83ac71955557b3ee41fc26a589d45f4417d3e5dd817"} Nov 21 10:05:07 crc kubenswrapper[4972]: I1121 10:05:07.896965 4972 generic.go:334] "Generic (PLEG): container finished" podID="86c343dd-b3e5-4822-b5c0-f12c8a7530bb" containerID="b74ee39cc4ce9125ba478173dc5e89ad053a7a0c316ad87b411b9c18475c318b" exitCode=0 Nov 21 10:05:07 crc kubenswrapper[4972]: I1121 10:05:07.897063 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4vsj6" event={"ID":"86c343dd-b3e5-4822-b5c0-f12c8a7530bb","Type":"ContainerDied","Data":"b74ee39cc4ce9125ba478173dc5e89ad053a7a0c316ad87b411b9c18475c318b"} Nov 21 10:05:07 crc kubenswrapper[4972]: I1121 10:05:07.920115 4972 generic.go:334] "Generic (PLEG): container finished" podID="48b94453-0fa8-42b2-9093-b233661916af" containerID="e72281d879c70916014fd08fab0961f2ce60d1723ec476c4e8eaac55838d50c5" exitCode=0 Nov 21 10:05:07 crc kubenswrapper[4972]: I1121 10:05:07.920203 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-501f-account-create-g2sgs" event={"ID":"48b94453-0fa8-42b2-9093-b233661916af","Type":"ContainerDied","Data":"e72281d879c70916014fd08fab0961f2ce60d1723ec476c4e8eaac55838d50c5"} Nov 21 10:05:07 crc kubenswrapper[4972]: I1121 10:05:07.927137 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"befdbf4d-7d20-40ca-9985-8309a0295dad","Type":"ContainerStarted","Data":"a15be01405eca32ee7f4728571d79fcbd0503e5e431bd39069cf9acff52fbb75"} Nov 21 10:05:07 crc kubenswrapper[4972]: I1121 10:05:07.974267 4972 generic.go:334] "Generic (PLEG): container finished" podID="74c175b7-4f32-4315-ad7b-927f17c4bf1e" containerID="50db32f5bf6f606b782276822a09d015f11ad1d042607ca873d9e6783ac3ce78" exitCode=0 Nov 21 10:05:07 crc kubenswrapper[4972]: I1121 10:05:07.974395 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-558565fb6f-25dzm" event={"ID":"74c175b7-4f32-4315-ad7b-927f17c4bf1e","Type":"ContainerDied","Data":"50db32f5bf6f606b782276822a09d015f11ad1d042607ca873d9e6783ac3ce78"} Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.000287 4972 generic.go:334] "Generic (PLEG): container finished" podID="1e58da07-71c8-4739-848a-94e49b6c473c" containerID="62f86be96e036f2ac23fd13150fbf6bacfb4b8ce5f2ba708160bbc54de2e0910" exitCode=0 Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.000351 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e58da07-71c8-4739-848a-94e49b6c473c","Type":"ContainerDied","Data":"62f86be96e036f2ac23fd13150fbf6bacfb4b8ce5f2ba708160bbc54de2e0910"} Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.002050 4972 generic.go:334] "Generic (PLEG): container finished" podID="62ab9b95-78b8-49ca-ad65-2b63990c55a9" 
containerID="f5f7c37dc9ae815f57a02f21d1296f31ab5066f826a28262e76dc7b2ea449e3c" exitCode=0 Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.002527 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5330-account-create-v8qd8" event={"ID":"62ab9b95-78b8-49ca-ad65-2b63990c55a9","Type":"ContainerDied","Data":"f5f7c37dc9ae815f57a02f21d1296f31ab5066f826a28262e76dc7b2ea449e3c"} Nov 21 10:05:08 crc kubenswrapper[4972]: E1121 10:05:08.059062 4972 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74c175b7_4f32_4315_ad7b_927f17c4bf1e.slice/crio-conmon-50db32f5bf6f606b782276822a09d015f11ad1d042607ca873d9e6783ac3ce78.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74c175b7_4f32_4315_ad7b_927f17c4bf1e.slice/crio-50db32f5bf6f606b782276822a09d015f11ad1d042607ca873d9e6783ac3ce78.scope\": RecentStats: unable to find data in memory cache]" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.125708 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.237446 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-scripts\") pod \"1e58da07-71c8-4739-848a-94e49b6c473c\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.237772 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-config-data\") pod \"1e58da07-71c8-4739-848a-94e49b6c473c\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.237809 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blh98\" (UniqueName: \"kubernetes.io/projected/1e58da07-71c8-4739-848a-94e49b6c473c-kube-api-access-blh98\") pod \"1e58da07-71c8-4739-848a-94e49b6c473c\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.237915 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-sg-core-conf-yaml\") pod \"1e58da07-71c8-4739-848a-94e49b6c473c\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.237994 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e58da07-71c8-4739-848a-94e49b6c473c-run-httpd\") pod \"1e58da07-71c8-4739-848a-94e49b6c473c\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.238044 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e58da07-71c8-4739-848a-94e49b6c473c-log-httpd\") pod \"1e58da07-71c8-4739-848a-94e49b6c473c\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.238075 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-combined-ca-bundle\") pod \"1e58da07-71c8-4739-848a-94e49b6c473c\" (UID: \"1e58da07-71c8-4739-848a-94e49b6c473c\") " Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.243763 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-scripts" (OuterVolumeSpecName: "scripts") pod "1e58da07-71c8-4739-848a-94e49b6c473c" (UID: "1e58da07-71c8-4739-848a-94e49b6c473c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.244114 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e58da07-71c8-4739-848a-94e49b6c473c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1e58da07-71c8-4739-848a-94e49b6c473c" (UID: "1e58da07-71c8-4739-848a-94e49b6c473c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.245468 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e58da07-71c8-4739-848a-94e49b6c473c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1e58da07-71c8-4739-848a-94e49b6c473c" (UID: "1e58da07-71c8-4739-848a-94e49b6c473c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.260050 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e58da07-71c8-4739-848a-94e49b6c473c-kube-api-access-blh98" (OuterVolumeSpecName: "kube-api-access-blh98") pod "1e58da07-71c8-4739-848a-94e49b6c473c" (UID: "1e58da07-71c8-4739-848a-94e49b6c473c"). InnerVolumeSpecName "kube-api-access-blh98". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.299039 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1e58da07-71c8-4739-848a-94e49b6c473c" (UID: "1e58da07-71c8-4739-848a-94e49b6c473c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.341537 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blh98\" (UniqueName: \"kubernetes.io/projected/1e58da07-71c8-4739-848a-94e49b6c473c-kube-api-access-blh98\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.341581 4972 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.341592 4972 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e58da07-71c8-4739-848a-94e49b6c473c-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.341601 4972 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e58da07-71c8-4739-848a-94e49b6c473c-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.341609 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.374743 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-config-data" (OuterVolumeSpecName: "config-data") pod "1e58da07-71c8-4739-848a-94e49b6c473c" (UID: "1e58da07-71c8-4739-848a-94e49b6c473c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.377680 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e58da07-71c8-4739-848a-94e49b6c473c" (UID: "1e58da07-71c8-4739-848a-94e49b6c473c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.403053 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-558565fb6f-25dzm" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.443294 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.443321 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e58da07-71c8-4739-848a-94e49b6c473c-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.544558 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-config\") pod \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\" (UID: \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\") " Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.544699 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-ovsdbserver-nb\") pod \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\" (UID: \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\") " Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.544785 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-dns-svc\") pod \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\" (UID: \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\") " Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.544937 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdx4m\" (UniqueName: \"kubernetes.io/projected/74c175b7-4f32-4315-ad7b-927f17c4bf1e-kube-api-access-qdx4m\") pod \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\" (UID: \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\") " Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.544991 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-ovsdbserver-sb\") pod \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\" (UID: \"74c175b7-4f32-4315-ad7b-927f17c4bf1e\") " Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.558002 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74c175b7-4f32-4315-ad7b-927f17c4bf1e-kube-api-access-qdx4m" (OuterVolumeSpecName: "kube-api-access-qdx4m") pod "74c175b7-4f32-4315-ad7b-927f17c4bf1e" (UID: "74c175b7-4f32-4315-ad7b-927f17c4bf1e"). InnerVolumeSpecName "kube-api-access-qdx4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.605649 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-config" (OuterVolumeSpecName: "config") pod "74c175b7-4f32-4315-ad7b-927f17c4bf1e" (UID: "74c175b7-4f32-4315-ad7b-927f17c4bf1e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.646938 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.646972 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdx4m\" (UniqueName: \"kubernetes.io/projected/74c175b7-4f32-4315-ad7b-927f17c4bf1e-kube-api-access-qdx4m\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.682393 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "74c175b7-4f32-4315-ad7b-927f17c4bf1e" (UID: "74c175b7-4f32-4315-ad7b-927f17c4bf1e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.687286 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "74c175b7-4f32-4315-ad7b-927f17c4bf1e" (UID: "74c175b7-4f32-4315-ad7b-927f17c4bf1e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.717455 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "74c175b7-4f32-4315-ad7b-927f17c4bf1e" (UID: "74c175b7-4f32-4315-ad7b-927f17c4bf1e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.748195 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.748241 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.748276 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74c175b7-4f32-4315-ad7b-927f17c4bf1e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.830788 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5wd2j" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.844349 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4vsj6" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.857481 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-5jd8b" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.951857 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e938b27a-060f-4c56-af67-7c971a877d64-operator-scripts\") pod \"e938b27a-060f-4c56-af67-7c971a877d64\" (UID: \"e938b27a-060f-4c56-af67-7c971a877d64\") " Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.951952 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5sf6\" (UniqueName: \"kubernetes.io/projected/86c343dd-b3e5-4822-b5c0-f12c8a7530bb-kube-api-access-b5sf6\") pod \"86c343dd-b3e5-4822-b5c0-f12c8a7530bb\" (UID: \"86c343dd-b3e5-4822-b5c0-f12c8a7530bb\") " Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.952002 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2grl\" (UniqueName: \"kubernetes.io/projected/43333bc2-9532-4de9-ada0-761a687b1640-kube-api-access-n2grl\") pod \"43333bc2-9532-4de9-ada0-761a687b1640\" (UID: \"43333bc2-9532-4de9-ada0-761a687b1640\") " Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.952042 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86c343dd-b3e5-4822-b5c0-f12c8a7530bb-operator-scripts\") pod \"86c343dd-b3e5-4822-b5c0-f12c8a7530bb\" (UID: \"86c343dd-b3e5-4822-b5c0-f12c8a7530bb\") " Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.952156 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlkm2\" (UniqueName: \"kubernetes.io/projected/e938b27a-060f-4c56-af67-7c971a877d64-kube-api-access-vlkm2\") pod \"e938b27a-060f-4c56-af67-7c971a877d64\" (UID: \"e938b27a-060f-4c56-af67-7c971a877d64\") " Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.952478 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e938b27a-060f-4c56-af67-7c971a877d64-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e938b27a-060f-4c56-af67-7c971a877d64" (UID: "e938b27a-060f-4c56-af67-7c971a877d64"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.952870 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86c343dd-b3e5-4822-b5c0-f12c8a7530bb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86c343dd-b3e5-4822-b5c0-f12c8a7530bb" (UID: "86c343dd-b3e5-4822-b5c0-f12c8a7530bb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.952901 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43333bc2-9532-4de9-ada0-761a687b1640-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "43333bc2-9532-4de9-ada0-761a687b1640" (UID: "43333bc2-9532-4de9-ada0-761a687b1640"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.952239 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43333bc2-9532-4de9-ada0-761a687b1640-operator-scripts\") pod \"43333bc2-9532-4de9-ada0-761a687b1640\" (UID: \"43333bc2-9532-4de9-ada0-761a687b1640\") " Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.953507 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86c343dd-b3e5-4822-b5c0-f12c8a7530bb-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.953538 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43333bc2-9532-4de9-ada0-761a687b1640-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.953551 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e938b27a-060f-4c56-af67-7c971a877d64-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.956735 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86c343dd-b3e5-4822-b5c0-f12c8a7530bb-kube-api-access-b5sf6" (OuterVolumeSpecName: "kube-api-access-b5sf6") pod "86c343dd-b3e5-4822-b5c0-f12c8a7530bb" (UID: "86c343dd-b3e5-4822-b5c0-f12c8a7530bb"). InnerVolumeSpecName "kube-api-access-b5sf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.957143 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43333bc2-9532-4de9-ada0-761a687b1640-kube-api-access-n2grl" (OuterVolumeSpecName: "kube-api-access-n2grl") pod "43333bc2-9532-4de9-ada0-761a687b1640" (UID: "43333bc2-9532-4de9-ada0-761a687b1640"). InnerVolumeSpecName "kube-api-access-n2grl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:05:08 crc kubenswrapper[4972]: I1121 10:05:08.962549 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e938b27a-060f-4c56-af67-7c971a877d64-kube-api-access-vlkm2" (OuterVolumeSpecName: "kube-api-access-vlkm2") pod "e938b27a-060f-4c56-af67-7c971a877d64" (UID: "e938b27a-060f-4c56-af67-7c971a877d64"). InnerVolumeSpecName "kube-api-access-vlkm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.014156 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4vsj6" event={"ID":"86c343dd-b3e5-4822-b5c0-f12c8a7530bb","Type":"ContainerDied","Data":"d6c8189ab18a49d64b4cfad28f08bb5e49c54d4353cf007fa023813203b5ac4f"} Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.014173 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-4vsj6" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.014193 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6c8189ab18a49d64b4cfad28f08bb5e49c54d4353cf007fa023813203b5ac4f" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.016519 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"befdbf4d-7d20-40ca-9985-8309a0295dad","Type":"ContainerStarted","Data":"317b0a027d1cd6e33d37ffe81bf487ffe11faf05f8530114c4462518fd06d92c"} Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.019923 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-558565fb6f-25dzm" event={"ID":"74c175b7-4f32-4315-ad7b-927f17c4bf1e","Type":"ContainerDied","Data":"35884de49fda08a0c957ad781a16e3a403e6b6dacb12314b44e678d77f0d6c3c"} Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.019975 4972 scope.go:117] "RemoveContainer" containerID="50db32f5bf6f606b782276822a09d015f11ad1d042607ca873d9e6783ac3ce78" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.019980 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-558565fb6f-25dzm" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.025047 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.025081 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e58da07-71c8-4739-848a-94e49b6c473c","Type":"ContainerDied","Data":"3a1841833b6830d544c75999eef76fda9526a7505f7b724df7c3ca827dfb67a9"} Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.030709 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5wd2j" event={"ID":"e938b27a-060f-4c56-af67-7c971a877d64","Type":"ContainerDied","Data":"e358d1875634ddb2099dc5bbfab97cf92b6adc5a23c83c2a5fc3ac98aa642e06"} Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.030751 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e358d1875634ddb2099dc5bbfab97cf92b6adc5a23c83c2a5fc3ac98aa642e06" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.030719 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5wd2j" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.038250 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.038231418 podStartE2EDuration="5.038231418s" podCreationTimestamp="2025-11-21 10:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:05:09.037227591 +0000 UTC m=+1454.146370099" watchObservedRunningTime="2025-11-21 10:05:09.038231418 +0000 UTC m=+1454.147373916" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.039223 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-5jd8b" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.040090 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5jd8b" event={"ID":"43333bc2-9532-4de9-ada0-761a687b1640","Type":"ContainerDied","Data":"ea8e1be5ef47f5e907a3c236a0a86ee6c1701fe8ed75d44e5e34ee8ff1e65435"} Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.040214 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea8e1be5ef47f5e907a3c236a0a86ee6c1701fe8ed75d44e5e34ee8ff1e65435" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.057733 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5sf6\" (UniqueName: \"kubernetes.io/projected/86c343dd-b3e5-4822-b5c0-f12c8a7530bb-kube-api-access-b5sf6\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.060558 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2grl\" (UniqueName: \"kubernetes.io/projected/43333bc2-9532-4de9-ada0-761a687b1640-kube-api-access-n2grl\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.062616 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlkm2\" (UniqueName: \"kubernetes.io/projected/e938b27a-060f-4c56-af67-7c971a877d64-kube-api-access-vlkm2\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.083884 4972 scope.go:117] "RemoveContainer" containerID="f8497958ba95bde86c455d4dc4ef3f8b28e080270a33b2d955a42874f362e1f7" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.109672 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.124241 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.137384 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-558565fb6f-25dzm"] Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.143130 4972 scope.go:117] "RemoveContainer" containerID="aeab841033fd01e3f4e3ea8935c42be4a459c6ac89c4166b63e1de3e9f14cdbd" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.144679 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:09 crc kubenswrapper[4972]: E1121 10:05:09.145183 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43333bc2-9532-4de9-ada0-761a687b1640" containerName="mariadb-database-create" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.145205 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="43333bc2-9532-4de9-ada0-761a687b1640" containerName="mariadb-database-create" Nov 21 10:05:09 crc kubenswrapper[4972]: E1121 10:05:09.145229 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e58da07-71c8-4739-848a-94e49b6c473c" containerName="sg-core" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.145239 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e58da07-71c8-4739-848a-94e49b6c473c" containerName="sg-core" Nov 21 10:05:09 crc kubenswrapper[4972]: E1121 10:05:09.145262 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e58da07-71c8-4739-848a-94e49b6c473c" containerName="ceilometer-central-agent" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.145271 4972 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1e58da07-71c8-4739-848a-94e49b6c473c" containerName="ceilometer-central-agent" Nov 21 10:05:09 crc kubenswrapper[4972]: E1121 10:05:09.145294 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74c175b7-4f32-4315-ad7b-927f17c4bf1e" containerName="init" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.145303 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="74c175b7-4f32-4315-ad7b-927f17c4bf1e" containerName="init" Nov 21 10:05:09 crc kubenswrapper[4972]: E1121 10:05:09.145315 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e938b27a-060f-4c56-af67-7c971a877d64" containerName="mariadb-database-create" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.145323 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="e938b27a-060f-4c56-af67-7c971a877d64" containerName="mariadb-database-create" Nov 21 10:05:09 crc kubenswrapper[4972]: E1121 10:05:09.145342 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e58da07-71c8-4739-848a-94e49b6c473c" containerName="ceilometer-notification-agent" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.145352 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e58da07-71c8-4739-848a-94e49b6c473c" containerName="ceilometer-notification-agent" Nov 21 10:05:09 crc kubenswrapper[4972]: E1121 10:05:09.145368 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86c343dd-b3e5-4822-b5c0-f12c8a7530bb" containerName="mariadb-database-create" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.145378 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="86c343dd-b3e5-4822-b5c0-f12c8a7530bb" containerName="mariadb-database-create" Nov 21 10:05:09 crc kubenswrapper[4972]: E1121 10:05:09.145391 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e58da07-71c8-4739-848a-94e49b6c473c" containerName="proxy-httpd" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.145429 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e58da07-71c8-4739-848a-94e49b6c473c" containerName="proxy-httpd" Nov 21 10:05:09 crc kubenswrapper[4972]: E1121 10:05:09.145447 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74c175b7-4f32-4315-ad7b-927f17c4bf1e" containerName="dnsmasq-dns" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.145456 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="74c175b7-4f32-4315-ad7b-927f17c4bf1e" containerName="dnsmasq-dns" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.145696 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e58da07-71c8-4739-848a-94e49b6c473c" containerName="ceilometer-central-agent" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.145714 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e58da07-71c8-4739-848a-94e49b6c473c" containerName="sg-core" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.145729 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e58da07-71c8-4739-848a-94e49b6c473c" containerName="proxy-httpd" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.145745 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="86c343dd-b3e5-4822-b5c0-f12c8a7530bb" containerName="mariadb-database-create" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.145759 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e58da07-71c8-4739-848a-94e49b6c473c" containerName="ceilometer-notification-agent" Nov 21 10:05:09 crc 
kubenswrapper[4972]: I1121 10:05:09.145789 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="e938b27a-060f-4c56-af67-7c971a877d64" containerName="mariadb-database-create" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.145804 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="43333bc2-9532-4de9-ada0-761a687b1640" containerName="mariadb-database-create" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.145822 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="74c175b7-4f32-4315-ad7b-927f17c4bf1e" containerName="dnsmasq-dns" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.162709 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-558565fb6f-25dzm"] Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.163257 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.170008 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.170070 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.170601 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.248644 4972 scope.go:117] "RemoveContainer" containerID="93de2e8b696fe5a07f80ebff1526da274e75d0fbfd512cadffc83d5b337356aa" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.274178 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.274249 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.274274 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92949068-45c7-407c-bf24-7e587555ccd9-run-httpd\") pod \"ceilometer-0\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.274324 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65z5b\" (UniqueName: \"kubernetes.io/projected/92949068-45c7-407c-bf24-7e587555ccd9-kube-api-access-65z5b\") pod \"ceilometer-0\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.274364 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-scripts\") pod \"ceilometer-0\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.274379 4972 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92949068-45c7-407c-bf24-7e587555ccd9-log-httpd\") pod \"ceilometer-0\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.274407 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-config-data\") pod \"ceilometer-0\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.279047 4972 scope.go:117] "RemoveContainer" containerID="62f86be96e036f2ac23fd13150fbf6bacfb4b8ce5f2ba708160bbc54de2e0910" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.302962 4972 scope.go:117] "RemoveContainer" containerID="b2e7ad04d06d5cf578cf608137e619982339ee1dce176875b0863adfbcd2c5b4" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.376896 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-scripts\") pod \"ceilometer-0\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.376941 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92949068-45c7-407c-bf24-7e587555ccd9-log-httpd\") pod \"ceilometer-0\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.376970 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-config-data\") pod \"ceilometer-0\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.377019 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.377104 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.377123 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92949068-45c7-407c-bf24-7e587555ccd9-run-httpd\") pod \"ceilometer-0\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.377172 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65z5b\" (UniqueName: \"kubernetes.io/projected/92949068-45c7-407c-bf24-7e587555ccd9-kube-api-access-65z5b\") pod \"ceilometer-0\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 
10:05:09.377416 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92949068-45c7-407c-bf24-7e587555ccd9-log-httpd\") pod \"ceilometer-0\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.378006 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92949068-45c7-407c-bf24-7e587555ccd9-run-httpd\") pod \"ceilometer-0\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.381272 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-config-data\") pod \"ceilometer-0\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.381397 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-scripts\") pod \"ceilometer-0\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.382522 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.386199 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.398926 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65z5b\" (UniqueName: \"kubernetes.io/projected/92949068-45c7-407c-bf24-7e587555ccd9-kube-api-access-65z5b\") pod \"ceilometer-0\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.477227 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5330-account-create-v8qd8" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.480281 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.581074 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klt46\" (UniqueName: \"kubernetes.io/projected/62ab9b95-78b8-49ca-ad65-2b63990c55a9-kube-api-access-klt46\") pod \"62ab9b95-78b8-49ca-ad65-2b63990c55a9\" (UID: \"62ab9b95-78b8-49ca-ad65-2b63990c55a9\") " Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.581210 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62ab9b95-78b8-49ca-ad65-2b63990c55a9-operator-scripts\") pod \"62ab9b95-78b8-49ca-ad65-2b63990c55a9\" (UID: \"62ab9b95-78b8-49ca-ad65-2b63990c55a9\") " Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.581748 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62ab9b95-78b8-49ca-ad65-2b63990c55a9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62ab9b95-78b8-49ca-ad65-2b63990c55a9" (UID: "62ab9b95-78b8-49ca-ad65-2b63990c55a9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.582246 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62ab9b95-78b8-49ca-ad65-2b63990c55a9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.584638 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62ab9b95-78b8-49ca-ad65-2b63990c55a9-kube-api-access-klt46" (OuterVolumeSpecName: "kube-api-access-klt46") pod "62ab9b95-78b8-49ca-ad65-2b63990c55a9" (UID: "62ab9b95-78b8-49ca-ad65-2b63990c55a9"). InnerVolumeSpecName "kube-api-access-klt46". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.683404 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klt46\" (UniqueName: \"kubernetes.io/projected/62ab9b95-78b8-49ca-ad65-2b63990c55a9-kube-api-access-klt46\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.691374 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ac8a-account-create-nnzz2" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.697066 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-501f-account-create-g2sgs" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.772984 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e58da07-71c8-4739-848a-94e49b6c473c" path="/var/lib/kubelet/pods/1e58da07-71c8-4739-848a-94e49b6c473c/volumes" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.774035 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74c175b7-4f32-4315-ad7b-927f17c4bf1e" path="/var/lib/kubelet/pods/74c175b7-4f32-4315-ad7b-927f17c4bf1e/volumes" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.785488 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/448f2d01-d067-4694-9545-6771e700e52b-operator-scripts\") pod \"448f2d01-d067-4694-9545-6771e700e52b\" (UID: \"448f2d01-d067-4694-9545-6771e700e52b\") " Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.785661 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48b94453-0fa8-42b2-9093-b233661916af-operator-scripts\") pod \"48b94453-0fa8-42b2-9093-b233661916af\" (UID: \"48b94453-0fa8-42b2-9093-b233661916af\") " Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.785729 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zqb2\" (UniqueName: \"kubernetes.io/projected/48b94453-0fa8-42b2-9093-b233661916af-kube-api-access-4zqb2\") pod \"48b94453-0fa8-42b2-9093-b233661916af\" (UID: \"48b94453-0fa8-42b2-9093-b233661916af\") " Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.785755 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw6jq\" (UniqueName: \"kubernetes.io/projected/448f2d01-d067-4694-9545-6771e700e52b-kube-api-access-rw6jq\") pod \"448f2d01-d067-4694-9545-6771e700e52b\" (UID: \"448f2d01-d067-4694-9545-6771e700e52b\") " Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.786772 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/448f2d01-d067-4694-9545-6771e700e52b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "448f2d01-d067-4694-9545-6771e700e52b" (UID: "448f2d01-d067-4694-9545-6771e700e52b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.787124 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48b94453-0fa8-42b2-9093-b233661916af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48b94453-0fa8-42b2-9093-b233661916af" (UID: "48b94453-0fa8-42b2-9093-b233661916af"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.788170 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/448f2d01-d067-4694-9545-6771e700e52b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.788198 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48b94453-0fa8-42b2-9093-b233661916af-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.792680 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/448f2d01-d067-4694-9545-6771e700e52b-kube-api-access-rw6jq" (OuterVolumeSpecName: "kube-api-access-rw6jq") pod "448f2d01-d067-4694-9545-6771e700e52b" (UID: "448f2d01-d067-4694-9545-6771e700e52b"). InnerVolumeSpecName "kube-api-access-rw6jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.823572 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48b94453-0fa8-42b2-9093-b233661916af-kube-api-access-4zqb2" (OuterVolumeSpecName: "kube-api-access-4zqb2") pod "48b94453-0fa8-42b2-9093-b233661916af" (UID: "48b94453-0fa8-42b2-9093-b233661916af"). InnerVolumeSpecName "kube-api-access-4zqb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.892340 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zqb2\" (UniqueName: \"kubernetes.io/projected/48b94453-0fa8-42b2-9093-b233661916af-kube-api-access-4zqb2\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:09 crc kubenswrapper[4972]: I1121 10:05:09.892398 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw6jq\" (UniqueName: \"kubernetes.io/projected/448f2d01-d067-4694-9545-6771e700e52b-kube-api-access-rw6jq\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:10 crc kubenswrapper[4972]: I1121 10:05:10.057456 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-501f-account-create-g2sgs" event={"ID":"48b94453-0fa8-42b2-9093-b233661916af","Type":"ContainerDied","Data":"5e9376229aa0cc1058c575c452a9fc7f0b2d78521c77556e7409140671f8729b"} Nov 21 10:05:10 crc kubenswrapper[4972]: I1121 10:05:10.057503 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e9376229aa0cc1058c575c452a9fc7f0b2d78521c77556e7409140671f8729b" Nov 21 10:05:10 crc kubenswrapper[4972]: I1121 10:05:10.057561 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-501f-account-create-g2sgs" Nov 21 10:05:10 crc kubenswrapper[4972]: I1121 10:05:10.063922 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 21 10:05:10 crc kubenswrapper[4972]: I1121 10:05:10.068543 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:10 crc kubenswrapper[4972]: I1121 10:05:10.069787 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5330-account-create-v8qd8" event={"ID":"62ab9b95-78b8-49ca-ad65-2b63990c55a9","Type":"ContainerDied","Data":"cbfb2624d8d62a73979099826a8af5acf2cff3b4c7f083d0f66432e85b29c07f"} Nov 21 10:05:10 crc kubenswrapper[4972]: I1121 10:05:10.069822 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbfb2624d8d62a73979099826a8af5acf2cff3b4c7f083d0f66432e85b29c07f" Nov 21 10:05:10 crc kubenswrapper[4972]: I1121 10:05:10.069902 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5330-account-create-v8qd8" Nov 21 10:05:10 crc kubenswrapper[4972]: I1121 10:05:10.080447 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ac8a-account-create-nnzz2" event={"ID":"448f2d01-d067-4694-9545-6771e700e52b","Type":"ContainerDied","Data":"eb63228ca804311e8bc116ce226f0d7c71b1341aec906d9cd9ca9031277f6654"} Nov 21 10:05:10 crc kubenswrapper[4972]: I1121 10:05:10.080482 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb63228ca804311e8bc116ce226f0d7c71b1341aec906d9cd9ca9031277f6654" Nov 21 10:05:10 crc kubenswrapper[4972]: I1121 10:05:10.081128 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ac8a-account-create-nnzz2" Nov 21 10:05:10 crc kubenswrapper[4972]: I1121 10:05:10.590306 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-799fdbb85b-bfzq9" Nov 21 10:05:11 crc kubenswrapper[4972]: I1121 10:05:11.094446 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92949068-45c7-407c-bf24-7e587555ccd9","Type":"ContainerStarted","Data":"e1751226d6271243a00fd110e416c53682a11953d4636e9fb40f7263ffdc7b71"} Nov 21 10:05:12 crc kubenswrapper[4972]: I1121 10:05:12.076198 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:12 crc kubenswrapper[4972]: I1121 10:05:12.121699 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92949068-45c7-407c-bf24-7e587555ccd9","Type":"ContainerStarted","Data":"6d5a49f1bf48b0e57f04ddcad634666c84e64790e84cbfbae19d6a6eff209b7c"} Nov 21 10:05:12 crc kubenswrapper[4972]: I1121 10:05:12.137326 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 21 10:05:12 crc kubenswrapper[4972]: I1121 10:05:12.137369 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 21 10:05:12 crc kubenswrapper[4972]: I1121 10:05:12.174326 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 21 10:05:12 crc kubenswrapper[4972]: I1121 10:05:12.189221 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 21 10:05:12 crc kubenswrapper[4972]: I1121 10:05:12.927802 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:12 crc kubenswrapper[4972]: I1121 10:05:12.928160 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:12 crc kubenswrapper[4972]: I1121 10:05:12.966858 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:12 crc kubenswrapper[4972]: I1121 10:05:12.983530 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:13 crc kubenswrapper[4972]: I1121 10:05:13.143505 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92949068-45c7-407c-bf24-7e587555ccd9","Type":"ContainerStarted","Data":"de9c5b263a26322b1dbdaa18b99441db13b62e2aca4d3c90b9e2530426aa6d06"} Nov 21 10:05:13 crc kubenswrapper[4972]: I1121 10:05:13.144144 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:13 crc kubenswrapper[4972]: I1121 10:05:13.144180 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 21 10:05:13 crc kubenswrapper[4972]: I1121 10:05:13.144193 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 21 10:05:13 crc kubenswrapper[4972]: I1121 10:05:13.144205 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:14 crc kubenswrapper[4972]: I1121 10:05:14.147027 4972 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:05:14 crc kubenswrapper[4972]: I1121 10:05:14.207767 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-799fdbb85b-bfzq9"] Nov 21 10:05:14 crc kubenswrapper[4972]: I1121 10:05:14.208012 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-799fdbb85b-bfzq9" podUID="3df233db-ea36-4a96-9a2f-4f7e5be4a73c" containerName="neutron-api" containerID="cri-o://205e4e594658f925dc0b429281b5275d2b2fc6c063e62b641ff6d463318934dd" gracePeriod=30 Nov 21 10:05:14 crc kubenswrapper[4972]: I1121 10:05:14.208120 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-799fdbb85b-bfzq9" podUID="3df233db-ea36-4a96-9a2f-4f7e5be4a73c" containerName="neutron-httpd" containerID="cri-o://804834ec69c3f51d8d6ac7baf9e6f02dc3f1d86f9b43589d3e761b16c7cb0a0e" gracePeriod=30 Nov 21 10:05:14 crc kubenswrapper[4972]: I1121 10:05:14.260701 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:14 crc kubenswrapper[4972]: I1121 10:05:14.264935 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.176064 4972 generic.go:334] "Generic (PLEG): container finished" podID="3df233db-ea36-4a96-9a2f-4f7e5be4a73c" containerID="804834ec69c3f51d8d6ac7baf9e6f02dc3f1d86f9b43589d3e761b16c7cb0a0e" exitCode=0 Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.176203 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-799fdbb85b-bfzq9" event={"ID":"3df233db-ea36-4a96-9a2f-4f7e5be4a73c","Type":"ContainerDied","Data":"804834ec69c3f51d8d6ac7baf9e6f02dc3f1d86f9b43589d3e761b16c7cb0a0e"} Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.405655 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8dt78"] Nov 21 10:05:15 crc kubenswrapper[4972]: E1121 10:05:15.406299 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="448f2d01-d067-4694-9545-6771e700e52b" containerName="mariadb-account-create" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.406347 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="448f2d01-d067-4694-9545-6771e700e52b" containerName="mariadb-account-create" Nov 21 10:05:15 crc kubenswrapper[4972]: E1121 10:05:15.406382 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b94453-0fa8-42b2-9093-b233661916af" containerName="mariadb-account-create" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.406413 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b94453-0fa8-42b2-9093-b233661916af" containerName="mariadb-account-create" Nov 21 10:05:15 crc kubenswrapper[4972]: E1121 10:05:15.406435 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ab9b95-78b8-49ca-ad65-2b63990c55a9" containerName="mariadb-account-create" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.406442 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ab9b95-78b8-49ca-ad65-2b63990c55a9" containerName="mariadb-account-create" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.406690 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="48b94453-0fa8-42b2-9093-b233661916af" containerName="mariadb-account-create" Nov 21 10:05:15 crc 
kubenswrapper[4972]: I1121 10:05:15.406746 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="448f2d01-d067-4694-9545-6771e700e52b" containerName="mariadb-account-create" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.406761 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="62ab9b95-78b8-49ca-ad65-2b63990c55a9" containerName="mariadb-account-create" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.407660 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.407880 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8dt78" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.410696 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.411331 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.411373 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-h8p78" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.435303 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8dt78"] Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.543703 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4543ed6-f2db-43da-8e2e-a63720b6cf67-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8dt78\" (UID: \"b4543ed6-f2db-43da-8e2e-a63720b6cf67\") " pod="openstack/nova-cell0-conductor-db-sync-8dt78" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.543745 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4543ed6-f2db-43da-8e2e-a63720b6cf67-config-data\") pod \"nova-cell0-conductor-db-sync-8dt78\" (UID: \"b4543ed6-f2db-43da-8e2e-a63720b6cf67\") " pod="openstack/nova-cell0-conductor-db-sync-8dt78" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.543881 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rcbs\" (UniqueName: \"kubernetes.io/projected/b4543ed6-f2db-43da-8e2e-a63720b6cf67-kube-api-access-7rcbs\") pod \"nova-cell0-conductor-db-sync-8dt78\" (UID: \"b4543ed6-f2db-43da-8e2e-a63720b6cf67\") " pod="openstack/nova-cell0-conductor-db-sync-8dt78" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.543941 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4543ed6-f2db-43da-8e2e-a63720b6cf67-scripts\") pod \"nova-cell0-conductor-db-sync-8dt78\" (UID: \"b4543ed6-f2db-43da-8e2e-a63720b6cf67\") " pod="openstack/nova-cell0-conductor-db-sync-8dt78" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.645267 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rcbs\" (UniqueName: \"kubernetes.io/projected/b4543ed6-f2db-43da-8e2e-a63720b6cf67-kube-api-access-7rcbs\") pod \"nova-cell0-conductor-db-sync-8dt78\" (UID: \"b4543ed6-f2db-43da-8e2e-a63720b6cf67\") " pod="openstack/nova-cell0-conductor-db-sync-8dt78" Nov 21 
10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.645362 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4543ed6-f2db-43da-8e2e-a63720b6cf67-scripts\") pod \"nova-cell0-conductor-db-sync-8dt78\" (UID: \"b4543ed6-f2db-43da-8e2e-a63720b6cf67\") " pod="openstack/nova-cell0-conductor-db-sync-8dt78" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.645407 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4543ed6-f2db-43da-8e2e-a63720b6cf67-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8dt78\" (UID: \"b4543ed6-f2db-43da-8e2e-a63720b6cf67\") " pod="openstack/nova-cell0-conductor-db-sync-8dt78" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.645428 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4543ed6-f2db-43da-8e2e-a63720b6cf67-config-data\") pod \"nova-cell0-conductor-db-sync-8dt78\" (UID: \"b4543ed6-f2db-43da-8e2e-a63720b6cf67\") " pod="openstack/nova-cell0-conductor-db-sync-8dt78" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.652545 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4543ed6-f2db-43da-8e2e-a63720b6cf67-config-data\") pod \"nova-cell0-conductor-db-sync-8dt78\" (UID: \"b4543ed6-f2db-43da-8e2e-a63720b6cf67\") " pod="openstack/nova-cell0-conductor-db-sync-8dt78" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.657002 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4543ed6-f2db-43da-8e2e-a63720b6cf67-scripts\") pod \"nova-cell0-conductor-db-sync-8dt78\" (UID: \"b4543ed6-f2db-43da-8e2e-a63720b6cf67\") " pod="openstack/nova-cell0-conductor-db-sync-8dt78" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.674649 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4543ed6-f2db-43da-8e2e-a63720b6cf67-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8dt78\" (UID: \"b4543ed6-f2db-43da-8e2e-a63720b6cf67\") " pod="openstack/nova-cell0-conductor-db-sync-8dt78" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.708525 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rcbs\" (UniqueName: \"kubernetes.io/projected/b4543ed6-f2db-43da-8e2e-a63720b6cf67-kube-api-access-7rcbs\") pod \"nova-cell0-conductor-db-sync-8dt78\" (UID: \"b4543ed6-f2db-43da-8e2e-a63720b6cf67\") " pod="openstack/nova-cell0-conductor-db-sync-8dt78" Nov 21 10:05:15 crc kubenswrapper[4972]: I1121 10:05:15.748505 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8dt78" Nov 21 10:05:16 crc kubenswrapper[4972]: I1121 10:05:16.492552 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:16 crc kubenswrapper[4972]: I1121 10:05:16.492651 4972 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 10:05:16 crc kubenswrapper[4972]: I1121 10:05:16.532454 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:16 crc kubenswrapper[4972]: I1121 10:05:16.585223 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 21 10:05:16 crc kubenswrapper[4972]: I1121 10:05:16.585324 4972 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 10:05:16 crc kubenswrapper[4972]: I1121 10:05:16.619533 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 21 10:05:18 crc kubenswrapper[4972]: I1121 10:05:18.213164 4972 generic.go:334] "Generic (PLEG): container finished" podID="3df233db-ea36-4a96-9a2f-4f7e5be4a73c" containerID="205e4e594658f925dc0b429281b5275d2b2fc6c063e62b641ff6d463318934dd" exitCode=0 Nov 21 10:05:18 crc kubenswrapper[4972]: I1121 10:05:18.213238 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-799fdbb85b-bfzq9" event={"ID":"3df233db-ea36-4a96-9a2f-4f7e5be4a73c","Type":"ContainerDied","Data":"205e4e594658f925dc0b429281b5275d2b2fc6c063e62b641ff6d463318934dd"} Nov 21 10:05:20 crc kubenswrapper[4972]: I1121 10:05:20.240106 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"88c81504-7f14-498f-bd8d-4fa74aebf2d2","Type":"ContainerStarted","Data":"30837ba80ae724788ccc279d47025486e335f00269e93508eaa7eabb78466914"} Nov 21 10:05:20 crc kubenswrapper[4972]: I1121 10:05:20.252025 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.204535971 podStartE2EDuration="18.252008932s" podCreationTimestamp="2025-11-21 10:05:02 +0000 UTC" firstStartedPulling="2025-11-21 10:05:03.768723333 +0000 UTC m=+1448.877865831" lastFinishedPulling="2025-11-21 10:05:19.816196294 +0000 UTC m=+1464.925338792" observedRunningTime="2025-11-21 10:05:20.251470527 +0000 UTC m=+1465.360613025" watchObservedRunningTime="2025-11-21 10:05:20.252008932 +0000 UTC m=+1465.361151430" Nov 21 10:05:20 crc kubenswrapper[4972]: I1121 10:05:20.273527 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-799fdbb85b-bfzq9" Nov 21 10:05:20 crc kubenswrapper[4972]: I1121 10:05:20.334157 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8dt78"] Nov 21 10:05:20 crc kubenswrapper[4972]: W1121 10:05:20.339803 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4543ed6_f2db_43da_8e2e_a63720b6cf67.slice/crio-4765c58fc8dcb05f0a8b581f43b374a9ec0460118bcad8cc2b6235358fe004ac WatchSource:0}: Error finding container 4765c58fc8dcb05f0a8b581f43b374a9ec0460118bcad8cc2b6235358fe004ac: Status 404 returned error can't find the container with id 4765c58fc8dcb05f0a8b581f43b374a9ec0460118bcad8cc2b6235358fe004ac Nov 21 10:05:20 crc kubenswrapper[4972]: I1121 10:05:20.444916 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-ovndb-tls-certs\") pod \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\" (UID: \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\") " Nov 21 10:05:20 crc kubenswrapper[4972]: I1121 10:05:20.444983 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-httpd-config\") pod \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\" (UID: \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\") " Nov 21 10:05:20 crc kubenswrapper[4972]: I1121 10:05:20.445014 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-combined-ca-bundle\") pod \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\" (UID: \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\") " Nov 21 10:05:20 crc kubenswrapper[4972]: I1121 10:05:20.445110 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw5kg\" (UniqueName: \"kubernetes.io/projected/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-kube-api-access-qw5kg\") pod \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\" (UID: \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\") " Nov 21 10:05:20 crc kubenswrapper[4972]: I1121 10:05:20.445182 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-config\") pod \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\" (UID: \"3df233db-ea36-4a96-9a2f-4f7e5be4a73c\") " Nov 21 10:05:20 crc kubenswrapper[4972]: I1121 10:05:20.451185 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3df233db-ea36-4a96-9a2f-4f7e5be4a73c" (UID: "3df233db-ea36-4a96-9a2f-4f7e5be4a73c"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:20 crc kubenswrapper[4972]: I1121 10:05:20.452286 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-kube-api-access-qw5kg" (OuterVolumeSpecName: "kube-api-access-qw5kg") pod "3df233db-ea36-4a96-9a2f-4f7e5be4a73c" (UID: "3df233db-ea36-4a96-9a2f-4f7e5be4a73c"). InnerVolumeSpecName "kube-api-access-qw5kg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:05:20 crc kubenswrapper[4972]: I1121 10:05:20.496641 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3df233db-ea36-4a96-9a2f-4f7e5be4a73c" (UID: "3df233db-ea36-4a96-9a2f-4f7e5be4a73c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:20 crc kubenswrapper[4972]: I1121 10:05:20.504720 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-config" (OuterVolumeSpecName: "config") pod "3df233db-ea36-4a96-9a2f-4f7e5be4a73c" (UID: "3df233db-ea36-4a96-9a2f-4f7e5be4a73c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:20 crc kubenswrapper[4972]: I1121 10:05:20.525786 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3df233db-ea36-4a96-9a2f-4f7e5be4a73c" (UID: "3df233db-ea36-4a96-9a2f-4f7e5be4a73c"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:20 crc kubenswrapper[4972]: I1121 10:05:20.550416 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw5kg\" (UniqueName: \"kubernetes.io/projected/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-kube-api-access-qw5kg\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:20 crc kubenswrapper[4972]: I1121 10:05:20.550457 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:20 crc kubenswrapper[4972]: I1121 10:05:20.550470 4972 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:20 crc kubenswrapper[4972]: I1121 10:05:20.550480 4972 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:20 crc kubenswrapper[4972]: I1121 10:05:20.550490 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df233db-ea36-4a96-9a2f-4f7e5be4a73c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:21 crc kubenswrapper[4972]: I1121 10:05:21.253725 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-799fdbb85b-bfzq9" Nov 21 10:05:21 crc kubenswrapper[4972]: I1121 10:05:21.253718 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-799fdbb85b-bfzq9" event={"ID":"3df233db-ea36-4a96-9a2f-4f7e5be4a73c","Type":"ContainerDied","Data":"7f27ff66420d477a2c5a023be0af54d1b40a6f54c3a980aa9339b676c2b45419"} Nov 21 10:05:21 crc kubenswrapper[4972]: I1121 10:05:21.253885 4972 scope.go:117] "RemoveContainer" containerID="804834ec69c3f51d8d6ac7baf9e6f02dc3f1d86f9b43589d3e761b16c7cb0a0e" Nov 21 10:05:21 crc kubenswrapper[4972]: I1121 10:05:21.258783 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92949068-45c7-407c-bf24-7e587555ccd9","Type":"ContainerStarted","Data":"257c52ba0b79f8690899ab57857b6af2d9a07530570a64732dbb3957a84e0fa3"} Nov 21 10:05:21 crc kubenswrapper[4972]: I1121 10:05:21.260358 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8dt78" event={"ID":"b4543ed6-f2db-43da-8e2e-a63720b6cf67","Type":"ContainerStarted","Data":"4765c58fc8dcb05f0a8b581f43b374a9ec0460118bcad8cc2b6235358fe004ac"} Nov 21 10:05:21 crc kubenswrapper[4972]: I1121 10:05:21.281176 4972 scope.go:117] "RemoveContainer" containerID="205e4e594658f925dc0b429281b5275d2b2fc6c063e62b641ff6d463318934dd" Nov 21 10:05:21 crc kubenswrapper[4972]: I1121 10:05:21.306366 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-799fdbb85b-bfzq9"] Nov 21 10:05:21 crc kubenswrapper[4972]: I1121 10:05:21.317006 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-799fdbb85b-bfzq9"] Nov 21 10:05:21 crc kubenswrapper[4972]: I1121 10:05:21.781539 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3df233db-ea36-4a96-9a2f-4f7e5be4a73c" path="/var/lib/kubelet/pods/3df233db-ea36-4a96-9a2f-4f7e5be4a73c/volumes" Nov 21 10:05:22 crc kubenswrapper[4972]: I1121 10:05:22.278908 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92949068-45c7-407c-bf24-7e587555ccd9","Type":"ContainerStarted","Data":"c1ded6d2a50b453099b3125e4b565fe8822c6c5c0aaa06f8118e6d60b293beec"} Nov 21 10:05:22 crc kubenswrapper[4972]: I1121 10:05:22.279074 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="92949068-45c7-407c-bf24-7e587555ccd9" containerName="ceilometer-central-agent" containerID="cri-o://6d5a49f1bf48b0e57f04ddcad634666c84e64790e84cbfbae19d6a6eff209b7c" gracePeriod=30 Nov 21 10:05:22 crc kubenswrapper[4972]: I1121 10:05:22.279188 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 10:05:22 crc kubenswrapper[4972]: I1121 10:05:22.279668 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="92949068-45c7-407c-bf24-7e587555ccd9" containerName="proxy-httpd" containerID="cri-o://c1ded6d2a50b453099b3125e4b565fe8822c6c5c0aaa06f8118e6d60b293beec" gracePeriod=30 Nov 21 10:05:22 crc kubenswrapper[4972]: I1121 10:05:22.279726 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="92949068-45c7-407c-bf24-7e587555ccd9" containerName="sg-core" containerID="cri-o://257c52ba0b79f8690899ab57857b6af2d9a07530570a64732dbb3957a84e0fa3" gracePeriod=30 Nov 21 10:05:22 crc kubenswrapper[4972]: I1121 10:05:22.279790 4972 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="92949068-45c7-407c-bf24-7e587555ccd9" containerName="ceilometer-notification-agent" containerID="cri-o://de9c5b263a26322b1dbdaa18b99441db13b62e2aca4d3c90b9e2530426aa6d06" gracePeriod=30 Nov 21 10:05:22 crc kubenswrapper[4972]: I1121 10:05:22.305799 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6450385299999999 podStartE2EDuration="13.305782138s" podCreationTimestamp="2025-11-21 10:05:09 +0000 UTC" firstStartedPulling="2025-11-21 10:05:10.087275294 +0000 UTC m=+1455.196417792" lastFinishedPulling="2025-11-21 10:05:21.748018902 +0000 UTC m=+1466.857161400" observedRunningTime="2025-11-21 10:05:22.300154258 +0000 UTC m=+1467.409296756" watchObservedRunningTime="2025-11-21 10:05:22.305782138 +0000 UTC m=+1467.414924636" Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.290856 4972 generic.go:334] "Generic (PLEG): container finished" podID="92949068-45c7-407c-bf24-7e587555ccd9" containerID="c1ded6d2a50b453099b3125e4b565fe8822c6c5c0aaa06f8118e6d60b293beec" exitCode=0 Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.291254 4972 generic.go:334] "Generic (PLEG): container finished" podID="92949068-45c7-407c-bf24-7e587555ccd9" containerID="257c52ba0b79f8690899ab57857b6af2d9a07530570a64732dbb3957a84e0fa3" exitCode=2 Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.291266 4972 generic.go:334] "Generic (PLEG): container finished" podID="92949068-45c7-407c-bf24-7e587555ccd9" containerID="6d5a49f1bf48b0e57f04ddcad634666c84e64790e84cbfbae19d6a6eff209b7c" exitCode=0 Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.290865 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92949068-45c7-407c-bf24-7e587555ccd9","Type":"ContainerDied","Data":"c1ded6d2a50b453099b3125e4b565fe8822c6c5c0aaa06f8118e6d60b293beec"} Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.291328 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92949068-45c7-407c-bf24-7e587555ccd9","Type":"ContainerDied","Data":"257c52ba0b79f8690899ab57857b6af2d9a07530570a64732dbb3957a84e0fa3"} Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.291347 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92949068-45c7-407c-bf24-7e587555ccd9","Type":"ContainerDied","Data":"6d5a49f1bf48b0e57f04ddcad634666c84e64790e84cbfbae19d6a6eff209b7c"} Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.716824 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.814716 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65z5b\" (UniqueName: \"kubernetes.io/projected/92949068-45c7-407c-bf24-7e587555ccd9-kube-api-access-65z5b\") pod \"92949068-45c7-407c-bf24-7e587555ccd9\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.814777 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92949068-45c7-407c-bf24-7e587555ccd9-log-httpd\") pod \"92949068-45c7-407c-bf24-7e587555ccd9\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.815007 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-combined-ca-bundle\") pod \"92949068-45c7-407c-bf24-7e587555ccd9\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.815062 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92949068-45c7-407c-bf24-7e587555ccd9-run-httpd\") pod \"92949068-45c7-407c-bf24-7e587555ccd9\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.815084 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-sg-core-conf-yaml\") pod \"92949068-45c7-407c-bf24-7e587555ccd9\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.815196 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-scripts\") pod \"92949068-45c7-407c-bf24-7e587555ccd9\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.815217 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-config-data\") pod \"92949068-45c7-407c-bf24-7e587555ccd9\" (UID: \"92949068-45c7-407c-bf24-7e587555ccd9\") " Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.815565 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92949068-45c7-407c-bf24-7e587555ccd9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "92949068-45c7-407c-bf24-7e587555ccd9" (UID: "92949068-45c7-407c-bf24-7e587555ccd9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.815818 4972 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92949068-45c7-407c-bf24-7e587555ccd9-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.816713 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92949068-45c7-407c-bf24-7e587555ccd9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "92949068-45c7-407c-bf24-7e587555ccd9" (UID: "92949068-45c7-407c-bf24-7e587555ccd9"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.820790 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92949068-45c7-407c-bf24-7e587555ccd9-kube-api-access-65z5b" (OuterVolumeSpecName: "kube-api-access-65z5b") pod "92949068-45c7-407c-bf24-7e587555ccd9" (UID: "92949068-45c7-407c-bf24-7e587555ccd9"). InnerVolumeSpecName "kube-api-access-65z5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.830041 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-scripts" (OuterVolumeSpecName: "scripts") pod "92949068-45c7-407c-bf24-7e587555ccd9" (UID: "92949068-45c7-407c-bf24-7e587555ccd9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.849390 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "92949068-45c7-407c-bf24-7e587555ccd9" (UID: "92949068-45c7-407c-bf24-7e587555ccd9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.891401 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92949068-45c7-407c-bf24-7e587555ccd9" (UID: "92949068-45c7-407c-bf24-7e587555ccd9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.918011 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65z5b\" (UniqueName: \"kubernetes.io/projected/92949068-45c7-407c-bf24-7e587555ccd9-kube-api-access-65z5b\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.918040 4972 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92949068-45c7-407c-bf24-7e587555ccd9-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.918049 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.918058 4972 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.918068 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:23 crc kubenswrapper[4972]: I1121 10:05:23.921035 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-config-data" (OuterVolumeSpecName: "config-data") pod "92949068-45c7-407c-bf24-7e587555ccd9" (UID: "92949068-45c7-407c-bf24-7e587555ccd9"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.020220 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92949068-45c7-407c-bf24-7e587555ccd9-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.307770 4972 generic.go:334] "Generic (PLEG): container finished" podID="92949068-45c7-407c-bf24-7e587555ccd9" containerID="de9c5b263a26322b1dbdaa18b99441db13b62e2aca4d3c90b9e2530426aa6d06" exitCode=0 Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.307811 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92949068-45c7-407c-bf24-7e587555ccd9","Type":"ContainerDied","Data":"de9c5b263a26322b1dbdaa18b99441db13b62e2aca4d3c90b9e2530426aa6d06"} Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.307853 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92949068-45c7-407c-bf24-7e587555ccd9","Type":"ContainerDied","Data":"e1751226d6271243a00fd110e416c53682a11953d4636e9fb40f7263ffdc7b71"} Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.307873 4972 scope.go:117] "RemoveContainer" containerID="c1ded6d2a50b453099b3125e4b565fe8822c6c5c0aaa06f8118e6d60b293beec" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.307970 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.359775 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.398138 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.415889 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:24 crc kubenswrapper[4972]: E1121 10:05:24.422025 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92949068-45c7-407c-bf24-7e587555ccd9" containerName="sg-core" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.424441 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="92949068-45c7-407c-bf24-7e587555ccd9" containerName="sg-core" Nov 21 10:05:24 crc kubenswrapper[4972]: E1121 10:05:24.424528 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df233db-ea36-4a96-9a2f-4f7e5be4a73c" containerName="neutron-api" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.424582 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df233db-ea36-4a96-9a2f-4f7e5be4a73c" containerName="neutron-api" Nov 21 10:05:24 crc kubenswrapper[4972]: E1121 10:05:24.424647 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92949068-45c7-407c-bf24-7e587555ccd9" containerName="proxy-httpd" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.424697 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="92949068-45c7-407c-bf24-7e587555ccd9" containerName="proxy-httpd" Nov 21 10:05:24 crc kubenswrapper[4972]: E1121 10:05:24.424784 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92949068-45c7-407c-bf24-7e587555ccd9" containerName="ceilometer-central-agent" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.424868 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="92949068-45c7-407c-bf24-7e587555ccd9" 
containerName="ceilometer-central-agent" Nov 21 10:05:24 crc kubenswrapper[4972]: E1121 10:05:24.425115 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df233db-ea36-4a96-9a2f-4f7e5be4a73c" containerName="neutron-httpd" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.425174 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df233db-ea36-4a96-9a2f-4f7e5be4a73c" containerName="neutron-httpd" Nov 21 10:05:24 crc kubenswrapper[4972]: E1121 10:05:24.425235 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92949068-45c7-407c-bf24-7e587555ccd9" containerName="ceilometer-notification-agent" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.425316 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="92949068-45c7-407c-bf24-7e587555ccd9" containerName="ceilometer-notification-agent" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.425566 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="92949068-45c7-407c-bf24-7e587555ccd9" containerName="sg-core" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.425639 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df233db-ea36-4a96-9a2f-4f7e5be4a73c" containerName="neutron-httpd" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.425701 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df233db-ea36-4a96-9a2f-4f7e5be4a73c" containerName="neutron-api" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.425766 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="92949068-45c7-407c-bf24-7e587555ccd9" containerName="ceilometer-central-agent" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.425821 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="92949068-45c7-407c-bf24-7e587555ccd9" containerName="proxy-httpd" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.425897 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="92949068-45c7-407c-bf24-7e587555ccd9" containerName="ceilometer-notification-agent" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.427561 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.431800 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.431992 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.432730 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.528410 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9e929f2-d98a-4a86-a607-a93925afed51-run-httpd\") pod \"ceilometer-0\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.528687 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.528907 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-config-data\") pod \"ceilometer-0\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.529070 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9krqc\" (UniqueName: \"kubernetes.io/projected/a9e929f2-d98a-4a86-a607-a93925afed51-kube-api-access-9krqc\") pod \"ceilometer-0\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.529171 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.529307 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-scripts\") pod \"ceilometer-0\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.529444 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9e929f2-d98a-4a86-a607-a93925afed51-log-httpd\") pod \"ceilometer-0\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.631617 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9krqc\" (UniqueName: \"kubernetes.io/projected/a9e929f2-d98a-4a86-a607-a93925afed51-kube-api-access-9krqc\") pod \"ceilometer-0\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: 
I1121 10:05:24.631696 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.631807 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-scripts\") pod \"ceilometer-0\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.631895 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9e929f2-d98a-4a86-a607-a93925afed51-log-httpd\") pod \"ceilometer-0\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.632054 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9e929f2-d98a-4a86-a607-a93925afed51-run-httpd\") pod \"ceilometer-0\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.632113 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.632576 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9e929f2-d98a-4a86-a607-a93925afed51-log-httpd\") pod \"ceilometer-0\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.632807 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-config-data\") pod \"ceilometer-0\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.632910 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9e929f2-d98a-4a86-a607-a93925afed51-run-httpd\") pod \"ceilometer-0\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.645637 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.648737 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-scripts\") pod \"ceilometer-0\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.654257 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.656290 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-config-data\") pod \"ceilometer-0\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.657182 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9krqc\" (UniqueName: \"kubernetes.io/projected/a9e929f2-d98a-4a86-a607-a93925afed51-kube-api-access-9krqc\") pod \"ceilometer-0\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " pod="openstack/ceilometer-0" Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.687254 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.687493 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4ceb104d-0967-4ef1-87d7-23149492461f" containerName="glance-log" containerID="cri-o://970b2cf0e7c0d0678a5e91fb11daab6d9b02b23cb875fda14cfe0c82fc707282" gracePeriod=30 Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.687819 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4ceb104d-0967-4ef1-87d7-23149492461f" containerName="glance-httpd" containerID="cri-o://2921c78fe04ce5e118035b984bf13a134833efbd0278281daf335ac8e8cdab45" gracePeriod=30 Nov 21 10:05:24 crc kubenswrapper[4972]: I1121 10:05:24.762778 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:05:25 crc kubenswrapper[4972]: I1121 10:05:25.317268 4972 generic.go:334] "Generic (PLEG): container finished" podID="4ceb104d-0967-4ef1-87d7-23149492461f" containerID="970b2cf0e7c0d0678a5e91fb11daab6d9b02b23cb875fda14cfe0c82fc707282" exitCode=143 Nov 21 10:05:25 crc kubenswrapper[4972]: I1121 10:05:25.317441 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4ceb104d-0967-4ef1-87d7-23149492461f","Type":"ContainerDied","Data":"970b2cf0e7c0d0678a5e91fb11daab6d9b02b23cb875fda14cfe0c82fc707282"} Nov 21 10:05:25 crc kubenswrapper[4972]: I1121 10:05:25.770422 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92949068-45c7-407c-bf24-7e587555ccd9" path="/var/lib/kubelet/pods/92949068-45c7-407c-bf24-7e587555ccd9/volumes" Nov 21 10:05:26 crc kubenswrapper[4972]: I1121 10:05:26.316043 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 10:05:26 crc kubenswrapper[4972]: I1121 10:05:26.316312 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5c9dee05-0b10-4cbf-af72-70d6928f8c8e" containerName="glance-log" containerID="cri-o://605471666edc328b26f0c0dc8dde00104fb13dac6ed8ffee30de91c762be8e79" gracePeriod=30 Nov 21 10:05:26 crc kubenswrapper[4972]: I1121 10:05:26.316545 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5c9dee05-0b10-4cbf-af72-70d6928f8c8e" containerName="glance-httpd" containerID="cri-o://d0e92126c7e0ca6eb4ffd02145cba1acb9cd470a1558240b8d70c2cb9312bd30" gracePeriod=30 Nov 21 10:05:26 crc kubenswrapper[4972]: I1121 10:05:26.706555 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:27 crc kubenswrapper[4972]: I1121 10:05:27.337624 4972 generic.go:334] "Generic (PLEG): container finished" podID="5c9dee05-0b10-4cbf-af72-70d6928f8c8e" containerID="605471666edc328b26f0c0dc8dde00104fb13dac6ed8ffee30de91c762be8e79" exitCode=143 Nov 21 10:05:27 crc kubenswrapper[4972]: I1121 10:05:27.337709 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c9dee05-0b10-4cbf-af72-70d6928f8c8e","Type":"ContainerDied","Data":"605471666edc328b26f0c0dc8dde00104fb13dac6ed8ffee30de91c762be8e79"} Nov 21 10:05:28 crc kubenswrapper[4972]: I1121 10:05:28.361887 4972 generic.go:334] "Generic (PLEG): container finished" podID="4ceb104d-0967-4ef1-87d7-23149492461f" containerID="2921c78fe04ce5e118035b984bf13a134833efbd0278281daf335ac8e8cdab45" exitCode=0 Nov 21 10:05:28 crc kubenswrapper[4972]: I1121 10:05:28.361948 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4ceb104d-0967-4ef1-87d7-23149492461f","Type":"ContainerDied","Data":"2921c78fe04ce5e118035b984bf13a134833efbd0278281daf335ac8e8cdab45"} Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.178132 4972 scope.go:117] "RemoveContainer" containerID="257c52ba0b79f8690899ab57857b6af2d9a07530570a64732dbb3957a84e0fa3" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.250422 4972 scope.go:117] "RemoveContainer" containerID="de9c5b263a26322b1dbdaa18b99441db13b62e2aca4d3c90b9e2530426aa6d06" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.391712 4972 scope.go:117] "RemoveContainer" 
containerID="6d5a49f1bf48b0e57f04ddcad634666c84e64790e84cbfbae19d6a6eff209b7c" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.400207 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4ceb104d-0967-4ef1-87d7-23149492461f","Type":"ContainerDied","Data":"e0f16971b3c58d62c6551395c6b441fdea9a4e83d9c108bed44398a93b94acbf"} Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.400257 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0f16971b3c58d62c6551395c6b441fdea9a4e83d9c108bed44398a93b94acbf" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.400410 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.410342 4972 scope.go:117] "RemoveContainer" containerID="c1ded6d2a50b453099b3125e4b565fe8822c6c5c0aaa06f8118e6d60b293beec" Nov 21 10:05:29 crc kubenswrapper[4972]: E1121 10:05:29.412037 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1ded6d2a50b453099b3125e4b565fe8822c6c5c0aaa06f8118e6d60b293beec\": container with ID starting with c1ded6d2a50b453099b3125e4b565fe8822c6c5c0aaa06f8118e6d60b293beec not found: ID does not exist" containerID="c1ded6d2a50b453099b3125e4b565fe8822c6c5c0aaa06f8118e6d60b293beec" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.412091 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ded6d2a50b453099b3125e4b565fe8822c6c5c0aaa06f8118e6d60b293beec"} err="failed to get container status \"c1ded6d2a50b453099b3125e4b565fe8822c6c5c0aaa06f8118e6d60b293beec\": rpc error: code = NotFound desc = could not find container \"c1ded6d2a50b453099b3125e4b565fe8822c6c5c0aaa06f8118e6d60b293beec\": container with ID starting with c1ded6d2a50b453099b3125e4b565fe8822c6c5c0aaa06f8118e6d60b293beec not found: ID does not exist" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.412145 4972 scope.go:117] "RemoveContainer" containerID="257c52ba0b79f8690899ab57857b6af2d9a07530570a64732dbb3957a84e0fa3" Nov 21 10:05:29 crc kubenswrapper[4972]: E1121 10:05:29.412512 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"257c52ba0b79f8690899ab57857b6af2d9a07530570a64732dbb3957a84e0fa3\": container with ID starting with 257c52ba0b79f8690899ab57857b6af2d9a07530570a64732dbb3957a84e0fa3 not found: ID does not exist" containerID="257c52ba0b79f8690899ab57857b6af2d9a07530570a64732dbb3957a84e0fa3" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.412539 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"257c52ba0b79f8690899ab57857b6af2d9a07530570a64732dbb3957a84e0fa3"} err="failed to get container status \"257c52ba0b79f8690899ab57857b6af2d9a07530570a64732dbb3957a84e0fa3\": rpc error: code = NotFound desc = could not find container \"257c52ba0b79f8690899ab57857b6af2d9a07530570a64732dbb3957a84e0fa3\": container with ID starting with 257c52ba0b79f8690899ab57857b6af2d9a07530570a64732dbb3957a84e0fa3 not found: ID does not exist" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.412561 4972 scope.go:117] "RemoveContainer" containerID="de9c5b263a26322b1dbdaa18b99441db13b62e2aca4d3c90b9e2530426aa6d06" Nov 21 10:05:29 crc kubenswrapper[4972]: E1121 10:05:29.412738 4972 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de9c5b263a26322b1dbdaa18b99441db13b62e2aca4d3c90b9e2530426aa6d06\": container with ID starting with de9c5b263a26322b1dbdaa18b99441db13b62e2aca4d3c90b9e2530426aa6d06 not found: ID does not exist" containerID="de9c5b263a26322b1dbdaa18b99441db13b62e2aca4d3c90b9e2530426aa6d06" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.412758 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de9c5b263a26322b1dbdaa18b99441db13b62e2aca4d3c90b9e2530426aa6d06"} err="failed to get container status \"de9c5b263a26322b1dbdaa18b99441db13b62e2aca4d3c90b9e2530426aa6d06\": rpc error: code = NotFound desc = could not find container \"de9c5b263a26322b1dbdaa18b99441db13b62e2aca4d3c90b9e2530426aa6d06\": container with ID starting with de9c5b263a26322b1dbdaa18b99441db13b62e2aca4d3c90b9e2530426aa6d06 not found: ID does not exist" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.412770 4972 scope.go:117] "RemoveContainer" containerID="6d5a49f1bf48b0e57f04ddcad634666c84e64790e84cbfbae19d6a6eff209b7c" Nov 21 10:05:29 crc kubenswrapper[4972]: E1121 10:05:29.413270 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d5a49f1bf48b0e57f04ddcad634666c84e64790e84cbfbae19d6a6eff209b7c\": container with ID starting with 6d5a49f1bf48b0e57f04ddcad634666c84e64790e84cbfbae19d6a6eff209b7c not found: ID does not exist" containerID="6d5a49f1bf48b0e57f04ddcad634666c84e64790e84cbfbae19d6a6eff209b7c" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.413295 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d5a49f1bf48b0e57f04ddcad634666c84e64790e84cbfbae19d6a6eff209b7c"} err="failed to get container status \"6d5a49f1bf48b0e57f04ddcad634666c84e64790e84cbfbae19d6a6eff209b7c\": rpc error: code = NotFound desc = could not find container \"6d5a49f1bf48b0e57f04ddcad634666c84e64790e84cbfbae19d6a6eff209b7c\": container with ID starting with 6d5a49f1bf48b0e57f04ddcad634666c84e64790e84cbfbae19d6a6eff209b7c not found: ID does not exist" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.536563 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-config-data\") pod \"4ceb104d-0967-4ef1-87d7-23149492461f\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.536994 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ceb104d-0967-4ef1-87d7-23149492461f-httpd-run\") pod \"4ceb104d-0967-4ef1-87d7-23149492461f\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.537114 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-combined-ca-bundle\") pod \"4ceb104d-0967-4ef1-87d7-23149492461f\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.537153 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ceb104d-0967-4ef1-87d7-23149492461f-logs\") pod \"4ceb104d-0967-4ef1-87d7-23149492461f\" (UID: 
\"4ceb104d-0967-4ef1-87d7-23149492461f\") " Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.537182 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-public-tls-certs\") pod \"4ceb104d-0967-4ef1-87d7-23149492461f\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.537215 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"4ceb104d-0967-4ef1-87d7-23149492461f\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.537260 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-scripts\") pod \"4ceb104d-0967-4ef1-87d7-23149492461f\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.537295 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp74n\" (UniqueName: \"kubernetes.io/projected/4ceb104d-0967-4ef1-87d7-23149492461f-kube-api-access-gp74n\") pod \"4ceb104d-0967-4ef1-87d7-23149492461f\" (UID: \"4ceb104d-0967-4ef1-87d7-23149492461f\") " Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.538155 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ceb104d-0967-4ef1-87d7-23149492461f-logs" (OuterVolumeSpecName: "logs") pod "4ceb104d-0967-4ef1-87d7-23149492461f" (UID: "4ceb104d-0967-4ef1-87d7-23149492461f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.538617 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ceb104d-0967-4ef1-87d7-23149492461f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4ceb104d-0967-4ef1-87d7-23149492461f" (UID: "4ceb104d-0967-4ef1-87d7-23149492461f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.543783 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-scripts" (OuterVolumeSpecName: "scripts") pod "4ceb104d-0967-4ef1-87d7-23149492461f" (UID: "4ceb104d-0967-4ef1-87d7-23149492461f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.546070 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "4ceb104d-0967-4ef1-87d7-23149492461f" (UID: "4ceb104d-0967-4ef1-87d7-23149492461f"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.550254 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ceb104d-0967-4ef1-87d7-23149492461f-kube-api-access-gp74n" (OuterVolumeSpecName: "kube-api-access-gp74n") pod "4ceb104d-0967-4ef1-87d7-23149492461f" (UID: "4ceb104d-0967-4ef1-87d7-23149492461f"). InnerVolumeSpecName "kube-api-access-gp74n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.581063 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ceb104d-0967-4ef1-87d7-23149492461f" (UID: "4ceb104d-0967-4ef1-87d7-23149492461f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.599379 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4ceb104d-0967-4ef1-87d7-23149492461f" (UID: "4ceb104d-0967-4ef1-87d7-23149492461f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.603951 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-config-data" (OuterVolumeSpecName: "config-data") pod "4ceb104d-0967-4ef1-87d7-23149492461f" (UID: "4ceb104d-0967-4ef1-87d7-23149492461f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.639119 4972 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ceb104d-0967-4ef1-87d7-23149492461f-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.639159 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.639174 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ceb104d-0967-4ef1-87d7-23149492461f-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.639185 4972 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.639233 4972 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.639245 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.639256 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp74n\" (UniqueName: \"kubernetes.io/projected/4ceb104d-0967-4ef1-87d7-23149492461f-kube-api-access-gp74n\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.639268 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ceb104d-0967-4ef1-87d7-23149492461f-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.658285 4972 
operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.673522 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.741120 4972 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:29 crc kubenswrapper[4972]: I1121 10:05:29.964206 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.046385 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fccfs\" (UniqueName: \"kubernetes.io/projected/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-kube-api-access-fccfs\") pod \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.046493 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-config-data\") pod \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.046548 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-internal-tls-certs\") pod \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.046597 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-scripts\") pod \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.046638 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.046676 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-logs\") pod \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.046698 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-combined-ca-bundle\") pod \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.046777 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-httpd-run\") pod \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\" (UID: \"5c9dee05-0b10-4cbf-af72-70d6928f8c8e\") " Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 
10:05:30.047624 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5c9dee05-0b10-4cbf-af72-70d6928f8c8e" (UID: "5c9dee05-0b10-4cbf-af72-70d6928f8c8e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.047673 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-logs" (OuterVolumeSpecName: "logs") pod "5c9dee05-0b10-4cbf-af72-70d6928f8c8e" (UID: "5c9dee05-0b10-4cbf-af72-70d6928f8c8e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.053215 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "5c9dee05-0b10-4cbf-af72-70d6928f8c8e" (UID: "5c9dee05-0b10-4cbf-af72-70d6928f8c8e"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.053655 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-scripts" (OuterVolumeSpecName: "scripts") pod "5c9dee05-0b10-4cbf-af72-70d6928f8c8e" (UID: "5c9dee05-0b10-4cbf-af72-70d6928f8c8e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.057169 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-kube-api-access-fccfs" (OuterVolumeSpecName: "kube-api-access-fccfs") pod "5c9dee05-0b10-4cbf-af72-70d6928f8c8e" (UID: "5c9dee05-0b10-4cbf-af72-70d6928f8c8e"). InnerVolumeSpecName "kube-api-access-fccfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.088220 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c9dee05-0b10-4cbf-af72-70d6928f8c8e" (UID: "5c9dee05-0b10-4cbf-af72-70d6928f8c8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.124179 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-config-data" (OuterVolumeSpecName: "config-data") pod "5c9dee05-0b10-4cbf-af72-70d6928f8c8e" (UID: "5c9dee05-0b10-4cbf-af72-70d6928f8c8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.136317 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5c9dee05-0b10-4cbf-af72-70d6928f8c8e" (UID: "5c9dee05-0b10-4cbf-af72-70d6928f8c8e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.148189 4972 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.148226 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.148267 4972 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.148283 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.148298 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.148309 4972 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.148321 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fccfs\" (UniqueName: \"kubernetes.io/projected/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-kube-api-access-fccfs\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.148333 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c9dee05-0b10-4cbf-af72-70d6928f8c8e-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.171657 4972 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.250271 4972 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.412327 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8dt78" event={"ID":"b4543ed6-f2db-43da-8e2e-a63720b6cf67","Type":"ContainerStarted","Data":"7aba1f8585fdd39e8cd959cda54a96eaf1e17261fba2d57da5be6f64f842deb5"} Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.415324 4972 generic.go:334] "Generic (PLEG): container finished" podID="5c9dee05-0b10-4cbf-af72-70d6928f8c8e" containerID="d0e92126c7e0ca6eb4ffd02145cba1acb9cd470a1558240b8d70c2cb9312bd30" exitCode=0 Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.415397 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c9dee05-0b10-4cbf-af72-70d6928f8c8e","Type":"ContainerDied","Data":"d0e92126c7e0ca6eb4ffd02145cba1acb9cd470a1558240b8d70c2cb9312bd30"} Nov 21 10:05:30 crc kubenswrapper[4972]: 
I1121 10:05:30.415418 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.415439 4972 scope.go:117] "RemoveContainer" containerID="d0e92126c7e0ca6eb4ffd02145cba1acb9cd470a1558240b8d70c2cb9312bd30" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.415425 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c9dee05-0b10-4cbf-af72-70d6928f8c8e","Type":"ContainerDied","Data":"4a147f29470dcb7f7901b5814d58d7c1a1dc665518d850c6fe6b92836c92fc70"} Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.418067 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9e929f2-d98a-4a86-a607-a93925afed51","Type":"ContainerStarted","Data":"a0ee9081ff62a216e692fa89f99e9cc77b1afc8d2e57aa04b8414f1fb2007783"} Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.418243 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9e929f2-d98a-4a86-a607-a93925afed51","Type":"ContainerStarted","Data":"ac76b1e0cf57f4d4f794a029c69866cf784b0d0919856301cc063d90ce3fa399"} Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.424218 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.431942 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-8dt78" podStartSLOduration=6.51894886 podStartE2EDuration="15.431924025s" podCreationTimestamp="2025-11-21 10:05:15 +0000 UTC" firstStartedPulling="2025-11-21 10:05:20.342643144 +0000 UTC m=+1465.451785642" lastFinishedPulling="2025-11-21 10:05:29.255618309 +0000 UTC m=+1474.364760807" observedRunningTime="2025-11-21 10:05:30.431212156 +0000 UTC m=+1475.540354654" watchObservedRunningTime="2025-11-21 10:05:30.431924025 +0000 UTC m=+1475.541066523" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.449994 4972 scope.go:117] "RemoveContainer" containerID="605471666edc328b26f0c0dc8dde00104fb13dac6ed8ffee30de91c762be8e79" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.458743 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.488992 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.514455 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 10:05:30 crc kubenswrapper[4972]: E1121 10:05:30.520490 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9dee05-0b10-4cbf-af72-70d6928f8c8e" containerName="glance-httpd" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.520534 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9dee05-0b10-4cbf-af72-70d6928f8c8e" containerName="glance-httpd" Nov 21 10:05:30 crc kubenswrapper[4972]: E1121 10:05:30.520564 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9dee05-0b10-4cbf-af72-70d6928f8c8e" containerName="glance-log" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.520575 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9dee05-0b10-4cbf-af72-70d6928f8c8e" containerName="glance-log" Nov 21 10:05:30 crc kubenswrapper[4972]: 
E1121 10:05:30.520597 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ceb104d-0967-4ef1-87d7-23149492461f" containerName="glance-log" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.520606 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ceb104d-0967-4ef1-87d7-23149492461f" containerName="glance-log" Nov 21 10:05:30 crc kubenswrapper[4972]: E1121 10:05:30.520620 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ceb104d-0967-4ef1-87d7-23149492461f" containerName="glance-httpd" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.520627 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ceb104d-0967-4ef1-87d7-23149492461f" containerName="glance-httpd" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.520861 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c9dee05-0b10-4cbf-af72-70d6928f8c8e" containerName="glance-httpd" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.520883 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c9dee05-0b10-4cbf-af72-70d6928f8c8e" containerName="glance-log" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.520903 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ceb104d-0967-4ef1-87d7-23149492461f" containerName="glance-httpd" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.520919 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ceb104d-0967-4ef1-87d7-23149492461f" containerName="glance-log" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.522050 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.522158 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.526341 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.541096 4972 scope.go:117] "RemoveContainer" containerID="d0e92126c7e0ca6eb4ffd02145cba1acb9cd470a1558240b8d70c2cb9312bd30" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.542190 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.542495 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.542672 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.542754 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-56jfv" Nov 21 10:05:30 crc kubenswrapper[4972]: E1121 10:05:30.543132 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0e92126c7e0ca6eb4ffd02145cba1acb9cd470a1558240b8d70c2cb9312bd30\": container with ID starting with d0e92126c7e0ca6eb4ffd02145cba1acb9cd470a1558240b8d70c2cb9312bd30 not found: ID does not exist" containerID="d0e92126c7e0ca6eb4ffd02145cba1acb9cd470a1558240b8d70c2cb9312bd30" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.543191 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0e92126c7e0ca6eb4ffd02145cba1acb9cd470a1558240b8d70c2cb9312bd30"} err="failed to get container status \"d0e92126c7e0ca6eb4ffd02145cba1acb9cd470a1558240b8d70c2cb9312bd30\": rpc error: code = NotFound desc = could not find container \"d0e92126c7e0ca6eb4ffd02145cba1acb9cd470a1558240b8d70c2cb9312bd30\": container with ID starting with d0e92126c7e0ca6eb4ffd02145cba1acb9cd470a1558240b8d70c2cb9312bd30 not found: ID does not exist" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.543220 4972 scope.go:117] "RemoveContainer" containerID="605471666edc328b26f0c0dc8dde00104fb13dac6ed8ffee30de91c762be8e79" Nov 21 10:05:30 crc kubenswrapper[4972]: E1121 10:05:30.547158 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"605471666edc328b26f0c0dc8dde00104fb13dac6ed8ffee30de91c762be8e79\": container with ID starting with 605471666edc328b26f0c0dc8dde00104fb13dac6ed8ffee30de91c762be8e79 not found: ID does not exist" containerID="605471666edc328b26f0c0dc8dde00104fb13dac6ed8ffee30de91c762be8e79" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.547228 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"605471666edc328b26f0c0dc8dde00104fb13dac6ed8ffee30de91c762be8e79"} err="failed to get container status \"605471666edc328b26f0c0dc8dde00104fb13dac6ed8ffee30de91c762be8e79\": rpc error: code = NotFound desc = could not find container \"605471666edc328b26f0c0dc8dde00104fb13dac6ed8ffee30de91c762be8e79\": container with ID starting with 605471666edc328b26f0c0dc8dde00104fb13dac6ed8ffee30de91c762be8e79 not found: ID does not exist" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.555796 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-external-api-0"] Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.572656 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.578103 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.582403 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.582694 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.596907 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.660915 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.660982 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.661041 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.661064 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.661094 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.661124 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b2069a31-382b-4fc4-acee-cf202be1de1e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.661173 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckhsl\" (UniqueName: 
\"kubernetes.io/projected/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-kube-api-access-ckhsl\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.661191 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-logs\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.661213 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-scripts\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.661235 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltzjg\" (UniqueName: \"kubernetes.io/projected/b2069a31-382b-4fc4-acee-cf202be1de1e-kube-api-access-ltzjg\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.661267 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.661296 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.661324 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2069a31-382b-4fc4-acee-cf202be1de1e-logs\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.661349 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.661375 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-config-data\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.661399 4972 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.763748 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.763799 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.763839 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b2069a31-382b-4fc4-acee-cf202be1de1e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.763884 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckhsl\" (UniqueName: \"kubernetes.io/projected/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-kube-api-access-ckhsl\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.763901 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-logs\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.763916 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-scripts\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.763935 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltzjg\" (UniqueName: \"kubernetes.io/projected/b2069a31-382b-4fc4-acee-cf202be1de1e-kube-api-access-ltzjg\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.763956 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.763975 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.763995 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2069a31-382b-4fc4-acee-cf202be1de1e-logs\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.764013 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.764032 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-config-data\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.764054 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.764089 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.764109 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.764147 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.765295 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.766283 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2069a31-382b-4fc4-acee-cf202be1de1e-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.770735 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.772186 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b2069a31-382b-4fc4-acee-cf202be1de1e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.772637 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-logs\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.776370 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.783419 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.785471 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.787134 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.791509 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.793173 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-config-data\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc 
kubenswrapper[4972]: I1121 10:05:30.793808 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-scripts\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.795209 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.796116 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.816995 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckhsl\" (UniqueName: \"kubernetes.io/projected/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-kube-api-access-ckhsl\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.844514 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.844710 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltzjg\" (UniqueName: \"kubernetes.io/projected/b2069a31-382b-4fc4-acee-cf202be1de1e-kube-api-access-ltzjg\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.848484 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " pod="openstack/glance-default-internal-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.872369 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 10:05:30 crc kubenswrapper[4972]: I1121 10:05:30.923220 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:31 crc kubenswrapper[4972]: I1121 10:05:31.434578 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9e929f2-d98a-4a86-a607-a93925afed51","Type":"ContainerStarted","Data":"60f0dcdaf9e8685cf4fa4f911fd7ccfbdc2a60034e089eddd97b41b09cfd3204"} Nov 21 10:05:31 crc kubenswrapper[4972]: I1121 10:05:31.476278 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 10:05:31 crc kubenswrapper[4972]: I1121 10:05:31.572584 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 10:05:31 crc kubenswrapper[4972]: I1121 10:05:31.790797 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ceb104d-0967-4ef1-87d7-23149492461f" path="/var/lib/kubelet/pods/4ceb104d-0967-4ef1-87d7-23149492461f/volumes" Nov 21 10:05:31 crc kubenswrapper[4972]: I1121 10:05:31.792308 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c9dee05-0b10-4cbf-af72-70d6928f8c8e" path="/var/lib/kubelet/pods/5c9dee05-0b10-4cbf-af72-70d6928f8c8e/volumes" Nov 21 10:05:32 crc kubenswrapper[4972]: I1121 10:05:32.467629 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8","Type":"ContainerStarted","Data":"e03504354d9520f07bfa1ddb744d599ccc77aeb3feeb232af0a88b1ae4acdb9b"} Nov 21 10:05:32 crc kubenswrapper[4972]: I1121 10:05:32.467955 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8","Type":"ContainerStarted","Data":"5f319e5d8068cd5edda3eaed9c694b6d5830fa3a201ee443c4327495b63f7502"} Nov 21 10:05:32 crc kubenswrapper[4972]: I1121 10:05:32.471587 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9e929f2-d98a-4a86-a607-a93925afed51","Type":"ContainerStarted","Data":"5f0a3c1d5dd8d9afe06bdfab317f0c63c46a0b21a5336177c462fe8f12aa0af6"} Nov 21 10:05:32 crc kubenswrapper[4972]: I1121 10:05:32.474006 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b2069a31-382b-4fc4-acee-cf202be1de1e","Type":"ContainerStarted","Data":"70d10fbe1cb3eca06bf152b5ac4e871031e9408c9157ad25660a3912d1bfdcf3"} Nov 21 10:05:32 crc kubenswrapper[4972]: I1121 10:05:32.476997 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b2069a31-382b-4fc4-acee-cf202be1de1e","Type":"ContainerStarted","Data":"6e9245d79aa3c0a020fd2b36c94da9c130d9560936d8f6b5e773469dfbbd13a6"} Nov 21 10:05:33 crc kubenswrapper[4972]: I1121 10:05:33.486280 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8","Type":"ContainerStarted","Data":"176f31420e0751d42b4bb4b07ba6f49cbfd94280d6aa936d06410ffc01d008ff"} Nov 21 10:05:33 crc kubenswrapper[4972]: I1121 10:05:33.513383 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.5133615750000002 podStartE2EDuration="3.513361575s" podCreationTimestamp="2025-11-21 10:05:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:05:33.505597807 +0000 UTC 
m=+1478.614740315" watchObservedRunningTime="2025-11-21 10:05:33.513361575 +0000 UTC m=+1478.622504073" Nov 21 10:05:34 crc kubenswrapper[4972]: I1121 10:05:34.497486 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9e929f2-d98a-4a86-a607-a93925afed51","Type":"ContainerStarted","Data":"56dd9005691c473a0b51ebf67937b0ef196ef54cb8af7552f8676afe8dd98365"} Nov 21 10:05:34 crc kubenswrapper[4972]: I1121 10:05:34.497909 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 10:05:34 crc kubenswrapper[4972]: I1121 10:05:34.497639 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a9e929f2-d98a-4a86-a607-a93925afed51" containerName="proxy-httpd" containerID="cri-o://56dd9005691c473a0b51ebf67937b0ef196ef54cb8af7552f8676afe8dd98365" gracePeriod=30 Nov 21 10:05:34 crc kubenswrapper[4972]: I1121 10:05:34.497591 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a9e929f2-d98a-4a86-a607-a93925afed51" containerName="ceilometer-central-agent" containerID="cri-o://a0ee9081ff62a216e692fa89f99e9cc77b1afc8d2e57aa04b8414f1fb2007783" gracePeriod=30 Nov 21 10:05:34 crc kubenswrapper[4972]: I1121 10:05:34.497713 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a9e929f2-d98a-4a86-a607-a93925afed51" containerName="ceilometer-notification-agent" containerID="cri-o://60f0dcdaf9e8685cf4fa4f911fd7ccfbdc2a60034e089eddd97b41b09cfd3204" gracePeriod=30 Nov 21 10:05:34 crc kubenswrapper[4972]: I1121 10:05:34.497734 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a9e929f2-d98a-4a86-a607-a93925afed51" containerName="sg-core" containerID="cri-o://5f0a3c1d5dd8d9afe06bdfab317f0c63c46a0b21a5336177c462fe8f12aa0af6" gracePeriod=30 Nov 21 10:05:34 crc kubenswrapper[4972]: I1121 10:05:34.502470 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b2069a31-382b-4fc4-acee-cf202be1de1e","Type":"ContainerStarted","Data":"fa23d72a8ed2e8dc42ba23984ce0256b39eb3f3688efcf051d30829a56d4b1b1"} Nov 21 10:05:34 crc kubenswrapper[4972]: I1121 10:05:34.537473 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=7.303072876 podStartE2EDuration="10.537455483s" podCreationTimestamp="2025-11-21 10:05:24 +0000 UTC" firstStartedPulling="2025-11-21 10:05:29.67546619 +0000 UTC m=+1474.784608688" lastFinishedPulling="2025-11-21 10:05:32.909848797 +0000 UTC m=+1478.018991295" observedRunningTime="2025-11-21 10:05:34.526159741 +0000 UTC m=+1479.635302259" watchObservedRunningTime="2025-11-21 10:05:34.537455483 +0000 UTC m=+1479.646597981" Nov 21 10:05:34 crc kubenswrapper[4972]: I1121 10:05:34.556343 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.556325798 podStartE2EDuration="4.556325798s" podCreationTimestamp="2025-11-21 10:05:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:05:34.551974841 +0000 UTC m=+1479.661117359" watchObservedRunningTime="2025-11-21 10:05:34.556325798 +0000 UTC m=+1479.665468296" Nov 21 10:05:35 crc kubenswrapper[4972]: I1121 10:05:35.517989 4972 generic.go:334] "Generic 
(PLEG): container finished" podID="a9e929f2-d98a-4a86-a607-a93925afed51" containerID="56dd9005691c473a0b51ebf67937b0ef196ef54cb8af7552f8676afe8dd98365" exitCode=0 Nov 21 10:05:35 crc kubenswrapper[4972]: I1121 10:05:35.518307 4972 generic.go:334] "Generic (PLEG): container finished" podID="a9e929f2-d98a-4a86-a607-a93925afed51" containerID="5f0a3c1d5dd8d9afe06bdfab317f0c63c46a0b21a5336177c462fe8f12aa0af6" exitCode=2 Nov 21 10:05:35 crc kubenswrapper[4972]: I1121 10:05:35.518320 4972 generic.go:334] "Generic (PLEG): container finished" podID="a9e929f2-d98a-4a86-a607-a93925afed51" containerID="60f0dcdaf9e8685cf4fa4f911fd7ccfbdc2a60034e089eddd97b41b09cfd3204" exitCode=0 Nov 21 10:05:35 crc kubenswrapper[4972]: I1121 10:05:35.518059 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9e929f2-d98a-4a86-a607-a93925afed51","Type":"ContainerDied","Data":"56dd9005691c473a0b51ebf67937b0ef196ef54cb8af7552f8676afe8dd98365"} Nov 21 10:05:35 crc kubenswrapper[4972]: I1121 10:05:35.518593 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9e929f2-d98a-4a86-a607-a93925afed51","Type":"ContainerDied","Data":"5f0a3c1d5dd8d9afe06bdfab317f0c63c46a0b21a5336177c462fe8f12aa0af6"} Nov 21 10:05:35 crc kubenswrapper[4972]: I1121 10:05:35.518607 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9e929f2-d98a-4a86-a607-a93925afed51","Type":"ContainerDied","Data":"60f0dcdaf9e8685cf4fa4f911fd7ccfbdc2a60034e089eddd97b41b09cfd3204"} Nov 21 10:05:41 crc kubenswrapper[4972]: I1121 10:05:40.874219 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 21 10:05:41 crc kubenswrapper[4972]: I1121 10:05:40.874959 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 21 10:05:41 crc kubenswrapper[4972]: I1121 10:05:40.921002 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 21 10:05:41 crc kubenswrapper[4972]: I1121 10:05:40.924428 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:41 crc kubenswrapper[4972]: I1121 10:05:40.924804 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:41 crc kubenswrapper[4972]: I1121 10:05:40.955696 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 21 10:05:41 crc kubenswrapper[4972]: I1121 10:05:40.976384 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:41 crc kubenswrapper[4972]: I1121 10:05:41.012534 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:41 crc kubenswrapper[4972]: I1121 10:05:41.575176 4972 generic.go:334] "Generic (PLEG): container finished" podID="a9e929f2-d98a-4a86-a607-a93925afed51" containerID="a0ee9081ff62a216e692fa89f99e9cc77b1afc8d2e57aa04b8414f1fb2007783" exitCode=0 Nov 21 10:05:41 crc kubenswrapper[4972]: I1121 10:05:41.575247 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a9e929f2-d98a-4a86-a607-a93925afed51","Type":"ContainerDied","Data":"a0ee9081ff62a216e692fa89f99e9cc77b1afc8d2e57aa04b8414f1fb2007783"} Nov 21 10:05:41 crc kubenswrapper[4972]: I1121 10:05:41.575958 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:41 crc kubenswrapper[4972]: I1121 10:05:41.576078 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 21 10:05:41 crc kubenswrapper[4972]: I1121 10:05:41.576171 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:41 crc kubenswrapper[4972]: I1121 10:05:41.576263 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 21 10:05:42 crc kubenswrapper[4972]: I1121 10:05:42.837400 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:05:42 crc kubenswrapper[4972]: I1121 10:05:42.939908 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-config-data\") pod \"a9e929f2-d98a-4a86-a607-a93925afed51\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " Nov 21 10:05:42 crc kubenswrapper[4972]: I1121 10:05:42.939990 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9e929f2-d98a-4a86-a607-a93925afed51-log-httpd\") pod \"a9e929f2-d98a-4a86-a607-a93925afed51\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " Nov 21 10:05:42 crc kubenswrapper[4972]: I1121 10:05:42.940016 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9e929f2-d98a-4a86-a607-a93925afed51-run-httpd\") pod \"a9e929f2-d98a-4a86-a607-a93925afed51\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " Nov 21 10:05:42 crc kubenswrapper[4972]: I1121 10:05:42.940060 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-sg-core-conf-yaml\") pod \"a9e929f2-d98a-4a86-a607-a93925afed51\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " Nov 21 10:05:42 crc kubenswrapper[4972]: I1121 10:05:42.940217 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-combined-ca-bundle\") pod \"a9e929f2-d98a-4a86-a607-a93925afed51\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " Nov 21 10:05:42 crc kubenswrapper[4972]: I1121 10:05:42.940243 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-scripts\") pod \"a9e929f2-d98a-4a86-a607-a93925afed51\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " Nov 21 10:05:42 crc kubenswrapper[4972]: I1121 10:05:42.940270 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9krqc\" (UniqueName: \"kubernetes.io/projected/a9e929f2-d98a-4a86-a607-a93925afed51-kube-api-access-9krqc\") pod \"a9e929f2-d98a-4a86-a607-a93925afed51\" (UID: \"a9e929f2-d98a-4a86-a607-a93925afed51\") " Nov 21 10:05:42 crc kubenswrapper[4972]: I1121 10:05:42.940576 
4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9e929f2-d98a-4a86-a607-a93925afed51-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a9e929f2-d98a-4a86-a607-a93925afed51" (UID: "a9e929f2-d98a-4a86-a607-a93925afed51"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:05:42 crc kubenswrapper[4972]: I1121 10:05:42.941388 4972 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9e929f2-d98a-4a86-a607-a93925afed51-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:42 crc kubenswrapper[4972]: I1121 10:05:42.941394 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9e929f2-d98a-4a86-a607-a93925afed51-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a9e929f2-d98a-4a86-a607-a93925afed51" (UID: "a9e929f2-d98a-4a86-a607-a93925afed51"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:05:42 crc kubenswrapper[4972]: I1121 10:05:42.947342 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-scripts" (OuterVolumeSpecName: "scripts") pod "a9e929f2-d98a-4a86-a607-a93925afed51" (UID: "a9e929f2-d98a-4a86-a607-a93925afed51"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:42 crc kubenswrapper[4972]: I1121 10:05:42.950100 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9e929f2-d98a-4a86-a607-a93925afed51-kube-api-access-9krqc" (OuterVolumeSpecName: "kube-api-access-9krqc") pod "a9e929f2-d98a-4a86-a607-a93925afed51" (UID: "a9e929f2-d98a-4a86-a607-a93925afed51"). InnerVolumeSpecName "kube-api-access-9krqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:05:42 crc kubenswrapper[4972]: I1121 10:05:42.993676 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a9e929f2-d98a-4a86-a607-a93925afed51" (UID: "a9e929f2-d98a-4a86-a607-a93925afed51"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.030421 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9e929f2-d98a-4a86-a607-a93925afed51" (UID: "a9e929f2-d98a-4a86-a607-a93925afed51"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.043039 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.043268 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.043366 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9krqc\" (UniqueName: \"kubernetes.io/projected/a9e929f2-d98a-4a86-a607-a93925afed51-kube-api-access-9krqc\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.043450 4972 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9e929f2-d98a-4a86-a607-a93925afed51-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.043675 4972 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.055889 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-config-data" (OuterVolumeSpecName: "config-data") pod "a9e929f2-d98a-4a86-a607-a93925afed51" (UID: "a9e929f2-d98a-4a86-a607-a93925afed51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.147016 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9e929f2-d98a-4a86-a607-a93925afed51-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.598302 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.600522 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.617818 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.619405 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9e929f2-d98a-4a86-a607-a93925afed51","Type":"ContainerDied","Data":"ac76b1e0cf57f4d4f794a029c69866cf784b0d0919856301cc063d90ce3fa399"} Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.619530 4972 scope.go:117] "RemoveContainer" containerID="56dd9005691c473a0b51ebf67937b0ef196ef54cb8af7552f8676afe8dd98365" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.620975 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.621123 4972 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.624231 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.654000 4972 scope.go:117] "RemoveContainer" containerID="5f0a3c1d5dd8d9afe06bdfab317f0c63c46a0b21a5336177c462fe8f12aa0af6" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.691009 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.702411 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.705023 4972 scope.go:117] "RemoveContainer" containerID="60f0dcdaf9e8685cf4fa4f911fd7ccfbdc2a60034e089eddd97b41b09cfd3204" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.719263 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:43 crc kubenswrapper[4972]: E1121 10:05:43.719597 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9e929f2-d98a-4a86-a607-a93925afed51" containerName="proxy-httpd" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.719608 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9e929f2-d98a-4a86-a607-a93925afed51" containerName="proxy-httpd" Nov 21 10:05:43 crc kubenswrapper[4972]: E1121 10:05:43.719624 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9e929f2-d98a-4a86-a607-a93925afed51" containerName="ceilometer-notification-agent" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.719629 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9e929f2-d98a-4a86-a607-a93925afed51" containerName="ceilometer-notification-agent" Nov 21 10:05:43 crc kubenswrapper[4972]: E1121 10:05:43.719652 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9e929f2-d98a-4a86-a607-a93925afed51" containerName="sg-core" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.719658 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9e929f2-d98a-4a86-a607-a93925afed51" containerName="sg-core" Nov 21 10:05:43 crc kubenswrapper[4972]: E1121 10:05:43.719677 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9e929f2-d98a-4a86-a607-a93925afed51" containerName="ceilometer-central-agent" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.719683 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9e929f2-d98a-4a86-a607-a93925afed51" containerName="ceilometer-central-agent" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.719861 4972 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="a9e929f2-d98a-4a86-a607-a93925afed51" containerName="proxy-httpd" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.719874 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9e929f2-d98a-4a86-a607-a93925afed51" containerName="ceilometer-central-agent" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.719891 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9e929f2-d98a-4a86-a607-a93925afed51" containerName="ceilometer-notification-agent" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.719898 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9e929f2-d98a-4a86-a607-a93925afed51" containerName="sg-core" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.722124 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.725110 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.725537 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.742586 4972 scope.go:117] "RemoveContainer" containerID="a0ee9081ff62a216e692fa89f99e9cc77b1afc8d2e57aa04b8414f1fb2007783" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.776093 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9e929f2-d98a-4a86-a607-a93925afed51" path="/var/lib/kubelet/pods/a9e929f2-d98a-4a86-a607-a93925afed51/volumes" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.776801 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.860268 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ed066e3-8526-4ed0-9786-2982544e2ab9-run-httpd\") pod \"ceilometer-0\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.860311 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-scripts\") pod \"ceilometer-0\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.860428 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-config-data\") pod \"ceilometer-0\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.860461 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.860488 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ed066e3-8526-4ed0-9786-2982544e2ab9-log-httpd\") pod \"ceilometer-0\" (UID: 
\"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.860546 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.860575 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd4fp\" (UniqueName: \"kubernetes.io/projected/5ed066e3-8526-4ed0-9786-2982544e2ab9-kube-api-access-nd4fp\") pod \"ceilometer-0\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.966532 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.966602 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd4fp\" (UniqueName: \"kubernetes.io/projected/5ed066e3-8526-4ed0-9786-2982544e2ab9-kube-api-access-nd4fp\") pod \"ceilometer-0\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.966639 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ed066e3-8526-4ed0-9786-2982544e2ab9-run-httpd\") pod \"ceilometer-0\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.966668 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-scripts\") pod \"ceilometer-0\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.966762 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-config-data\") pod \"ceilometer-0\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.966802 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.966824 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ed066e3-8526-4ed0-9786-2982544e2ab9-log-httpd\") pod \"ceilometer-0\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.967322 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5ed066e3-8526-4ed0-9786-2982544e2ab9-log-httpd\") pod \"ceilometer-0\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.967763 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ed066e3-8526-4ed0-9786-2982544e2ab9-run-httpd\") pod \"ceilometer-0\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.970935 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.971481 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-scripts\") pod \"ceilometer-0\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.971560 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.973987 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-config-data\") pod \"ceilometer-0\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " pod="openstack/ceilometer-0" Nov 21 10:05:43 crc kubenswrapper[4972]: I1121 10:05:43.993638 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd4fp\" (UniqueName: \"kubernetes.io/projected/5ed066e3-8526-4ed0-9786-2982544e2ab9-kube-api-access-nd4fp\") pod \"ceilometer-0\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " pod="openstack/ceilometer-0" Nov 21 10:05:44 crc kubenswrapper[4972]: I1121 10:05:44.051090 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:05:44 crc kubenswrapper[4972]: I1121 10:05:44.511518 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:44 crc kubenswrapper[4972]: I1121 10:05:44.630471 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ed066e3-8526-4ed0-9786-2982544e2ab9","Type":"ContainerStarted","Data":"c77ad64ac68f53fc809368b35798110d7bb874b26f47372bb16894c3faec7bb9"} Nov 21 10:05:45 crc kubenswrapper[4972]: I1121 10:05:45.366271 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:45 crc kubenswrapper[4972]: I1121 10:05:45.638585 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ed066e3-8526-4ed0-9786-2982544e2ab9","Type":"ContainerStarted","Data":"d7c94a0ec2d64bdac85ed6b31b82d18571c8887351544e015a31a7312cb84e80"} Nov 21 10:05:46 crc kubenswrapper[4972]: I1121 10:05:46.649571 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ed066e3-8526-4ed0-9786-2982544e2ab9","Type":"ContainerStarted","Data":"907a74a8da1e0c67123d39c0abc7dfdbefecbc274401b4ca5d2db3f701c867f3"} Nov 21 10:05:46 crc kubenswrapper[4972]: I1121 10:05:46.649935 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ed066e3-8526-4ed0-9786-2982544e2ab9","Type":"ContainerStarted","Data":"fe81fdd868966682f89fccba99d42378bc0de72b90894870947da9e565b84dd0"} Nov 21 10:05:46 crc kubenswrapper[4972]: I1121 10:05:46.651547 4972 generic.go:334] "Generic (PLEG): container finished" podID="b4543ed6-f2db-43da-8e2e-a63720b6cf67" containerID="7aba1f8585fdd39e8cd959cda54a96eaf1e17261fba2d57da5be6f64f842deb5" exitCode=0 Nov 21 10:05:46 crc kubenswrapper[4972]: I1121 10:05:46.651609 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8dt78" event={"ID":"b4543ed6-f2db-43da-8e2e-a63720b6cf67","Type":"ContainerDied","Data":"7aba1f8585fdd39e8cd959cda54a96eaf1e17261fba2d57da5be6f64f842deb5"} Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.048081 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8dt78" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.051691 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4543ed6-f2db-43da-8e2e-a63720b6cf67-combined-ca-bundle\") pod \"b4543ed6-f2db-43da-8e2e-a63720b6cf67\" (UID: \"b4543ed6-f2db-43da-8e2e-a63720b6cf67\") " Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.051777 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rcbs\" (UniqueName: \"kubernetes.io/projected/b4543ed6-f2db-43da-8e2e-a63720b6cf67-kube-api-access-7rcbs\") pod \"b4543ed6-f2db-43da-8e2e-a63720b6cf67\" (UID: \"b4543ed6-f2db-43da-8e2e-a63720b6cf67\") " Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.052594 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4543ed6-f2db-43da-8e2e-a63720b6cf67-config-data\") pod \"b4543ed6-f2db-43da-8e2e-a63720b6cf67\" (UID: \"b4543ed6-f2db-43da-8e2e-a63720b6cf67\") " Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.052716 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4543ed6-f2db-43da-8e2e-a63720b6cf67-scripts\") pod \"b4543ed6-f2db-43da-8e2e-a63720b6cf67\" (UID: \"b4543ed6-f2db-43da-8e2e-a63720b6cf67\") " Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.059760 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4543ed6-f2db-43da-8e2e-a63720b6cf67-scripts" (OuterVolumeSpecName: "scripts") pod "b4543ed6-f2db-43da-8e2e-a63720b6cf67" (UID: "b4543ed6-f2db-43da-8e2e-a63720b6cf67"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.059766 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4543ed6-f2db-43da-8e2e-a63720b6cf67-kube-api-access-7rcbs" (OuterVolumeSpecName: "kube-api-access-7rcbs") pod "b4543ed6-f2db-43da-8e2e-a63720b6cf67" (UID: "b4543ed6-f2db-43da-8e2e-a63720b6cf67"). InnerVolumeSpecName "kube-api-access-7rcbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.124968 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4543ed6-f2db-43da-8e2e-a63720b6cf67-config-data" (OuterVolumeSpecName: "config-data") pod "b4543ed6-f2db-43da-8e2e-a63720b6cf67" (UID: "b4543ed6-f2db-43da-8e2e-a63720b6cf67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.125006 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4543ed6-f2db-43da-8e2e-a63720b6cf67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4543ed6-f2db-43da-8e2e-a63720b6cf67" (UID: "b4543ed6-f2db-43da-8e2e-a63720b6cf67"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.155345 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4543ed6-f2db-43da-8e2e-a63720b6cf67-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.155591 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4543ed6-f2db-43da-8e2e-a63720b6cf67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.155605 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rcbs\" (UniqueName: \"kubernetes.io/projected/b4543ed6-f2db-43da-8e2e-a63720b6cf67-kube-api-access-7rcbs\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.155621 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4543ed6-f2db-43da-8e2e-a63720b6cf67-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.673061 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ed066e3-8526-4ed0-9786-2982544e2ab9" containerName="ceilometer-central-agent" containerID="cri-o://d7c94a0ec2d64bdac85ed6b31b82d18571c8887351544e015a31a7312cb84e80" gracePeriod=30 Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.673397 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ed066e3-8526-4ed0-9786-2982544e2ab9","Type":"ContainerStarted","Data":"06c737d7c83c0bc7a7bdce717dd3e7a0257f299bce0776d4d99cc1591e820d5f"} Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.673435 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.673668 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ed066e3-8526-4ed0-9786-2982544e2ab9" containerName="proxy-httpd" containerID="cri-o://06c737d7c83c0bc7a7bdce717dd3e7a0257f299bce0776d4d99cc1591e820d5f" gracePeriod=30 Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.673729 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ed066e3-8526-4ed0-9786-2982544e2ab9" containerName="sg-core" containerID="cri-o://907a74a8da1e0c67123d39c0abc7dfdbefecbc274401b4ca5d2db3f701c867f3" gracePeriod=30 Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.673768 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ed066e3-8526-4ed0-9786-2982544e2ab9" containerName="ceilometer-notification-agent" containerID="cri-o://fe81fdd868966682f89fccba99d42378bc0de72b90894870947da9e565b84dd0" gracePeriod=30 Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.677421 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8dt78" event={"ID":"b4543ed6-f2db-43da-8e2e-a63720b6cf67","Type":"ContainerDied","Data":"4765c58fc8dcb05f0a8b581f43b374a9ec0460118bcad8cc2b6235358fe004ac"} Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.677460 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4765c58fc8dcb05f0a8b581f43b374a9ec0460118bcad8cc2b6235358fe004ac" Nov 21 10:05:48 crc kubenswrapper[4972]: 
I1121 10:05:48.677531 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8dt78" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.777287 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.619126389 podStartE2EDuration="5.777268326s" podCreationTimestamp="2025-11-21 10:05:43 +0000 UTC" firstStartedPulling="2025-11-21 10:05:44.514962037 +0000 UTC m=+1489.624104525" lastFinishedPulling="2025-11-21 10:05:47.673103964 +0000 UTC m=+1492.782246462" observedRunningTime="2025-11-21 10:05:48.713204735 +0000 UTC m=+1493.822347253" watchObservedRunningTime="2025-11-21 10:05:48.777268326 +0000 UTC m=+1493.886410824" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.791446 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 10:05:48 crc kubenswrapper[4972]: E1121 10:05:48.792342 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4543ed6-f2db-43da-8e2e-a63720b6cf67" containerName="nova-cell0-conductor-db-sync" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.792369 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4543ed6-f2db-43da-8e2e-a63720b6cf67" containerName="nova-cell0-conductor-db-sync" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.792619 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4543ed6-f2db-43da-8e2e-a63720b6cf67" containerName="nova-cell0-conductor-db-sync" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.793344 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.800404 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.803212 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-h8p78" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.803886 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.867018 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eb9b3f4-6710-4818-b94c-494958fe31ad-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5eb9b3f4-6710-4818-b94c-494958fe31ad\") " pod="openstack/nova-cell0-conductor-0" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.867121 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5gm7\" (UniqueName: \"kubernetes.io/projected/5eb9b3f4-6710-4818-b94c-494958fe31ad-kube-api-access-t5gm7\") pod \"nova-cell0-conductor-0\" (UID: \"5eb9b3f4-6710-4818-b94c-494958fe31ad\") " pod="openstack/nova-cell0-conductor-0" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.867234 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eb9b3f4-6710-4818-b94c-494958fe31ad-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5eb9b3f4-6710-4818-b94c-494958fe31ad\") " pod="openstack/nova-cell0-conductor-0" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.968677 4972 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t5gm7\" (UniqueName: \"kubernetes.io/projected/5eb9b3f4-6710-4818-b94c-494958fe31ad-kube-api-access-t5gm7\") pod \"nova-cell0-conductor-0\" (UID: \"5eb9b3f4-6710-4818-b94c-494958fe31ad\") " pod="openstack/nova-cell0-conductor-0" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.968801 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eb9b3f4-6710-4818-b94c-494958fe31ad-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5eb9b3f4-6710-4818-b94c-494958fe31ad\") " pod="openstack/nova-cell0-conductor-0" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.968867 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eb9b3f4-6710-4818-b94c-494958fe31ad-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5eb9b3f4-6710-4818-b94c-494958fe31ad\") " pod="openstack/nova-cell0-conductor-0" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.977872 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eb9b3f4-6710-4818-b94c-494958fe31ad-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5eb9b3f4-6710-4818-b94c-494958fe31ad\") " pod="openstack/nova-cell0-conductor-0" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.977918 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eb9b3f4-6710-4818-b94c-494958fe31ad-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5eb9b3f4-6710-4818-b94c-494958fe31ad\") " pod="openstack/nova-cell0-conductor-0" Nov 21 10:05:48 crc kubenswrapper[4972]: I1121 10:05:48.986108 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5gm7\" (UniqueName: \"kubernetes.io/projected/5eb9b3f4-6710-4818-b94c-494958fe31ad-kube-api-access-t5gm7\") pod \"nova-cell0-conductor-0\" (UID: \"5eb9b3f4-6710-4818-b94c-494958fe31ad\") " pod="openstack/nova-cell0-conductor-0" Nov 21 10:05:49 crc kubenswrapper[4972]: I1121 10:05:49.158439 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 10:05:49 crc kubenswrapper[4972]: I1121 10:05:49.660783 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 10:05:49 crc kubenswrapper[4972]: W1121 10:05:49.674290 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5eb9b3f4_6710_4818_b94c_494958fe31ad.slice/crio-06d36ccb42c26305d47f1a332d6c36c70d9d5d3384c9db9b4e9d5122cd135302 WatchSource:0}: Error finding container 06d36ccb42c26305d47f1a332d6c36c70d9d5d3384c9db9b4e9d5122cd135302: Status 404 returned error can't find the container with id 06d36ccb42c26305d47f1a332d6c36c70d9d5d3384c9db9b4e9d5122cd135302 Nov 21 10:05:49 crc kubenswrapper[4972]: I1121 10:05:49.698991 4972 generic.go:334] "Generic (PLEG): container finished" podID="5ed066e3-8526-4ed0-9786-2982544e2ab9" containerID="06c737d7c83c0bc7a7bdce717dd3e7a0257f299bce0776d4d99cc1591e820d5f" exitCode=0 Nov 21 10:05:49 crc kubenswrapper[4972]: I1121 10:05:49.699064 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ed066e3-8526-4ed0-9786-2982544e2ab9","Type":"ContainerDied","Data":"06c737d7c83c0bc7a7bdce717dd3e7a0257f299bce0776d4d99cc1591e820d5f"} Nov 21 10:05:49 crc kubenswrapper[4972]: I1121 10:05:49.699128 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ed066e3-8526-4ed0-9786-2982544e2ab9","Type":"ContainerDied","Data":"907a74a8da1e0c67123d39c0abc7dfdbefecbc274401b4ca5d2db3f701c867f3"} Nov 21 10:05:49 crc kubenswrapper[4972]: I1121 10:05:49.699099 4972 generic.go:334] "Generic (PLEG): container finished" podID="5ed066e3-8526-4ed0-9786-2982544e2ab9" containerID="907a74a8da1e0c67123d39c0abc7dfdbefecbc274401b4ca5d2db3f701c867f3" exitCode=2 Nov 21 10:05:49 crc kubenswrapper[4972]: I1121 10:05:49.699250 4972 generic.go:334] "Generic (PLEG): container finished" podID="5ed066e3-8526-4ed0-9786-2982544e2ab9" containerID="fe81fdd868966682f89fccba99d42378bc0de72b90894870947da9e565b84dd0" exitCode=0 Nov 21 10:05:49 crc kubenswrapper[4972]: I1121 10:05:49.699275 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ed066e3-8526-4ed0-9786-2982544e2ab9","Type":"ContainerDied","Data":"fe81fdd868966682f89fccba99d42378bc0de72b90894870947da9e565b84dd0"} Nov 21 10:05:49 crc kubenswrapper[4972]: I1121 10:05:49.702327 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5eb9b3f4-6710-4818-b94c-494958fe31ad","Type":"ContainerStarted","Data":"06d36ccb42c26305d47f1a332d6c36c70d9d5d3384c9db9b4e9d5122cd135302"} Nov 21 10:05:50 crc kubenswrapper[4972]: I1121 10:05:50.714309 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5eb9b3f4-6710-4818-b94c-494958fe31ad","Type":"ContainerStarted","Data":"c8508fa7978ab01e9ed41259f44d3ae46c68a33ace18516f62967a60ede2d29a"} Nov 21 10:05:50 crc kubenswrapper[4972]: I1121 10:05:50.740886 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.740870284 podStartE2EDuration="2.740870284s" podCreationTimestamp="2025-11-21 10:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:05:50.734162035 +0000 UTC m=+1495.843304553" watchObservedRunningTime="2025-11-21 
10:05:50.740870284 +0000 UTC m=+1495.850012782" Nov 21 10:05:51 crc kubenswrapper[4972]: I1121 10:05:51.734064 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.207954 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.671080 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-mf7mv"] Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.672593 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mf7mv" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.679071 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.679203 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.693459 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mf7mv"] Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.771433 4972 generic.go:334] "Generic (PLEG): container finished" podID="5ed066e3-8526-4ed0-9786-2982544e2ab9" containerID="d7c94a0ec2d64bdac85ed6b31b82d18571c8887351544e015a31a7312cb84e80" exitCode=0 Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.771492 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ed066e3-8526-4ed0-9786-2982544e2ab9","Type":"ContainerDied","Data":"d7c94a0ec2d64bdac85ed6b31b82d18571c8887351544e015a31a7312cb84e80"} Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.771519 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ed066e3-8526-4ed0-9786-2982544e2ab9","Type":"ContainerDied","Data":"c77ad64ac68f53fc809368b35798110d7bb874b26f47372bb16894c3faec7bb9"} Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.771528 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c77ad64ac68f53fc809368b35798110d7bb874b26f47372bb16894c3faec7bb9" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.801101 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84200540-581b-4ed8-b86e-0e744be73aba-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mf7mv\" (UID: \"84200540-581b-4ed8-b86e-0e744be73aba\") " pod="openstack/nova-cell0-cell-mapping-mf7mv" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.801200 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84200540-581b-4ed8-b86e-0e744be73aba-scripts\") pod \"nova-cell0-cell-mapping-mf7mv\" (UID: \"84200540-581b-4ed8-b86e-0e744be73aba\") " pod="openstack/nova-cell0-cell-mapping-mf7mv" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.801238 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmbtq\" (UniqueName: \"kubernetes.io/projected/84200540-581b-4ed8-b86e-0e744be73aba-kube-api-access-cmbtq\") pod \"nova-cell0-cell-mapping-mf7mv\" (UID: \"84200540-581b-4ed8-b86e-0e744be73aba\") " pod="openstack/nova-cell0-cell-mapping-mf7mv" Nov 
21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.801400 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84200540-581b-4ed8-b86e-0e744be73aba-config-data\") pod \"nova-cell0-cell-mapping-mf7mv\" (UID: \"84200540-581b-4ed8-b86e-0e744be73aba\") " pod="openstack/nova-cell0-cell-mapping-mf7mv" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.841626 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.896759 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 21 10:05:54 crc kubenswrapper[4972]: E1121 10:05:54.897347 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed066e3-8526-4ed0-9786-2982544e2ab9" containerName="ceilometer-central-agent" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.897376 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed066e3-8526-4ed0-9786-2982544e2ab9" containerName="ceilometer-central-agent" Nov 21 10:05:54 crc kubenswrapper[4972]: E1121 10:05:54.897417 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed066e3-8526-4ed0-9786-2982544e2ab9" containerName="ceilometer-notification-agent" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.897425 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed066e3-8526-4ed0-9786-2982544e2ab9" containerName="ceilometer-notification-agent" Nov 21 10:05:54 crc kubenswrapper[4972]: E1121 10:05:54.897438 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed066e3-8526-4ed0-9786-2982544e2ab9" containerName="proxy-httpd" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.897446 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed066e3-8526-4ed0-9786-2982544e2ab9" containerName="proxy-httpd" Nov 21 10:05:54 crc kubenswrapper[4972]: E1121 10:05:54.897478 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed066e3-8526-4ed0-9786-2982544e2ab9" containerName="sg-core" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.897486 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed066e3-8526-4ed0-9786-2982544e2ab9" containerName="sg-core" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.897673 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ed066e3-8526-4ed0-9786-2982544e2ab9" containerName="ceilometer-notification-agent" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.897700 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ed066e3-8526-4ed0-9786-2982544e2ab9" containerName="proxy-httpd" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.897714 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ed066e3-8526-4ed0-9786-2982544e2ab9" containerName="sg-core" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.897730 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ed066e3-8526-4ed0-9786-2982544e2ab9" containerName="ceilometer-central-agent" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.898895 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.902290 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.903189 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ed066e3-8526-4ed0-9786-2982544e2ab9-run-httpd\") pod \"5ed066e3-8526-4ed0-9786-2982544e2ab9\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.903898 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ed066e3-8526-4ed0-9786-2982544e2ab9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5ed066e3-8526-4ed0-9786-2982544e2ab9" (UID: "5ed066e3-8526-4ed0-9786-2982544e2ab9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.903982 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ed066e3-8526-4ed0-9786-2982544e2ab9-log-httpd\") pod \"5ed066e3-8526-4ed0-9786-2982544e2ab9\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.904104 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84200540-581b-4ed8-b86e-0e744be73aba-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mf7mv\" (UID: \"84200540-581b-4ed8-b86e-0e744be73aba\") " pod="openstack/nova-cell0-cell-mapping-mf7mv" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.904154 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84200540-581b-4ed8-b86e-0e744be73aba-scripts\") pod \"nova-cell0-cell-mapping-mf7mv\" (UID: \"84200540-581b-4ed8-b86e-0e744be73aba\") " pod="openstack/nova-cell0-cell-mapping-mf7mv" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.904194 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmbtq\" (UniqueName: \"kubernetes.io/projected/84200540-581b-4ed8-b86e-0e744be73aba-kube-api-access-cmbtq\") pod \"nova-cell0-cell-mapping-mf7mv\" (UID: \"84200540-581b-4ed8-b86e-0e744be73aba\") " pod="openstack/nova-cell0-cell-mapping-mf7mv" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.904289 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-logs\") pod \"nova-api-0\" (UID: \"e2d2bd95-7a37-49eb-8e84-6880ba5435dd\") " pod="openstack/nova-api-0" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.904333 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e2d2bd95-7a37-49eb-8e84-6880ba5435dd\") " pod="openstack/nova-api-0" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.904358 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-config-data\") pod \"nova-api-0\" (UID: 
\"e2d2bd95-7a37-49eb-8e84-6880ba5435dd\") " pod="openstack/nova-api-0" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.904382 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsnl6\" (UniqueName: \"kubernetes.io/projected/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-kube-api-access-gsnl6\") pod \"nova-api-0\" (UID: \"e2d2bd95-7a37-49eb-8e84-6880ba5435dd\") " pod="openstack/nova-api-0" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.904434 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84200540-581b-4ed8-b86e-0e744be73aba-config-data\") pod \"nova-cell0-cell-mapping-mf7mv\" (UID: \"84200540-581b-4ed8-b86e-0e744be73aba\") " pod="openstack/nova-cell0-cell-mapping-mf7mv" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.904477 4972 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ed066e3-8526-4ed0-9786-2982544e2ab9-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.910328 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84200540-581b-4ed8-b86e-0e744be73aba-config-data\") pod \"nova-cell0-cell-mapping-mf7mv\" (UID: \"84200540-581b-4ed8-b86e-0e744be73aba\") " pod="openstack/nova-cell0-cell-mapping-mf7mv" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.911656 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.912381 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84200540-581b-4ed8-b86e-0e744be73aba-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mf7mv\" (UID: \"84200540-581b-4ed8-b86e-0e744be73aba\") " pod="openstack/nova-cell0-cell-mapping-mf7mv" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.924994 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ed066e3-8526-4ed0-9786-2982544e2ab9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5ed066e3-8526-4ed0-9786-2982544e2ab9" (UID: "5ed066e3-8526-4ed0-9786-2982544e2ab9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.929288 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84200540-581b-4ed8-b86e-0e744be73aba-scripts\") pod \"nova-cell0-cell-mapping-mf7mv\" (UID: \"84200540-581b-4ed8-b86e-0e744be73aba\") " pod="openstack/nova-cell0-cell-mapping-mf7mv" Nov 21 10:05:54 crc kubenswrapper[4972]: I1121 10:05:54.932746 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmbtq\" (UniqueName: \"kubernetes.io/projected/84200540-581b-4ed8-b86e-0e744be73aba-kube-api-access-cmbtq\") pod \"nova-cell0-cell-mapping-mf7mv\" (UID: \"84200540-581b-4ed8-b86e-0e744be73aba\") " pod="openstack/nova-cell0-cell-mapping-mf7mv" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.001282 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.004728 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.006414 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-scripts\") pod \"5ed066e3-8526-4ed0-9786-2982544e2ab9\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.006518 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-combined-ca-bundle\") pod \"5ed066e3-8526-4ed0-9786-2982544e2ab9\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.006550 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-sg-core-conf-yaml\") pod \"5ed066e3-8526-4ed0-9786-2982544e2ab9\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.006785 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd4fp\" (UniqueName: \"kubernetes.io/projected/5ed066e3-8526-4ed0-9786-2982544e2ab9-kube-api-access-nd4fp\") pod \"5ed066e3-8526-4ed0-9786-2982544e2ab9\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.006861 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-config-data\") pod \"5ed066e3-8526-4ed0-9786-2982544e2ab9\" (UID: \"5ed066e3-8526-4ed0-9786-2982544e2ab9\") " Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.007282 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e2d2bd95-7a37-49eb-8e84-6880ba5435dd\") " pod="openstack/nova-api-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.007321 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-config-data\") pod \"nova-api-0\" (UID: \"e2d2bd95-7a37-49eb-8e84-6880ba5435dd\") " pod="openstack/nova-api-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.007429 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsnl6\" (UniqueName: \"kubernetes.io/projected/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-kube-api-access-gsnl6\") pod \"nova-api-0\" (UID: \"e2d2bd95-7a37-49eb-8e84-6880ba5435dd\") " pod="openstack/nova-api-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.007602 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-logs\") pod \"nova-api-0\" (UID: \"e2d2bd95-7a37-49eb-8e84-6880ba5435dd\") " pod="openstack/nova-api-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.009453 4972 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ed066e3-8526-4ed0-9786-2982544e2ab9-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.009903 4972 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-logs\") pod \"nova-api-0\" (UID: \"e2d2bd95-7a37-49eb-8e84-6880ba5435dd\") " pod="openstack/nova-api-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.011549 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ed066e3-8526-4ed0-9786-2982544e2ab9-kube-api-access-nd4fp" (OuterVolumeSpecName: "kube-api-access-nd4fp") pod "5ed066e3-8526-4ed0-9786-2982544e2ab9" (UID: "5ed066e3-8526-4ed0-9786-2982544e2ab9"). InnerVolumeSpecName "kube-api-access-nd4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.013661 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mf7mv" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.021823 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e2d2bd95-7a37-49eb-8e84-6880ba5435dd\") " pod="openstack/nova-api-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.025066 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.033463 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-config-data\") pod \"nova-api-0\" (UID: \"e2d2bd95-7a37-49eb-8e84-6880ba5435dd\") " pod="openstack/nova-api-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.048470 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-scripts" (OuterVolumeSpecName: "scripts") pod "5ed066e3-8526-4ed0-9786-2982544e2ab9" (UID: "5ed066e3-8526-4ed0-9786-2982544e2ab9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.112984 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f9ccdbf-9738-438a-92bd-f2bc90106e17-config-data\") pod \"nova-metadata-0\" (UID: \"2f9ccdbf-9738-438a-92bd-f2bc90106e17\") " pod="openstack/nova-metadata-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.113340 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9ccdbf-9738-438a-92bd-f2bc90106e17-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2f9ccdbf-9738-438a-92bd-f2bc90106e17\") " pod="openstack/nova-metadata-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.113515 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f9ccdbf-9738-438a-92bd-f2bc90106e17-logs\") pod \"nova-metadata-0\" (UID: \"2f9ccdbf-9738-438a-92bd-f2bc90106e17\") " pod="openstack/nova-metadata-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.113738 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbcvx\" (UniqueName: \"kubernetes.io/projected/2f9ccdbf-9738-438a-92bd-f2bc90106e17-kube-api-access-dbcvx\") pod \"nova-metadata-0\" (UID: \"2f9ccdbf-9738-438a-92bd-f2bc90106e17\") " pod="openstack/nova-metadata-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.114027 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd4fp\" (UniqueName: \"kubernetes.io/projected/5ed066e3-8526-4ed0-9786-2982544e2ab9-kube-api-access-nd4fp\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.114172 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.114946 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5ed066e3-8526-4ed0-9786-2982544e2ab9" (UID: "5ed066e3-8526-4ed0-9786-2982544e2ab9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.130612 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.131495 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsnl6\" (UniqueName: \"kubernetes.io/projected/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-kube-api-access-gsnl6\") pod \"nova-api-0\" (UID: \"e2d2bd95-7a37-49eb-8e84-6880ba5435dd\") " pod="openstack/nova-api-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.142692 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.144261 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.152874 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.157883 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.173464 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.175095 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.177057 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.187744 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b9c9d97f9-qf467"] Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.189412 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.199854 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9c9d97f9-qf467"] Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.209693 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.211033 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ed066e3-8526-4ed0-9786-2982544e2ab9" (UID: "5ed066e3-8526-4ed0-9786-2982544e2ab9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.215551 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c9d97f9-qf467\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.215599 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5b77\" (UniqueName: \"kubernetes.io/projected/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-kube-api-access-m5b77\") pod \"dnsmasq-dns-6b9c9d97f9-qf467\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.215625 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfe358a0-767b-495f-ab34-5cc24717f485-config-data\") pod \"nova-scheduler-0\" (UID: \"dfe358a0-767b-495f-ab34-5cc24717f485\") " pod="openstack/nova-scheduler-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.215676 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe358a0-767b-495f-ab34-5cc24717f485-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dfe358a0-767b-495f-ab34-5cc24717f485\") " pod="openstack/nova-scheduler-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.215717 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10495f4a-dd76-4f6d-b078-96fb1dae42dd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"10495f4a-dd76-4f6d-b078-96fb1dae42dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.215745 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f9ccdbf-9738-438a-92bd-f2bc90106e17-config-data\") pod \"nova-metadata-0\" (UID: \"2f9ccdbf-9738-438a-92bd-f2bc90106e17\") " pod="openstack/nova-metadata-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.215818 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-dns-svc\") pod \"dnsmasq-dns-6b9c9d97f9-qf467\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.215916 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4mps\" (UniqueName: \"kubernetes.io/projected/10495f4a-dd76-4f6d-b078-96fb1dae42dd-kube-api-access-v4mps\") pod \"nova-cell1-novncproxy-0\" (UID: \"10495f4a-dd76-4f6d-b078-96fb1dae42dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.217658 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9ccdbf-9738-438a-92bd-f2bc90106e17-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2f9ccdbf-9738-438a-92bd-f2bc90106e17\") " 
pod="openstack/nova-metadata-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.217813 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f9ccdbf-9738-438a-92bd-f2bc90106e17-logs\") pod \"nova-metadata-0\" (UID: \"2f9ccdbf-9738-438a-92bd-f2bc90106e17\") " pod="openstack/nova-metadata-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.217938 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10495f4a-dd76-4f6d-b078-96fb1dae42dd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"10495f4a-dd76-4f6d-b078-96fb1dae42dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.218024 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-config\") pod \"dnsmasq-dns-6b9c9d97f9-qf467\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.218048 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c9d97f9-qf467\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.218079 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbcvx\" (UniqueName: \"kubernetes.io/projected/2f9ccdbf-9738-438a-92bd-f2bc90106e17-kube-api-access-dbcvx\") pod \"nova-metadata-0\" (UID: \"2f9ccdbf-9738-438a-92bd-f2bc90106e17\") " pod="openstack/nova-metadata-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.218126 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jwxr\" (UniqueName: \"kubernetes.io/projected/dfe358a0-767b-495f-ab34-5cc24717f485-kube-api-access-4jwxr\") pod \"nova-scheduler-0\" (UID: \"dfe358a0-767b-495f-ab34-5cc24717f485\") " pod="openstack/nova-scheduler-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.218201 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c9d97f9-qf467\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.218304 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.218316 4972 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.218303 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f9ccdbf-9738-438a-92bd-f2bc90106e17-logs\") pod 
\"nova-metadata-0\" (UID: \"2f9ccdbf-9738-438a-92bd-f2bc90106e17\") " pod="openstack/nova-metadata-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.224406 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9ccdbf-9738-438a-92bd-f2bc90106e17-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2f9ccdbf-9738-438a-92bd-f2bc90106e17\") " pod="openstack/nova-metadata-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.227743 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f9ccdbf-9738-438a-92bd-f2bc90106e17-config-data\") pod \"nova-metadata-0\" (UID: \"2f9ccdbf-9738-438a-92bd-f2bc90106e17\") " pod="openstack/nova-metadata-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.236163 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbcvx\" (UniqueName: \"kubernetes.io/projected/2f9ccdbf-9738-438a-92bd-f2bc90106e17-kube-api-access-dbcvx\") pod \"nova-metadata-0\" (UID: \"2f9ccdbf-9738-438a-92bd-f2bc90106e17\") " pod="openstack/nova-metadata-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.242801 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-config-data" (OuterVolumeSpecName: "config-data") pod "5ed066e3-8526-4ed0-9786-2982544e2ab9" (UID: "5ed066e3-8526-4ed0-9786-2982544e2ab9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.314102 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.323421 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4mps\" (UniqueName: \"kubernetes.io/projected/10495f4a-dd76-4f6d-b078-96fb1dae42dd-kube-api-access-v4mps\") pod \"nova-cell1-novncproxy-0\" (UID: \"10495f4a-dd76-4f6d-b078-96fb1dae42dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.323522 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10495f4a-dd76-4f6d-b078-96fb1dae42dd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"10495f4a-dd76-4f6d-b078-96fb1dae42dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.323573 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-config\") pod \"dnsmasq-dns-6b9c9d97f9-qf467\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.323599 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c9d97f9-qf467\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.323639 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jwxr\" (UniqueName: 
\"kubernetes.io/projected/dfe358a0-767b-495f-ab34-5cc24717f485-kube-api-access-4jwxr\") pod \"nova-scheduler-0\" (UID: \"dfe358a0-767b-495f-ab34-5cc24717f485\") " pod="openstack/nova-scheduler-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.323672 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c9d97f9-qf467\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.323699 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c9d97f9-qf467\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.323722 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5b77\" (UniqueName: \"kubernetes.io/projected/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-kube-api-access-m5b77\") pod \"dnsmasq-dns-6b9c9d97f9-qf467\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.323745 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfe358a0-767b-495f-ab34-5cc24717f485-config-data\") pod \"nova-scheduler-0\" (UID: \"dfe358a0-767b-495f-ab34-5cc24717f485\") " pod="openstack/nova-scheduler-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.323794 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe358a0-767b-495f-ab34-5cc24717f485-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dfe358a0-767b-495f-ab34-5cc24717f485\") " pod="openstack/nova-scheduler-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.323849 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10495f4a-dd76-4f6d-b078-96fb1dae42dd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"10495f4a-dd76-4f6d-b078-96fb1dae42dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.323914 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-dns-svc\") pod \"dnsmasq-dns-6b9c9d97f9-qf467\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.324131 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ed066e3-8526-4ed0-9786-2982544e2ab9-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.326198 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-dns-svc\") pod \"dnsmasq-dns-6b9c9d97f9-qf467\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.328641 4972 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c9d97f9-qf467\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.329643 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-config\") pod \"dnsmasq-dns-6b9c9d97f9-qf467\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.332207 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c9d97f9-qf467\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.333190 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfe358a0-767b-495f-ab34-5cc24717f485-config-data\") pod \"nova-scheduler-0\" (UID: \"dfe358a0-767b-495f-ab34-5cc24717f485\") " pod="openstack/nova-scheduler-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.334934 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10495f4a-dd76-4f6d-b078-96fb1dae42dd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"10495f4a-dd76-4f6d-b078-96fb1dae42dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.335664 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c9d97f9-qf467\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.336647 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe358a0-767b-495f-ab34-5cc24717f485-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dfe358a0-767b-495f-ab34-5cc24717f485\") " pod="openstack/nova-scheduler-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.352170 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5b77\" (UniqueName: \"kubernetes.io/projected/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-kube-api-access-m5b77\") pod \"dnsmasq-dns-6b9c9d97f9-qf467\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.352170 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4mps\" (UniqueName: \"kubernetes.io/projected/10495f4a-dd76-4f6d-b078-96fb1dae42dd-kube-api-access-v4mps\") pod \"nova-cell1-novncproxy-0\" (UID: \"10495f4a-dd76-4f6d-b078-96fb1dae42dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.352224 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/10495f4a-dd76-4f6d-b078-96fb1dae42dd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"10495f4a-dd76-4f6d-b078-96fb1dae42dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.358608 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jwxr\" (UniqueName: \"kubernetes.io/projected/dfe358a0-767b-495f-ab34-5cc24717f485-kube-api-access-4jwxr\") pod \"nova-scheduler-0\" (UID: \"dfe358a0-767b-495f-ab34-5cc24717f485\") " pod="openstack/nova-scheduler-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.408241 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.483156 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.514403 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.519349 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.647729 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mf7mv"] Nov 21 10:05:55 crc kubenswrapper[4972]: W1121 10:05:55.662170 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84200540_581b_4ed8_b86e_0e744be73aba.slice/crio-c623790f57c42f780d413d4c635ce354caa31064f0ae0da332b78edb8c7c0450 WatchSource:0}: Error finding container c623790f57c42f780d413d4c635ce354caa31064f0ae0da332b78edb8c7c0450: Status 404 returned error can't find the container with id c623790f57c42f780d413d4c635ce354caa31064f0ae0da332b78edb8c7c0450 Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.790884 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-njh52"] Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.792126 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-njh52"] Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.792200 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-njh52" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.796717 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.796898 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.832176 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.832189 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mf7mv" event={"ID":"84200540-581b-4ed8-b86e-0e744be73aba","Type":"ContainerStarted","Data":"c623790f57c42f780d413d4c635ce354caa31064f0ae0da332b78edb8c7c0450"} Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.838839 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-njh52\" (UID: \"12c0ee36-eaf7-4101-80d5-6dfca43ebde7\") " pod="openstack/nova-cell1-conductor-db-sync-njh52" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.838880 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-scripts\") pod \"nova-cell1-conductor-db-sync-njh52\" (UID: \"12c0ee36-eaf7-4101-80d5-6dfca43ebde7\") " pod="openstack/nova-cell1-conductor-db-sync-njh52" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.838978 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-config-data\") pod \"nova-cell1-conductor-db-sync-njh52\" (UID: \"12c0ee36-eaf7-4101-80d5-6dfca43ebde7\") " pod="openstack/nova-cell1-conductor-db-sync-njh52" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.839018 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvd2r\" (UniqueName: \"kubernetes.io/projected/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-kube-api-access-nvd2r\") pod \"nova-cell1-conductor-db-sync-njh52\" (UID: \"12c0ee36-eaf7-4101-80d5-6dfca43ebde7\") " pod="openstack/nova-cell1-conductor-db-sync-njh52" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.855332 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.881796 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.916500 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.935732 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.964156 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.968322 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.968545 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.972119 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-config-data\") pod \"nova-cell1-conductor-db-sync-njh52\" (UID: \"12c0ee36-eaf7-4101-80d5-6dfca43ebde7\") " pod="openstack/nova-cell1-conductor-db-sync-njh52" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.972718 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvd2r\" (UniqueName: \"kubernetes.io/projected/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-kube-api-access-nvd2r\") pod \"nova-cell1-conductor-db-sync-njh52\" (UID: \"12c0ee36-eaf7-4101-80d5-6dfca43ebde7\") " pod="openstack/nova-cell1-conductor-db-sync-njh52" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.972930 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-njh52\" (UID: \"12c0ee36-eaf7-4101-80d5-6dfca43ebde7\") " pod="openstack/nova-cell1-conductor-db-sync-njh52" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.973002 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-scripts\") pod \"nova-cell1-conductor-db-sync-njh52\" (UID: \"12c0ee36-eaf7-4101-80d5-6dfca43ebde7\") " pod="openstack/nova-cell1-conductor-db-sync-njh52" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.973151 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.989178 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-njh52\" (UID: \"12c0ee36-eaf7-4101-80d5-6dfca43ebde7\") " pod="openstack/nova-cell1-conductor-db-sync-njh52" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.990100 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-config-data\") pod \"nova-cell1-conductor-db-sync-njh52\" (UID: \"12c0ee36-eaf7-4101-80d5-6dfca43ebde7\") " pod="openstack/nova-cell1-conductor-db-sync-njh52" Nov 21 10:05:55 crc kubenswrapper[4972]: I1121 10:05:55.994221 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvd2r\" (UniqueName: \"kubernetes.io/projected/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-kube-api-access-nvd2r\") pod \"nova-cell1-conductor-db-sync-njh52\" (UID: \"12c0ee36-eaf7-4101-80d5-6dfca43ebde7\") " pod="openstack/nova-cell1-conductor-db-sync-njh52" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.002335 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-scripts\") pod 
\"nova-cell1-conductor-db-sync-njh52\" (UID: \"12c0ee36-eaf7-4101-80d5-6dfca43ebde7\") " pod="openstack/nova-cell1-conductor-db-sync-njh52" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.075191 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7f18e62-c72d-4a5c-8738-975c81f1d724-log-httpd\") pod \"ceilometer-0\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.075369 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7f18e62-c72d-4a5c-8738-975c81f1d724-run-httpd\") pod \"ceilometer-0\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.075560 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4gqh\" (UniqueName: \"kubernetes.io/projected/d7f18e62-c72d-4a5c-8738-975c81f1d724-kube-api-access-h4gqh\") pod \"ceilometer-0\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.075810 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-config-data\") pod \"ceilometer-0\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.075963 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-scripts\") pod \"ceilometer-0\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.076086 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.076165 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.080224 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.155408 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-njh52" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.181168 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7f18e62-c72d-4a5c-8738-975c81f1d724-run-httpd\") pod \"ceilometer-0\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.181635 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7f18e62-c72d-4a5c-8738-975c81f1d724-run-httpd\") pod \"ceilometer-0\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.181869 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4gqh\" (UniqueName: \"kubernetes.io/projected/d7f18e62-c72d-4a5c-8738-975c81f1d724-kube-api-access-h4gqh\") pod \"ceilometer-0\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.182158 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-config-data\") pod \"ceilometer-0\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.182192 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-scripts\") pod \"ceilometer-0\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.182261 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.182730 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.182812 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7f18e62-c72d-4a5c-8738-975c81f1d724-log-httpd\") pod \"ceilometer-0\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.183203 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7f18e62-c72d-4a5c-8738-975c81f1d724-log-httpd\") pod \"ceilometer-0\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.188488 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " 
pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.190065 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-scripts\") pod \"ceilometer-0\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.195089 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.196676 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-config-data\") pod \"ceilometer-0\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.199869 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4gqh\" (UniqueName: \"kubernetes.io/projected/d7f18e62-c72d-4a5c-8738-975c81f1d724-kube-api-access-h4gqh\") pod \"ceilometer-0\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.203980 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9c9d97f9-qf467"] Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.216521 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.313982 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.346267 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:05:56 crc kubenswrapper[4972]: W1121 10:05:56.379585 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f9ccdbf_9738_438a_92bd_f2bc90106e17.slice/crio-61010fd351a39ba94c7195a461af0a0081c82695df33fd1a74e67897000dd68a WatchSource:0}: Error finding container 61010fd351a39ba94c7195a461af0a0081c82695df33fd1a74e67897000dd68a: Status 404 returned error can't find the container with id 61010fd351a39ba94c7195a461af0a0081c82695df33fd1a74e67897000dd68a Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.449085 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-njh52"] Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.843603 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.859241 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-njh52" event={"ID":"12c0ee36-eaf7-4101-80d5-6dfca43ebde7","Type":"ContainerStarted","Data":"815f529eea17e9f7242ec1816284b3062a0b5d36440a8c09059c8536e6dd206a"} Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.859313 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-njh52" event={"ID":"12c0ee36-eaf7-4101-80d5-6dfca43ebde7","Type":"ContainerStarted","Data":"a39fd5dbce115020aaeca30b4903db910d3f1128ac6e6f776a7d1359cb206889"} Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.867106 4972 generic.go:334] "Generic (PLEG): container finished" podID="28a426a5-7fdf-4c25-b5ce-56f1dc2b3596" containerID="38fd9da6f8719034bf40709cf28aa6814f879a28702bd232b8381e45b82cda98" exitCode=0 Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.869311 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" event={"ID":"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596","Type":"ContainerDied","Data":"38fd9da6f8719034bf40709cf28aa6814f879a28702bd232b8381e45b82cda98"} Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.869366 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" event={"ID":"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596","Type":"ContainerStarted","Data":"1df9f39b586a847f0a07d08fc56ebd51878cd96c57dab61ee9b21dd6b8525bf8"} Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.884561 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dfe358a0-767b-495f-ab34-5cc24717f485","Type":"ContainerStarted","Data":"1e162527f118c219cc6c5d33341c369d2d7272bdd0b33fd69985d30b0f539c14"} Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.905452 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"10495f4a-dd76-4f6d-b078-96fb1dae42dd","Type":"ContainerStarted","Data":"7f834da6ba3499bde07021140ebfddb00a74be0912564741765de2f04930bea4"} Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.913428 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-njh52" podStartSLOduration=1.9134033320000001 podStartE2EDuration="1.913403332s" podCreationTimestamp="2025-11-21 10:05:55 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:05:56.876600749 +0000 UTC m=+1501.985743257" watchObservedRunningTime="2025-11-21 10:05:56.913403332 +0000 UTC m=+1502.022545840" Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.934584 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mf7mv" event={"ID":"84200540-581b-4ed8-b86e-0e744be73aba","Type":"ContainerStarted","Data":"dec471ad525a075f9c99d2422dca58d35515dfaf88d497c02e47ce89e9a10e48"} Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.939684 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e2d2bd95-7a37-49eb-8e84-6880ba5435dd","Type":"ContainerStarted","Data":"7b0cc802e1e4fa638df8d6de5b3b0ea44b9de0a55de268e912b07a2c35e8e639"} Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.949822 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f9ccdbf-9738-438a-92bd-f2bc90106e17","Type":"ContainerStarted","Data":"61010fd351a39ba94c7195a461af0a0081c82695df33fd1a74e67897000dd68a"} Nov 21 10:05:56 crc kubenswrapper[4972]: I1121 10:05:56.959472 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-mf7mv" podStartSLOduration=2.959449412 podStartE2EDuration="2.959449412s" podCreationTimestamp="2025-11-21 10:05:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:05:56.95186207 +0000 UTC m=+1502.061004568" watchObservedRunningTime="2025-11-21 10:05:56.959449412 +0000 UTC m=+1502.068591920" Nov 21 10:05:57 crc kubenswrapper[4972]: I1121 10:05:57.780556 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ed066e3-8526-4ed0-9786-2982544e2ab9" path="/var/lib/kubelet/pods/5ed066e3-8526-4ed0-9786-2982544e2ab9/volumes" Nov 21 10:05:57 crc kubenswrapper[4972]: I1121 10:05:57.963506 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7f18e62-c72d-4a5c-8738-975c81f1d724","Type":"ContainerStarted","Data":"d66b790f40180c116e3389eff0767e735ba2f0dd069e745d19d3eaef4eab5504"} Nov 21 10:05:57 crc kubenswrapper[4972]: I1121 10:05:57.969558 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" event={"ID":"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596","Type":"ContainerStarted","Data":"04a78903a9a5bfcdec5ad896f3ab22d3e73e9e4c94819ca7b0021d7057741c4d"} Nov 21 10:05:57 crc kubenswrapper[4972]: I1121 10:05:57.969984 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:05:57 crc kubenswrapper[4972]: I1121 10:05:57.989525 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" podStartSLOduration=2.9895095449999998 podStartE2EDuration="2.989509545s" podCreationTimestamp="2025-11-21 10:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:05:57.989125685 +0000 UTC m=+1503.098268203" watchObservedRunningTime="2025-11-21 10:05:57.989509545 +0000 UTC m=+1503.098652043" Nov 21 10:05:58 crc kubenswrapper[4972]: I1121 10:05:58.293970 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 10:05:58 crc kubenswrapper[4972]: I1121 
10:05:58.317416 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:05:59 crc kubenswrapper[4972]: I1121 10:05:59.988645 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7f18e62-c72d-4a5c-8738-975c81f1d724","Type":"ContainerStarted","Data":"8159b1610bcf54d00c2b259306729ba318b19f6013640a3364c2b54804b2c943"} Nov 21 10:05:59 crc kubenswrapper[4972]: I1121 10:05:59.991482 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e2d2bd95-7a37-49eb-8e84-6880ba5435dd","Type":"ContainerStarted","Data":"070e64229e7bd6dfdcbc0014e64323fe83f14bb9648e54f891e4db1708c8b750"} Nov 21 10:05:59 crc kubenswrapper[4972]: I1121 10:05:59.993682 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f9ccdbf-9738-438a-92bd-f2bc90106e17","Type":"ContainerStarted","Data":"9fed71f178e32d22816a001ef9d865e0d3558da34043e31a750000792d4c2348"} Nov 21 10:05:59 crc kubenswrapper[4972]: I1121 10:05:59.996121 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dfe358a0-767b-495f-ab34-5cc24717f485","Type":"ContainerStarted","Data":"c9fce9ad4441a22332867cad183a6e69953f8f3904137a0c672b952dc01950c3"} Nov 21 10:05:59 crc kubenswrapper[4972]: I1121 10:05:59.998517 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"10495f4a-dd76-4f6d-b078-96fb1dae42dd","Type":"ContainerStarted","Data":"c60b2c13bcf0e6d2ad9127ed62e64813d1e6a8bfa8cfc5f265d4cfb21ca4e9e5"} Nov 21 10:05:59 crc kubenswrapper[4972]: I1121 10:05:59.998610 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="10495f4a-dd76-4f6d-b078-96fb1dae42dd" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://c60b2c13bcf0e6d2ad9127ed62e64813d1e6a8bfa8cfc5f265d4cfb21ca4e9e5" gracePeriod=30 Nov 21 10:06:00 crc kubenswrapper[4972]: I1121 10:06:00.018417 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.7405229150000001 podStartE2EDuration="5.018401387s" podCreationTimestamp="2025-11-21 10:05:55 +0000 UTC" firstStartedPulling="2025-11-21 10:05:56.231677993 +0000 UTC m=+1501.340820491" lastFinishedPulling="2025-11-21 10:05:59.509556465 +0000 UTC m=+1504.618698963" observedRunningTime="2025-11-21 10:06:00.013498166 +0000 UTC m=+1505.122640684" watchObservedRunningTime="2025-11-21 10:06:00.018401387 +0000 UTC m=+1505.127543885" Nov 21 10:06:00 crc kubenswrapper[4972]: I1121 10:06:00.036492 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.585289438 podStartE2EDuration="5.036471559s" podCreationTimestamp="2025-11-21 10:05:55 +0000 UTC" firstStartedPulling="2025-11-21 10:05:56.075777669 +0000 UTC m=+1501.184920167" lastFinishedPulling="2025-11-21 10:05:59.52695977 +0000 UTC m=+1504.636102288" observedRunningTime="2025-11-21 10:06:00.026710879 +0000 UTC m=+1505.135853387" watchObservedRunningTime="2025-11-21 10:06:00.036471559 +0000 UTC m=+1505.145614077" Nov 21 10:06:00 crc kubenswrapper[4972]: I1121 10:06:00.483453 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:00 crc kubenswrapper[4972]: I1121 10:06:00.515769 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-scheduler-0" Nov 21 10:06:01 crc kubenswrapper[4972]: I1121 10:06:01.013397 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7f18e62-c72d-4a5c-8738-975c81f1d724","Type":"ContainerStarted","Data":"044ae04497fc1c9c393d8cc2813cb71c8522e939bc38448b64217eb20db3cdc6"} Nov 21 10:06:01 crc kubenswrapper[4972]: I1121 10:06:01.016493 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e2d2bd95-7a37-49eb-8e84-6880ba5435dd","Type":"ContainerStarted","Data":"558c53e3ae81e6d24c72f7d0c39c571ce4638e181da0f7eb83c40d842ce2b0fc"} Nov 21 10:06:01 crc kubenswrapper[4972]: I1121 10:06:01.018353 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f9ccdbf-9738-438a-92bd-f2bc90106e17","Type":"ContainerStarted","Data":"3671fcbe73e0c17924d8a3a7b65f0f7b61effec0bbe9fdf9328b4ab10840a733"} Nov 21 10:06:01 crc kubenswrapper[4972]: I1121 10:06:01.018473 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2f9ccdbf-9738-438a-92bd-f2bc90106e17" containerName="nova-metadata-log" containerID="cri-o://9fed71f178e32d22816a001ef9d865e0d3558da34043e31a750000792d4c2348" gracePeriod=30 Nov 21 10:06:01 crc kubenswrapper[4972]: I1121 10:06:01.018592 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2f9ccdbf-9738-438a-92bd-f2bc90106e17" containerName="nova-metadata-metadata" containerID="cri-o://3671fcbe73e0c17924d8a3a7b65f0f7b61effec0bbe9fdf9328b4ab10840a733" gracePeriod=30 Nov 21 10:06:01 crc kubenswrapper[4972]: I1121 10:06:01.054567 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.363549125 podStartE2EDuration="7.054545972s" podCreationTimestamp="2025-11-21 10:05:54 +0000 UTC" firstStartedPulling="2025-11-21 10:05:55.844481891 +0000 UTC m=+1500.953624389" lastFinishedPulling="2025-11-21 10:05:59.535478728 +0000 UTC m=+1504.644621236" observedRunningTime="2025-11-21 10:06:01.038115263 +0000 UTC m=+1506.147257771" watchObservedRunningTime="2025-11-21 10:06:01.054545972 +0000 UTC m=+1506.163688480" Nov 21 10:06:01 crc kubenswrapper[4972]: I1121 10:06:01.062767 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.944818281 podStartE2EDuration="7.062747971s" podCreationTimestamp="2025-11-21 10:05:54 +0000 UTC" firstStartedPulling="2025-11-21 10:05:56.392765676 +0000 UTC m=+1501.501908174" lastFinishedPulling="2025-11-21 10:05:59.510695366 +0000 UTC m=+1504.619837864" observedRunningTime="2025-11-21 10:06:01.054717467 +0000 UTC m=+1506.163859975" watchObservedRunningTime="2025-11-21 10:06:01.062747971 +0000 UTC m=+1506.171890479" Nov 21 10:06:01 crc kubenswrapper[4972]: I1121 10:06:01.656950 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 10:06:01 crc kubenswrapper[4972]: I1121 10:06:01.821488 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbcvx\" (UniqueName: \"kubernetes.io/projected/2f9ccdbf-9738-438a-92bd-f2bc90106e17-kube-api-access-dbcvx\") pod \"2f9ccdbf-9738-438a-92bd-f2bc90106e17\" (UID: \"2f9ccdbf-9738-438a-92bd-f2bc90106e17\") " Nov 21 10:06:01 crc kubenswrapper[4972]: I1121 10:06:01.822232 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f9ccdbf-9738-438a-92bd-f2bc90106e17-config-data\") pod \"2f9ccdbf-9738-438a-92bd-f2bc90106e17\" (UID: \"2f9ccdbf-9738-438a-92bd-f2bc90106e17\") " Nov 21 10:06:01 crc kubenswrapper[4972]: I1121 10:06:01.822426 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f9ccdbf-9738-438a-92bd-f2bc90106e17-logs\") pod \"2f9ccdbf-9738-438a-92bd-f2bc90106e17\" (UID: \"2f9ccdbf-9738-438a-92bd-f2bc90106e17\") " Nov 21 10:06:01 crc kubenswrapper[4972]: I1121 10:06:01.822682 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9ccdbf-9738-438a-92bd-f2bc90106e17-combined-ca-bundle\") pod \"2f9ccdbf-9738-438a-92bd-f2bc90106e17\" (UID: \"2f9ccdbf-9738-438a-92bd-f2bc90106e17\") " Nov 21 10:06:01 crc kubenswrapper[4972]: I1121 10:06:01.822793 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f9ccdbf-9738-438a-92bd-f2bc90106e17-logs" (OuterVolumeSpecName: "logs") pod "2f9ccdbf-9738-438a-92bd-f2bc90106e17" (UID: "2f9ccdbf-9738-438a-92bd-f2bc90106e17"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:06:01 crc kubenswrapper[4972]: I1121 10:06:01.823429 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f9ccdbf-9738-438a-92bd-f2bc90106e17-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:01 crc kubenswrapper[4972]: I1121 10:06:01.838068 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f9ccdbf-9738-438a-92bd-f2bc90106e17-kube-api-access-dbcvx" (OuterVolumeSpecName: "kube-api-access-dbcvx") pod "2f9ccdbf-9738-438a-92bd-f2bc90106e17" (UID: "2f9ccdbf-9738-438a-92bd-f2bc90106e17"). InnerVolumeSpecName "kube-api-access-dbcvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:06:01 crc kubenswrapper[4972]: I1121 10:06:01.854749 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f9ccdbf-9738-438a-92bd-f2bc90106e17-config-data" (OuterVolumeSpecName: "config-data") pod "2f9ccdbf-9738-438a-92bd-f2bc90106e17" (UID: "2f9ccdbf-9738-438a-92bd-f2bc90106e17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:01 crc kubenswrapper[4972]: I1121 10:06:01.862916 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f9ccdbf-9738-438a-92bd-f2bc90106e17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f9ccdbf-9738-438a-92bd-f2bc90106e17" (UID: "2f9ccdbf-9738-438a-92bd-f2bc90106e17"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:01 crc kubenswrapper[4972]: I1121 10:06:01.925111 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbcvx\" (UniqueName: \"kubernetes.io/projected/2f9ccdbf-9738-438a-92bd-f2bc90106e17-kube-api-access-dbcvx\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:01 crc kubenswrapper[4972]: I1121 10:06:01.925154 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f9ccdbf-9738-438a-92bd-f2bc90106e17-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:01 crc kubenswrapper[4972]: I1121 10:06:01.925169 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9ccdbf-9738-438a-92bd-f2bc90106e17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.031947 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7f18e62-c72d-4a5c-8738-975c81f1d724","Type":"ContainerStarted","Data":"e7268272f40c6f2753a46444f0106ac9251e513804f7234f4eace92b311fe9ee"} Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.034805 4972 generic.go:334] "Generic (PLEG): container finished" podID="2f9ccdbf-9738-438a-92bd-f2bc90106e17" containerID="3671fcbe73e0c17924d8a3a7b65f0f7b61effec0bbe9fdf9328b4ab10840a733" exitCode=0 Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.034864 4972 generic.go:334] "Generic (PLEG): container finished" podID="2f9ccdbf-9738-438a-92bd-f2bc90106e17" containerID="9fed71f178e32d22816a001ef9d865e0d3558da34043e31a750000792d4c2348" exitCode=143 Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.036606 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.045129 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f9ccdbf-9738-438a-92bd-f2bc90106e17","Type":"ContainerDied","Data":"3671fcbe73e0c17924d8a3a7b65f0f7b61effec0bbe9fdf9328b4ab10840a733"} Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.045208 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f9ccdbf-9738-438a-92bd-f2bc90106e17","Type":"ContainerDied","Data":"9fed71f178e32d22816a001ef9d865e0d3558da34043e31a750000792d4c2348"} Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.045228 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f9ccdbf-9738-438a-92bd-f2bc90106e17","Type":"ContainerDied","Data":"61010fd351a39ba94c7195a461af0a0081c82695df33fd1a74e67897000dd68a"} Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.045253 4972 scope.go:117] "RemoveContainer" containerID="3671fcbe73e0c17924d8a3a7b65f0f7b61effec0bbe9fdf9328b4ab10840a733" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.084667 4972 scope.go:117] "RemoveContainer" containerID="9fed71f178e32d22816a001ef9d865e0d3558da34043e31a750000792d4c2348" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.092716 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.101943 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.112772 4972 scope.go:117] "RemoveContainer" containerID="3671fcbe73e0c17924d8a3a7b65f0f7b61effec0bbe9fdf9328b4ab10840a733" Nov 21 10:06:02 crc kubenswrapper[4972]: E1121 10:06:02.113394 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3671fcbe73e0c17924d8a3a7b65f0f7b61effec0bbe9fdf9328b4ab10840a733\": container with ID starting with 3671fcbe73e0c17924d8a3a7b65f0f7b61effec0bbe9fdf9328b4ab10840a733 not found: ID does not exist" containerID="3671fcbe73e0c17924d8a3a7b65f0f7b61effec0bbe9fdf9328b4ab10840a733" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.113429 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3671fcbe73e0c17924d8a3a7b65f0f7b61effec0bbe9fdf9328b4ab10840a733"} err="failed to get container status \"3671fcbe73e0c17924d8a3a7b65f0f7b61effec0bbe9fdf9328b4ab10840a733\": rpc error: code = NotFound desc = could not find container \"3671fcbe73e0c17924d8a3a7b65f0f7b61effec0bbe9fdf9328b4ab10840a733\": container with ID starting with 3671fcbe73e0c17924d8a3a7b65f0f7b61effec0bbe9fdf9328b4ab10840a733 not found: ID does not exist" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.113447 4972 scope.go:117] "RemoveContainer" containerID="9fed71f178e32d22816a001ef9d865e0d3558da34043e31a750000792d4c2348" Nov 21 10:06:02 crc kubenswrapper[4972]: E1121 10:06:02.114000 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fed71f178e32d22816a001ef9d865e0d3558da34043e31a750000792d4c2348\": container with ID starting with 9fed71f178e32d22816a001ef9d865e0d3558da34043e31a750000792d4c2348 not found: ID does not exist" containerID="9fed71f178e32d22816a001ef9d865e0d3558da34043e31a750000792d4c2348" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 
10:06:02.114031 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fed71f178e32d22816a001ef9d865e0d3558da34043e31a750000792d4c2348"} err="failed to get container status \"9fed71f178e32d22816a001ef9d865e0d3558da34043e31a750000792d4c2348\": rpc error: code = NotFound desc = could not find container \"9fed71f178e32d22816a001ef9d865e0d3558da34043e31a750000792d4c2348\": container with ID starting with 9fed71f178e32d22816a001ef9d865e0d3558da34043e31a750000792d4c2348 not found: ID does not exist" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.114049 4972 scope.go:117] "RemoveContainer" containerID="3671fcbe73e0c17924d8a3a7b65f0f7b61effec0bbe9fdf9328b4ab10840a733" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.114404 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3671fcbe73e0c17924d8a3a7b65f0f7b61effec0bbe9fdf9328b4ab10840a733"} err="failed to get container status \"3671fcbe73e0c17924d8a3a7b65f0f7b61effec0bbe9fdf9328b4ab10840a733\": rpc error: code = NotFound desc = could not find container \"3671fcbe73e0c17924d8a3a7b65f0f7b61effec0bbe9fdf9328b4ab10840a733\": container with ID starting with 3671fcbe73e0c17924d8a3a7b65f0f7b61effec0bbe9fdf9328b4ab10840a733 not found: ID does not exist" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.114468 4972 scope.go:117] "RemoveContainer" containerID="9fed71f178e32d22816a001ef9d865e0d3558da34043e31a750000792d4c2348" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.114754 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fed71f178e32d22816a001ef9d865e0d3558da34043e31a750000792d4c2348"} err="failed to get container status \"9fed71f178e32d22816a001ef9d865e0d3558da34043e31a750000792d4c2348\": rpc error: code = NotFound desc = could not find container \"9fed71f178e32d22816a001ef9d865e0d3558da34043e31a750000792d4c2348\": container with ID starting with 9fed71f178e32d22816a001ef9d865e0d3558da34043e31a750000792d4c2348 not found: ID does not exist" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.125615 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:06:02 crc kubenswrapper[4972]: E1121 10:06:02.126119 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f9ccdbf-9738-438a-92bd-f2bc90106e17" containerName="nova-metadata-log" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.126136 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f9ccdbf-9738-438a-92bd-f2bc90106e17" containerName="nova-metadata-log" Nov 21 10:06:02 crc kubenswrapper[4972]: E1121 10:06:02.126150 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f9ccdbf-9738-438a-92bd-f2bc90106e17" containerName="nova-metadata-metadata" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.126157 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f9ccdbf-9738-438a-92bd-f2bc90106e17" containerName="nova-metadata-metadata" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.126333 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f9ccdbf-9738-438a-92bd-f2bc90106e17" containerName="nova-metadata-log" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.126353 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f9ccdbf-9738-438a-92bd-f2bc90106e17" containerName="nova-metadata-metadata" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.127298 4972 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.129510 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.134409 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.142969 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.229722 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/462aefa3-d440-4f2d-8f82-d3b6329aca63-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"462aefa3-d440-4f2d-8f82-d3b6329aca63\") " pod="openstack/nova-metadata-0" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.230161 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsz24\" (UniqueName: \"kubernetes.io/projected/462aefa3-d440-4f2d-8f82-d3b6329aca63-kube-api-access-zsz24\") pod \"nova-metadata-0\" (UID: \"462aefa3-d440-4f2d-8f82-d3b6329aca63\") " pod="openstack/nova-metadata-0" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.230274 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462aefa3-d440-4f2d-8f82-d3b6329aca63-config-data\") pod \"nova-metadata-0\" (UID: \"462aefa3-d440-4f2d-8f82-d3b6329aca63\") " pod="openstack/nova-metadata-0" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.230418 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462aefa3-d440-4f2d-8f82-d3b6329aca63-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"462aefa3-d440-4f2d-8f82-d3b6329aca63\") " pod="openstack/nova-metadata-0" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.230506 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/462aefa3-d440-4f2d-8f82-d3b6329aca63-logs\") pod \"nova-metadata-0\" (UID: \"462aefa3-d440-4f2d-8f82-d3b6329aca63\") " pod="openstack/nova-metadata-0" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.332301 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462aefa3-d440-4f2d-8f82-d3b6329aca63-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"462aefa3-d440-4f2d-8f82-d3b6329aca63\") " pod="openstack/nova-metadata-0" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.332353 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/462aefa3-d440-4f2d-8f82-d3b6329aca63-logs\") pod \"nova-metadata-0\" (UID: \"462aefa3-d440-4f2d-8f82-d3b6329aca63\") " pod="openstack/nova-metadata-0" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.332426 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/462aefa3-d440-4f2d-8f82-d3b6329aca63-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"462aefa3-d440-4f2d-8f82-d3b6329aca63\") 
" pod="openstack/nova-metadata-0" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.332458 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsz24\" (UniqueName: \"kubernetes.io/projected/462aefa3-d440-4f2d-8f82-d3b6329aca63-kube-api-access-zsz24\") pod \"nova-metadata-0\" (UID: \"462aefa3-d440-4f2d-8f82-d3b6329aca63\") " pod="openstack/nova-metadata-0" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.332524 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462aefa3-d440-4f2d-8f82-d3b6329aca63-config-data\") pod \"nova-metadata-0\" (UID: \"462aefa3-d440-4f2d-8f82-d3b6329aca63\") " pod="openstack/nova-metadata-0" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.333988 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/462aefa3-d440-4f2d-8f82-d3b6329aca63-logs\") pod \"nova-metadata-0\" (UID: \"462aefa3-d440-4f2d-8f82-d3b6329aca63\") " pod="openstack/nova-metadata-0" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.336935 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462aefa3-d440-4f2d-8f82-d3b6329aca63-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"462aefa3-d440-4f2d-8f82-d3b6329aca63\") " pod="openstack/nova-metadata-0" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.337149 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/462aefa3-d440-4f2d-8f82-d3b6329aca63-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"462aefa3-d440-4f2d-8f82-d3b6329aca63\") " pod="openstack/nova-metadata-0" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.337908 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462aefa3-d440-4f2d-8f82-d3b6329aca63-config-data\") pod \"nova-metadata-0\" (UID: \"462aefa3-d440-4f2d-8f82-d3b6329aca63\") " pod="openstack/nova-metadata-0" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.354334 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsz24\" (UniqueName: \"kubernetes.io/projected/462aefa3-d440-4f2d-8f82-d3b6329aca63-kube-api-access-zsz24\") pod \"nova-metadata-0\" (UID: \"462aefa3-d440-4f2d-8f82-d3b6329aca63\") " pod="openstack/nova-metadata-0" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.457718 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 10:06:02 crc kubenswrapper[4972]: I1121 10:06:02.938641 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:06:02 crc kubenswrapper[4972]: W1121 10:06:02.942901 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod462aefa3_d440_4f2d_8f82_d3b6329aca63.slice/crio-2e9f0aa766b6677ba11744a11b1fe688a68079bb7579abe7acee0469b19b9a02 WatchSource:0}: Error finding container 2e9f0aa766b6677ba11744a11b1fe688a68079bb7579abe7acee0469b19b9a02: Status 404 returned error can't find the container with id 2e9f0aa766b6677ba11744a11b1fe688a68079bb7579abe7acee0469b19b9a02 Nov 21 10:06:03 crc kubenswrapper[4972]: I1121 10:06:03.046771 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7f18e62-c72d-4a5c-8738-975c81f1d724","Type":"ContainerStarted","Data":"0e258527fd35117af22410e22340c7e8ea64053488f1523721ac50d5007a8d2a"} Nov 21 10:06:03 crc kubenswrapper[4972]: I1121 10:06:03.046889 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 10:06:03 crc kubenswrapper[4972]: I1121 10:06:03.048786 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"462aefa3-d440-4f2d-8f82-d3b6329aca63","Type":"ContainerStarted","Data":"2e9f0aa766b6677ba11744a11b1fe688a68079bb7579abe7acee0469b19b9a02"} Nov 21 10:06:03 crc kubenswrapper[4972]: I1121 10:06:03.070930 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.295378164 podStartE2EDuration="8.070911469s" podCreationTimestamp="2025-11-21 10:05:55 +0000 UTC" firstStartedPulling="2025-11-21 10:05:56.840553616 +0000 UTC m=+1501.949696114" lastFinishedPulling="2025-11-21 10:06:02.616086921 +0000 UTC m=+1507.725229419" observedRunningTime="2025-11-21 10:06:03.068908356 +0000 UTC m=+1508.178050874" watchObservedRunningTime="2025-11-21 10:06:03.070911469 +0000 UTC m=+1508.180053967" Nov 21 10:06:03 crc kubenswrapper[4972]: I1121 10:06:03.769178 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f9ccdbf-9738-438a-92bd-f2bc90106e17" path="/var/lib/kubelet/pods/2f9ccdbf-9738-438a-92bd-f2bc90106e17/volumes" Nov 21 10:06:04 crc kubenswrapper[4972]: I1121 10:06:04.066267 4972 generic.go:334] "Generic (PLEG): container finished" podID="84200540-581b-4ed8-b86e-0e744be73aba" containerID="dec471ad525a075f9c99d2422dca58d35515dfaf88d497c02e47ce89e9a10e48" exitCode=0 Nov 21 10:06:04 crc kubenswrapper[4972]: I1121 10:06:04.066332 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mf7mv" event={"ID":"84200540-581b-4ed8-b86e-0e744be73aba","Type":"ContainerDied","Data":"dec471ad525a075f9c99d2422dca58d35515dfaf88d497c02e47ce89e9a10e48"} Nov 21 10:06:04 crc kubenswrapper[4972]: I1121 10:06:04.070291 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"462aefa3-d440-4f2d-8f82-d3b6329aca63","Type":"ContainerStarted","Data":"902769e3c9ba3b0d970a8393f9ae7d9e13b87472a8028b5032a031f84e3bf626"} Nov 21 10:06:04 crc kubenswrapper[4972]: I1121 10:06:04.070328 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"462aefa3-d440-4f2d-8f82-d3b6329aca63","Type":"ContainerStarted","Data":"9e25607ece00095bb4ebfc955ac0d7c486bab30cdfc9af5cf223f5028dee1222"} Nov 21 
10:06:04 crc kubenswrapper[4972]: I1121 10:06:04.101442 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.101423934 podStartE2EDuration="2.101423934s" podCreationTimestamp="2025-11-21 10:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:06:04.094973002 +0000 UTC m=+1509.204115510" watchObservedRunningTime="2025-11-21 10:06:04.101423934 +0000 UTC m=+1509.210566432" Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.079301 4972 generic.go:334] "Generic (PLEG): container finished" podID="12c0ee36-eaf7-4101-80d5-6dfca43ebde7" containerID="815f529eea17e9f7242ec1816284b3062a0b5d36440a8c09059c8536e6dd206a" exitCode=0 Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.079474 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-njh52" event={"ID":"12c0ee36-eaf7-4101-80d5-6dfca43ebde7","Type":"ContainerDied","Data":"815f529eea17e9f7242ec1816284b3062a0b5d36440a8c09059c8536e6dd206a"} Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.315076 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.315117 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.480236 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mf7mv" Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.516269 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.522519 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.586190 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7445585cd9-xnz5q"] Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.586435 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" podUID="42fc5873-eac2-406f-b1a0-0cc24e3a0a4d" containerName="dnsmasq-dns" containerID="cri-o://cc5149ceea467def3db6c97647fb317342320fd556fac1c46df2f3900d6be325" gracePeriod=10 Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.595502 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84200540-581b-4ed8-b86e-0e744be73aba-config-data\") pod \"84200540-581b-4ed8-b86e-0e744be73aba\" (UID: \"84200540-581b-4ed8-b86e-0e744be73aba\") " Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.595642 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84200540-581b-4ed8-b86e-0e744be73aba-combined-ca-bundle\") pod \"84200540-581b-4ed8-b86e-0e744be73aba\" (UID: \"84200540-581b-4ed8-b86e-0e744be73aba\") " Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.595758 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmbtq\" (UniqueName: \"kubernetes.io/projected/84200540-581b-4ed8-b86e-0e744be73aba-kube-api-access-cmbtq\") pod \"84200540-581b-4ed8-b86e-0e744be73aba\" (UID: 
\"84200540-581b-4ed8-b86e-0e744be73aba\") " Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.595798 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84200540-581b-4ed8-b86e-0e744be73aba-scripts\") pod \"84200540-581b-4ed8-b86e-0e744be73aba\" (UID: \"84200540-581b-4ed8-b86e-0e744be73aba\") " Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.603822 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.605153 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84200540-581b-4ed8-b86e-0e744be73aba-kube-api-access-cmbtq" (OuterVolumeSpecName: "kube-api-access-cmbtq") pod "84200540-581b-4ed8-b86e-0e744be73aba" (UID: "84200540-581b-4ed8-b86e-0e744be73aba"). InnerVolumeSpecName "kube-api-access-cmbtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.609391 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84200540-581b-4ed8-b86e-0e744be73aba-scripts" (OuterVolumeSpecName: "scripts") pod "84200540-581b-4ed8-b86e-0e744be73aba" (UID: "84200540-581b-4ed8-b86e-0e744be73aba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.682022 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84200540-581b-4ed8-b86e-0e744be73aba-config-data" (OuterVolumeSpecName: "config-data") pod "84200540-581b-4ed8-b86e-0e744be73aba" (UID: "84200540-581b-4ed8-b86e-0e744be73aba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.682182 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84200540-581b-4ed8-b86e-0e744be73aba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84200540-581b-4ed8-b86e-0e744be73aba" (UID: "84200540-581b-4ed8-b86e-0e744be73aba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.703185 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84200540-581b-4ed8-b86e-0e744be73aba-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.703222 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84200540-581b-4ed8-b86e-0e744be73aba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.703233 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmbtq\" (UniqueName: \"kubernetes.io/projected/84200540-581b-4ed8-b86e-0e744be73aba-kube-api-access-cmbtq\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:05 crc kubenswrapper[4972]: I1121 10:06:05.703241 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84200540-581b-4ed8-b86e-0e744be73aba-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.049682 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.087348 4972 generic.go:334] "Generic (PLEG): container finished" podID="42fc5873-eac2-406f-b1a0-0cc24e3a0a4d" containerID="cc5149ceea467def3db6c97647fb317342320fd556fac1c46df2f3900d6be325" exitCode=0 Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.087402 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" event={"ID":"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d","Type":"ContainerDied","Data":"cc5149ceea467def3db6c97647fb317342320fd556fac1c46df2f3900d6be325"} Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.087425 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" event={"ID":"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d","Type":"ContainerDied","Data":"cf264e1e86e03cbcbfbd14a27614dab0c99711155388af5b1e96725188c34847"} Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.087443 4972 scope.go:117] "RemoveContainer" containerID="cc5149ceea467def3db6c97647fb317342320fd556fac1c46df2f3900d6be325" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.087555 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7445585cd9-xnz5q" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.094809 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mf7mv" event={"ID":"84200540-581b-4ed8-b86e-0e744be73aba","Type":"ContainerDied","Data":"c623790f57c42f780d413d4c635ce354caa31064f0ae0da332b78edb8c7c0450"} Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.094857 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c623790f57c42f780d413d4c635ce354caa31064f0ae0da332b78edb8c7c0450" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.094991 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mf7mv" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.156088 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.167207 4972 scope.go:117] "RemoveContainer" containerID="9d3710f07760bd2cee3b96643dd67f5dbb849b665129b2c9e4bc027efb560d5e" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.202031 4972 scope.go:117] "RemoveContainer" containerID="cc5149ceea467def3db6c97647fb317342320fd556fac1c46df2f3900d6be325" Nov 21 10:06:06 crc kubenswrapper[4972]: E1121 10:06:06.203185 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc5149ceea467def3db6c97647fb317342320fd556fac1c46df2f3900d6be325\": container with ID starting with cc5149ceea467def3db6c97647fb317342320fd556fac1c46df2f3900d6be325 not found: ID does not exist" containerID="cc5149ceea467def3db6c97647fb317342320fd556fac1c46df2f3900d6be325" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.203230 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc5149ceea467def3db6c97647fb317342320fd556fac1c46df2f3900d6be325"} err="failed to get container status \"cc5149ceea467def3db6c97647fb317342320fd556fac1c46df2f3900d6be325\": rpc error: code = NotFound desc = could not find container \"cc5149ceea467def3db6c97647fb317342320fd556fac1c46df2f3900d6be325\": container with ID starting with cc5149ceea467def3db6c97647fb317342320fd556fac1c46df2f3900d6be325 not found: ID does not exist" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.203256 4972 scope.go:117] "RemoveContainer" containerID="9d3710f07760bd2cee3b96643dd67f5dbb849b665129b2c9e4bc027efb560d5e" Nov 21 10:06:06 crc kubenswrapper[4972]: E1121 10:06:06.209215 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d3710f07760bd2cee3b96643dd67f5dbb849b665129b2c9e4bc027efb560d5e\": container with ID starting with 9d3710f07760bd2cee3b96643dd67f5dbb849b665129b2c9e4bc027efb560d5e not found: ID does not exist" containerID="9d3710f07760bd2cee3b96643dd67f5dbb849b665129b2c9e4bc027efb560d5e" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.209486 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d3710f07760bd2cee3b96643dd67f5dbb849b665129b2c9e4bc027efb560d5e"} err="failed to get container status \"9d3710f07760bd2cee3b96643dd67f5dbb849b665129b2c9e4bc027efb560d5e\": rpc error: code = NotFound desc = could not find container \"9d3710f07760bd2cee3b96643dd67f5dbb849b665129b2c9e4bc027efb560d5e\": container with ID starting with 9d3710f07760bd2cee3b96643dd67f5dbb849b665129b2c9e4bc027efb560d5e not found: ID does not exist" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.214261 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-dns-swift-storage-0\") pod \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.214306 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-config\") pod \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\" (UID: 
\"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.214390 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-ovsdbserver-nb\") pod \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.214427 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-ovsdbserver-sb\") pod \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.214506 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-dns-svc\") pod \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.214539 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pj8d\" (UniqueName: \"kubernetes.io/projected/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-kube-api-access-9pj8d\") pod \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\" (UID: \"42fc5873-eac2-406f-b1a0-0cc24e3a0a4d\") " Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.224251 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-kube-api-access-9pj8d" (OuterVolumeSpecName: "kube-api-access-9pj8d") pod "42fc5873-eac2-406f-b1a0-0cc24e3a0a4d" (UID: "42fc5873-eac2-406f-b1a0-0cc24e3a0a4d"). InnerVolumeSpecName "kube-api-access-9pj8d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.229956 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.230176 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e2d2bd95-7a37-49eb-8e84-6880ba5435dd" containerName="nova-api-log" containerID="cri-o://070e64229e7bd6dfdcbc0014e64323fe83f14bb9648e54f891e4db1708c8b750" gracePeriod=30 Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.230545 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e2d2bd95-7a37-49eb-8e84-6880ba5435dd" containerName="nova-api-api" containerID="cri-o://558c53e3ae81e6d24c72f7d0c39c571ce4638e181da0f7eb83c40d842ce2b0fc" gracePeriod=30 Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.238764 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e2d2bd95-7a37-49eb-8e84-6880ba5435dd" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": EOF" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.238779 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e2d2bd95-7a37-49eb-8e84-6880ba5435dd" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": EOF" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.249503 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.251057 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="462aefa3-d440-4f2d-8f82-d3b6329aca63" containerName="nova-metadata-log" containerID="cri-o://9e25607ece00095bb4ebfc955ac0d7c486bab30cdfc9af5cf223f5028dee1222" gracePeriod=30 Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.251594 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="462aefa3-d440-4f2d-8f82-d3b6329aca63" containerName="nova-metadata-metadata" containerID="cri-o://902769e3c9ba3b0d970a8393f9ae7d9e13b87472a8028b5032a031f84e3bf626" gracePeriod=30 Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.292992 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "42fc5873-eac2-406f-b1a0-0cc24e3a0a4d" (UID: "42fc5873-eac2-406f-b1a0-0cc24e3a0a4d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.312352 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "42fc5873-eac2-406f-b1a0-0cc24e3a0a4d" (UID: "42fc5873-eac2-406f-b1a0-0cc24e3a0a4d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.316767 4972 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.316801 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.316811 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pj8d\" (UniqueName: \"kubernetes.io/projected/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-kube-api-access-9pj8d\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.325345 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "42fc5873-eac2-406f-b1a0-0cc24e3a0a4d" (UID: "42fc5873-eac2-406f-b1a0-0cc24e3a0a4d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.329480 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-config" (OuterVolumeSpecName: "config") pod "42fc5873-eac2-406f-b1a0-0cc24e3a0a4d" (UID: "42fc5873-eac2-406f-b1a0-0cc24e3a0a4d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.334233 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "42fc5873-eac2-406f-b1a0-0cc24e3a0a4d" (UID: "42fc5873-eac2-406f-b1a0-0cc24e3a0a4d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.423041 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.423087 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.423099 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.604650 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-njh52" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.609698 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7445585cd9-xnz5q"] Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.618316 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7445585cd9-xnz5q"] Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.726263 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.727416 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-scripts\") pod \"12c0ee36-eaf7-4101-80d5-6dfca43ebde7\" (UID: \"12c0ee36-eaf7-4101-80d5-6dfca43ebde7\") " Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.727563 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-config-data\") pod \"12c0ee36-eaf7-4101-80d5-6dfca43ebde7\" (UID: \"12c0ee36-eaf7-4101-80d5-6dfca43ebde7\") " Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.727641 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvd2r\" (UniqueName: \"kubernetes.io/projected/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-kube-api-access-nvd2r\") pod \"12c0ee36-eaf7-4101-80d5-6dfca43ebde7\" (UID: \"12c0ee36-eaf7-4101-80d5-6dfca43ebde7\") " Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.735981 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-combined-ca-bundle\") pod \"12c0ee36-eaf7-4101-80d5-6dfca43ebde7\" (UID: \"12c0ee36-eaf7-4101-80d5-6dfca43ebde7\") " Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.740676 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-kube-api-access-nvd2r" (OuterVolumeSpecName: "kube-api-access-nvd2r") pod "12c0ee36-eaf7-4101-80d5-6dfca43ebde7" (UID: "12c0ee36-eaf7-4101-80d5-6dfca43ebde7"). InnerVolumeSpecName "kube-api-access-nvd2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.756562 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-scripts" (OuterVolumeSpecName: "scripts") pod "12c0ee36-eaf7-4101-80d5-6dfca43ebde7" (UID: "12c0ee36-eaf7-4101-80d5-6dfca43ebde7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.779505 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-config-data" (OuterVolumeSpecName: "config-data") pod "12c0ee36-eaf7-4101-80d5-6dfca43ebde7" (UID: "12c0ee36-eaf7-4101-80d5-6dfca43ebde7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.818868 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12c0ee36-eaf7-4101-80d5-6dfca43ebde7" (UID: "12c0ee36-eaf7-4101-80d5-6dfca43ebde7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.839681 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.839748 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.839763 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.839802 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvd2r\" (UniqueName: \"kubernetes.io/projected/12c0ee36-eaf7-4101-80d5-6dfca43ebde7-kube-api-access-nvd2r\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.857075 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.941501 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/462aefa3-d440-4f2d-8f82-d3b6329aca63-logs\") pod \"462aefa3-d440-4f2d-8f82-d3b6329aca63\" (UID: \"462aefa3-d440-4f2d-8f82-d3b6329aca63\") " Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.941576 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsz24\" (UniqueName: \"kubernetes.io/projected/462aefa3-d440-4f2d-8f82-d3b6329aca63-kube-api-access-zsz24\") pod \"462aefa3-d440-4f2d-8f82-d3b6329aca63\" (UID: \"462aefa3-d440-4f2d-8f82-d3b6329aca63\") " Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.941660 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462aefa3-d440-4f2d-8f82-d3b6329aca63-config-data\") pod \"462aefa3-d440-4f2d-8f82-d3b6329aca63\" (UID: \"462aefa3-d440-4f2d-8f82-d3b6329aca63\") " Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.941727 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462aefa3-d440-4f2d-8f82-d3b6329aca63-combined-ca-bundle\") pod \"462aefa3-d440-4f2d-8f82-d3b6329aca63\" (UID: \"462aefa3-d440-4f2d-8f82-d3b6329aca63\") " Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.941759 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/462aefa3-d440-4f2d-8f82-d3b6329aca63-nova-metadata-tls-certs\") pod \"462aefa3-d440-4f2d-8f82-d3b6329aca63\" (UID: \"462aefa3-d440-4f2d-8f82-d3b6329aca63\") " Nov 21 10:06:06 crc 
kubenswrapper[4972]: I1121 10:06:06.941781 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/462aefa3-d440-4f2d-8f82-d3b6329aca63-logs" (OuterVolumeSpecName: "logs") pod "462aefa3-d440-4f2d-8f82-d3b6329aca63" (UID: "462aefa3-d440-4f2d-8f82-d3b6329aca63"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.942278 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/462aefa3-d440-4f2d-8f82-d3b6329aca63-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.944810 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/462aefa3-d440-4f2d-8f82-d3b6329aca63-kube-api-access-zsz24" (OuterVolumeSpecName: "kube-api-access-zsz24") pod "462aefa3-d440-4f2d-8f82-d3b6329aca63" (UID: "462aefa3-d440-4f2d-8f82-d3b6329aca63"). InnerVolumeSpecName "kube-api-access-zsz24". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.970473 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462aefa3-d440-4f2d-8f82-d3b6329aca63-config-data" (OuterVolumeSpecName: "config-data") pod "462aefa3-d440-4f2d-8f82-d3b6329aca63" (UID: "462aefa3-d440-4f2d-8f82-d3b6329aca63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.978343 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462aefa3-d440-4f2d-8f82-d3b6329aca63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "462aefa3-d440-4f2d-8f82-d3b6329aca63" (UID: "462aefa3-d440-4f2d-8f82-d3b6329aca63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:06 crc kubenswrapper[4972]: I1121 10:06:06.997482 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462aefa3-d440-4f2d-8f82-d3b6329aca63-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "462aefa3-d440-4f2d-8f82-d3b6329aca63" (UID: "462aefa3-d440-4f2d-8f82-d3b6329aca63"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.044262 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsz24\" (UniqueName: \"kubernetes.io/projected/462aefa3-d440-4f2d-8f82-d3b6329aca63-kube-api-access-zsz24\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.044548 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462aefa3-d440-4f2d-8f82-d3b6329aca63-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.044629 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462aefa3-d440-4f2d-8f82-d3b6329aca63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.044712 4972 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/462aefa3-d440-4f2d-8f82-d3b6329aca63-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.105069 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-njh52" event={"ID":"12c0ee36-eaf7-4101-80d5-6dfca43ebde7","Type":"ContainerDied","Data":"a39fd5dbce115020aaeca30b4903db910d3f1128ac6e6f776a7d1359cb206889"} Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.105119 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a39fd5dbce115020aaeca30b4903db910d3f1128ac6e6f776a7d1359cb206889" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.106594 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-njh52" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.114983 4972 generic.go:334] "Generic (PLEG): container finished" podID="462aefa3-d440-4f2d-8f82-d3b6329aca63" containerID="902769e3c9ba3b0d970a8393f9ae7d9e13b87472a8028b5032a031f84e3bf626" exitCode=0 Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.115011 4972 generic.go:334] "Generic (PLEG): container finished" podID="462aefa3-d440-4f2d-8f82-d3b6329aca63" containerID="9e25607ece00095bb4ebfc955ac0d7c486bab30cdfc9af5cf223f5028dee1222" exitCode=143 Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.115043 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"462aefa3-d440-4f2d-8f82-d3b6329aca63","Type":"ContainerDied","Data":"902769e3c9ba3b0d970a8393f9ae7d9e13b87472a8028b5032a031f84e3bf626"} Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.115063 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"462aefa3-d440-4f2d-8f82-d3b6329aca63","Type":"ContainerDied","Data":"9e25607ece00095bb4ebfc955ac0d7c486bab30cdfc9af5cf223f5028dee1222"} Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.115073 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"462aefa3-d440-4f2d-8f82-d3b6329aca63","Type":"ContainerDied","Data":"2e9f0aa766b6677ba11744a11b1fe688a68079bb7579abe7acee0469b19b9a02"} Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.115088 4972 scope.go:117] "RemoveContainer" containerID="902769e3c9ba3b0d970a8393f9ae7d9e13b87472a8028b5032a031f84e3bf626" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.115187 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.118793 4972 generic.go:334] "Generic (PLEG): container finished" podID="e2d2bd95-7a37-49eb-8e84-6880ba5435dd" containerID="070e64229e7bd6dfdcbc0014e64323fe83f14bb9648e54f891e4db1708c8b750" exitCode=143 Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.119259 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e2d2bd95-7a37-49eb-8e84-6880ba5435dd","Type":"ContainerDied","Data":"070e64229e7bd6dfdcbc0014e64323fe83f14bb9648e54f891e4db1708c8b750"} Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.140767 4972 scope.go:117] "RemoveContainer" containerID="9e25607ece00095bb4ebfc955ac0d7c486bab30cdfc9af5cf223f5028dee1222" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.171298 4972 scope.go:117] "RemoveContainer" containerID="902769e3c9ba3b0d970a8393f9ae7d9e13b87472a8028b5032a031f84e3bf626" Nov 21 10:06:07 crc kubenswrapper[4972]: E1121 10:06:07.171739 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"902769e3c9ba3b0d970a8393f9ae7d9e13b87472a8028b5032a031f84e3bf626\": container with ID starting with 902769e3c9ba3b0d970a8393f9ae7d9e13b87472a8028b5032a031f84e3bf626 not found: ID does not exist" containerID="902769e3c9ba3b0d970a8393f9ae7d9e13b87472a8028b5032a031f84e3bf626" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.171773 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"902769e3c9ba3b0d970a8393f9ae7d9e13b87472a8028b5032a031f84e3bf626"} err="failed to get container status \"902769e3c9ba3b0d970a8393f9ae7d9e13b87472a8028b5032a031f84e3bf626\": rpc error: code = NotFound desc = could not find container \"902769e3c9ba3b0d970a8393f9ae7d9e13b87472a8028b5032a031f84e3bf626\": container with ID starting with 902769e3c9ba3b0d970a8393f9ae7d9e13b87472a8028b5032a031f84e3bf626 not found: ID does not exist" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.171794 4972 scope.go:117] "RemoveContainer" containerID="9e25607ece00095bb4ebfc955ac0d7c486bab30cdfc9af5cf223f5028dee1222" Nov 21 10:06:07 crc kubenswrapper[4972]: E1121 10:06:07.171981 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e25607ece00095bb4ebfc955ac0d7c486bab30cdfc9af5cf223f5028dee1222\": container with ID starting with 9e25607ece00095bb4ebfc955ac0d7c486bab30cdfc9af5cf223f5028dee1222 not found: ID does not exist" containerID="9e25607ece00095bb4ebfc955ac0d7c486bab30cdfc9af5cf223f5028dee1222" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.172008 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e25607ece00095bb4ebfc955ac0d7c486bab30cdfc9af5cf223f5028dee1222"} err="failed to get container status \"9e25607ece00095bb4ebfc955ac0d7c486bab30cdfc9af5cf223f5028dee1222\": rpc error: code = NotFound desc = could not find container \"9e25607ece00095bb4ebfc955ac0d7c486bab30cdfc9af5cf223f5028dee1222\": container with ID starting with 9e25607ece00095bb4ebfc955ac0d7c486bab30cdfc9af5cf223f5028dee1222 not found: ID does not exist" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.172022 4972 scope.go:117] "RemoveContainer" containerID="902769e3c9ba3b0d970a8393f9ae7d9e13b87472a8028b5032a031f84e3bf626" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.172181 4972 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"902769e3c9ba3b0d970a8393f9ae7d9e13b87472a8028b5032a031f84e3bf626"} err="failed to get container status \"902769e3c9ba3b0d970a8393f9ae7d9e13b87472a8028b5032a031f84e3bf626\": rpc error: code = NotFound desc = could not find container \"902769e3c9ba3b0d970a8393f9ae7d9e13b87472a8028b5032a031f84e3bf626\": container with ID starting with 902769e3c9ba3b0d970a8393f9ae7d9e13b87472a8028b5032a031f84e3bf626 not found: ID does not exist" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.172199 4972 scope.go:117] "RemoveContainer" containerID="9e25607ece00095bb4ebfc955ac0d7c486bab30cdfc9af5cf223f5028dee1222" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.172380 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e25607ece00095bb4ebfc955ac0d7c486bab30cdfc9af5cf223f5028dee1222"} err="failed to get container status \"9e25607ece00095bb4ebfc955ac0d7c486bab30cdfc9af5cf223f5028dee1222\": rpc error: code = NotFound desc = could not find container \"9e25607ece00095bb4ebfc955ac0d7c486bab30cdfc9af5cf223f5028dee1222\": container with ID starting with 9e25607ece00095bb4ebfc955ac0d7c486bab30cdfc9af5cf223f5028dee1222 not found: ID does not exist" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.175534 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.200900 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.207214 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:06:07 crc kubenswrapper[4972]: E1121 10:06:07.207700 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84200540-581b-4ed8-b86e-0e744be73aba" containerName="nova-manage" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.207721 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="84200540-581b-4ed8-b86e-0e744be73aba" containerName="nova-manage" Nov 21 10:06:07 crc kubenswrapper[4972]: E1121 10:06:07.207740 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42fc5873-eac2-406f-b1a0-0cc24e3a0a4d" containerName="init" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.207747 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="42fc5873-eac2-406f-b1a0-0cc24e3a0a4d" containerName="init" Nov 21 10:06:07 crc kubenswrapper[4972]: E1121 10:06:07.207757 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12c0ee36-eaf7-4101-80d5-6dfca43ebde7" containerName="nova-cell1-conductor-db-sync" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.207763 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="12c0ee36-eaf7-4101-80d5-6dfca43ebde7" containerName="nova-cell1-conductor-db-sync" Nov 21 10:06:07 crc kubenswrapper[4972]: E1121 10:06:07.207779 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="462aefa3-d440-4f2d-8f82-d3b6329aca63" containerName="nova-metadata-metadata" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.207785 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="462aefa3-d440-4f2d-8f82-d3b6329aca63" containerName="nova-metadata-metadata" Nov 21 10:06:07 crc kubenswrapper[4972]: E1121 10:06:07.207798 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="462aefa3-d440-4f2d-8f82-d3b6329aca63" containerName="nova-metadata-log" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 
10:06:07.207806 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="462aefa3-d440-4f2d-8f82-d3b6329aca63" containerName="nova-metadata-log" Nov 21 10:06:07 crc kubenswrapper[4972]: E1121 10:06:07.207818 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42fc5873-eac2-406f-b1a0-0cc24e3a0a4d" containerName="dnsmasq-dns" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.207825 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="42fc5873-eac2-406f-b1a0-0cc24e3a0a4d" containerName="dnsmasq-dns" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.208017 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="462aefa3-d440-4f2d-8f82-d3b6329aca63" containerName="nova-metadata-log" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.208027 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="42fc5873-eac2-406f-b1a0-0cc24e3a0a4d" containerName="dnsmasq-dns" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.208042 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="12c0ee36-eaf7-4101-80d5-6dfca43ebde7" containerName="nova-cell1-conductor-db-sync" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.208055 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="84200540-581b-4ed8-b86e-0e744be73aba" containerName="nova-manage" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.208064 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="462aefa3-d440-4f2d-8f82-d3b6329aca63" containerName="nova-metadata-metadata" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.209293 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.211409 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.212782 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.223692 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.226085 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.228767 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.236491 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.260792 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.352014 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d70968-1cb9-42c5-9e6a-42be7447c211-logs\") pod \"nova-metadata-0\" (UID: \"25d70968-1cb9-42c5-9e6a-42be7447c211\") " pod="openstack/nova-metadata-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.352068 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d70968-1cb9-42c5-9e6a-42be7447c211-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"25d70968-1cb9-42c5-9e6a-42be7447c211\") " pod="openstack/nova-metadata-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.352115 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d70968-1cb9-42c5-9e6a-42be7447c211-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"25d70968-1cb9-42c5-9e6a-42be7447c211\") " pod="openstack/nova-metadata-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.352293 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8\") " pod="openstack/nova-cell1-conductor-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.352553 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8\") " pod="openstack/nova-cell1-conductor-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.352680 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncbmk\" (UniqueName: \"kubernetes.io/projected/9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8-kube-api-access-ncbmk\") pod \"nova-cell1-conductor-0\" (UID: \"9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8\") " pod="openstack/nova-cell1-conductor-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.352760 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d70968-1cb9-42c5-9e6a-42be7447c211-config-data\") pod \"nova-metadata-0\" (UID: \"25d70968-1cb9-42c5-9e6a-42be7447c211\") " pod="openstack/nova-metadata-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.352790 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st9j4\" (UniqueName: 
\"kubernetes.io/projected/25d70968-1cb9-42c5-9e6a-42be7447c211-kube-api-access-st9j4\") pod \"nova-metadata-0\" (UID: \"25d70968-1cb9-42c5-9e6a-42be7447c211\") " pod="openstack/nova-metadata-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.454050 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d70968-1cb9-42c5-9e6a-42be7447c211-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"25d70968-1cb9-42c5-9e6a-42be7447c211\") " pod="openstack/nova-metadata-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.454107 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8\") " pod="openstack/nova-cell1-conductor-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.454197 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8\") " pod="openstack/nova-cell1-conductor-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.454243 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncbmk\" (UniqueName: \"kubernetes.io/projected/9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8-kube-api-access-ncbmk\") pod \"nova-cell1-conductor-0\" (UID: \"9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8\") " pod="openstack/nova-cell1-conductor-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.454276 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d70968-1cb9-42c5-9e6a-42be7447c211-config-data\") pod \"nova-metadata-0\" (UID: \"25d70968-1cb9-42c5-9e6a-42be7447c211\") " pod="openstack/nova-metadata-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.454295 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st9j4\" (UniqueName: \"kubernetes.io/projected/25d70968-1cb9-42c5-9e6a-42be7447c211-kube-api-access-st9j4\") pod \"nova-metadata-0\" (UID: \"25d70968-1cb9-42c5-9e6a-42be7447c211\") " pod="openstack/nova-metadata-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.454324 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d70968-1cb9-42c5-9e6a-42be7447c211-logs\") pod \"nova-metadata-0\" (UID: \"25d70968-1cb9-42c5-9e6a-42be7447c211\") " pod="openstack/nova-metadata-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.454341 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d70968-1cb9-42c5-9e6a-42be7447c211-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"25d70968-1cb9-42c5-9e6a-42be7447c211\") " pod="openstack/nova-metadata-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.459324 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d70968-1cb9-42c5-9e6a-42be7447c211-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"25d70968-1cb9-42c5-9e6a-42be7447c211\") " pod="openstack/nova-metadata-0" Nov 21 10:06:07 crc 
kubenswrapper[4972]: I1121 10:06:07.459571 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d70968-1cb9-42c5-9e6a-42be7447c211-logs\") pod \"nova-metadata-0\" (UID: \"25d70968-1cb9-42c5-9e6a-42be7447c211\") " pod="openstack/nova-metadata-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.460398 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d70968-1cb9-42c5-9e6a-42be7447c211-config-data\") pod \"nova-metadata-0\" (UID: \"25d70968-1cb9-42c5-9e6a-42be7447c211\") " pod="openstack/nova-metadata-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.463293 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d70968-1cb9-42c5-9e6a-42be7447c211-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"25d70968-1cb9-42c5-9e6a-42be7447c211\") " pod="openstack/nova-metadata-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.470502 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8\") " pod="openstack/nova-cell1-conductor-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.471097 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8\") " pod="openstack/nova-cell1-conductor-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.477288 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st9j4\" (UniqueName: \"kubernetes.io/projected/25d70968-1cb9-42c5-9e6a-42be7447c211-kube-api-access-st9j4\") pod \"nova-metadata-0\" (UID: \"25d70968-1cb9-42c5-9e6a-42be7447c211\") " pod="openstack/nova-metadata-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.477544 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncbmk\" (UniqueName: \"kubernetes.io/projected/9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8-kube-api-access-ncbmk\") pod \"nova-cell1-conductor-0\" (UID: \"9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8\") " pod="openstack/nova-cell1-conductor-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.524432 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.545242 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.771305 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42fc5873-eac2-406f-b1a0-0cc24e3a0a4d" path="/var/lib/kubelet/pods/42fc5873-eac2-406f-b1a0-0cc24e3a0a4d/volumes" Nov 21 10:06:07 crc kubenswrapper[4972]: I1121 10:06:07.772311 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="462aefa3-d440-4f2d-8f82-d3b6329aca63" path="/var/lib/kubelet/pods/462aefa3-d440-4f2d-8f82-d3b6329aca63/volumes" Nov 21 10:06:08 crc kubenswrapper[4972]: I1121 10:06:08.061762 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:06:08 crc kubenswrapper[4972]: I1121 10:06:08.092662 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 10:06:08 crc kubenswrapper[4972]: W1121 10:06:08.098646 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c7e44b1_0938_480f_9ab1_6e7e16c6c0e8.slice/crio-1f27328f827320adc170a99d6d7a584065f7290ce0b4546472acd6161887137b WatchSource:0}: Error finding container 1f27328f827320adc170a99d6d7a584065f7290ce0b4546472acd6161887137b: Status 404 returned error can't find the container with id 1f27328f827320adc170a99d6d7a584065f7290ce0b4546472acd6161887137b Nov 21 10:06:08 crc kubenswrapper[4972]: I1121 10:06:08.129298 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"25d70968-1cb9-42c5-9e6a-42be7447c211","Type":"ContainerStarted","Data":"34c24e4e4f5bd88254afb328fa533ab8b1737264daaec9aa45847e2a1c782d9b"} Nov 21 10:06:08 crc kubenswrapper[4972]: I1121 10:06:08.130673 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8","Type":"ContainerStarted","Data":"1f27328f827320adc170a99d6d7a584065f7290ce0b4546472acd6161887137b"} Nov 21 10:06:08 crc kubenswrapper[4972]: I1121 10:06:08.131892 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="dfe358a0-767b-495f-ab34-5cc24717f485" containerName="nova-scheduler-scheduler" containerID="cri-o://c9fce9ad4441a22332867cad183a6e69953f8f3904137a0c672b952dc01950c3" gracePeriod=30 Nov 21 10:06:09 crc kubenswrapper[4972]: I1121 10:06:09.143108 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"25d70968-1cb9-42c5-9e6a-42be7447c211","Type":"ContainerStarted","Data":"7ea2abf0f7d755ab2861190cc97c1b1b24d73ce7eb2a6178eb1dde03a9649db4"} Nov 21 10:06:09 crc kubenswrapper[4972]: I1121 10:06:09.143476 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"25d70968-1cb9-42c5-9e6a-42be7447c211","Type":"ContainerStarted","Data":"39589530ed982f493eea7a56eb4fbabd714aa196e697f256a3c6c9384da05dbd"} Nov 21 10:06:09 crc kubenswrapper[4972]: I1121 10:06:09.144598 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8","Type":"ContainerStarted","Data":"2c8b3c0c3518327c81d51434735d0ba0511f266e78981d13077248c64dbb2a4c"} Nov 21 10:06:09 crc kubenswrapper[4972]: I1121 10:06:09.144705 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 21 10:06:09 crc kubenswrapper[4972]: I1121 10:06:09.179321 4972 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.179305845 podStartE2EDuration="2.179305845s" podCreationTimestamp="2025-11-21 10:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:06:09.160588385 +0000 UTC m=+1514.269730923" watchObservedRunningTime="2025-11-21 10:06:09.179305845 +0000 UTC m=+1514.288448343" Nov 21 10:06:09 crc kubenswrapper[4972]: I1121 10:06:09.185176 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.185167531 podStartE2EDuration="2.185167531s" podCreationTimestamp="2025-11-21 10:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:06:09.176807418 +0000 UTC m=+1514.285949926" watchObservedRunningTime="2025-11-21 10:06:09.185167531 +0000 UTC m=+1514.294310029" Nov 21 10:06:10 crc kubenswrapper[4972]: E1121 10:06:10.519104 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c9fce9ad4441a22332867cad183a6e69953f8f3904137a0c672b952dc01950c3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 10:06:10 crc kubenswrapper[4972]: E1121 10:06:10.521271 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c9fce9ad4441a22332867cad183a6e69953f8f3904137a0c672b952dc01950c3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 10:06:10 crc kubenswrapper[4972]: E1121 10:06:10.524414 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c9fce9ad4441a22332867cad183a6e69953f8f3904137a0c672b952dc01950c3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 10:06:10 crc kubenswrapper[4972]: E1121 10:06:10.524474 4972 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="dfe358a0-767b-495f-ab34-5cc24717f485" containerName="nova-scheduler-scheduler" Nov 21 10:06:11 crc kubenswrapper[4972]: I1121 10:06:11.813294 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 10:06:11 crc kubenswrapper[4972]: I1121 10:06:11.860904 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jwxr\" (UniqueName: \"kubernetes.io/projected/dfe358a0-767b-495f-ab34-5cc24717f485-kube-api-access-4jwxr\") pod \"dfe358a0-767b-495f-ab34-5cc24717f485\" (UID: \"dfe358a0-767b-495f-ab34-5cc24717f485\") " Nov 21 10:06:11 crc kubenswrapper[4972]: I1121 10:06:11.860977 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfe358a0-767b-495f-ab34-5cc24717f485-config-data\") pod \"dfe358a0-767b-495f-ab34-5cc24717f485\" (UID: \"dfe358a0-767b-495f-ab34-5cc24717f485\") " Nov 21 10:06:11 crc kubenswrapper[4972]: I1121 10:06:11.861178 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe358a0-767b-495f-ab34-5cc24717f485-combined-ca-bundle\") pod \"dfe358a0-767b-495f-ab34-5cc24717f485\" (UID: \"dfe358a0-767b-495f-ab34-5cc24717f485\") " Nov 21 10:06:11 crc kubenswrapper[4972]: I1121 10:06:11.868705 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfe358a0-767b-495f-ab34-5cc24717f485-kube-api-access-4jwxr" (OuterVolumeSpecName: "kube-api-access-4jwxr") pod "dfe358a0-767b-495f-ab34-5cc24717f485" (UID: "dfe358a0-767b-495f-ab34-5cc24717f485"). InnerVolumeSpecName "kube-api-access-4jwxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:06:11 crc kubenswrapper[4972]: I1121 10:06:11.904863 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfe358a0-767b-495f-ab34-5cc24717f485-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfe358a0-767b-495f-ab34-5cc24717f485" (UID: "dfe358a0-767b-495f-ab34-5cc24717f485"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:11 crc kubenswrapper[4972]: I1121 10:06:11.924460 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfe358a0-767b-495f-ab34-5cc24717f485-config-data" (OuterVolumeSpecName: "config-data") pod "dfe358a0-767b-495f-ab34-5cc24717f485" (UID: "dfe358a0-767b-495f-ab34-5cc24717f485"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:11 crc kubenswrapper[4972]: I1121 10:06:11.963581 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jwxr\" (UniqueName: \"kubernetes.io/projected/dfe358a0-767b-495f-ab34-5cc24717f485-kube-api-access-4jwxr\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:11 crc kubenswrapper[4972]: I1121 10:06:11.963616 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfe358a0-767b-495f-ab34-5cc24717f485-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:11 crc kubenswrapper[4972]: I1121 10:06:11.963626 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe358a0-767b-495f-ab34-5cc24717f485-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.004344 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.064850 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-logs\") pod \"e2d2bd95-7a37-49eb-8e84-6880ba5435dd\" (UID: \"e2d2bd95-7a37-49eb-8e84-6880ba5435dd\") " Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.065184 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-logs" (OuterVolumeSpecName: "logs") pod "e2d2bd95-7a37-49eb-8e84-6880ba5435dd" (UID: "e2d2bd95-7a37-49eb-8e84-6880ba5435dd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.065203 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-combined-ca-bundle\") pod \"e2d2bd95-7a37-49eb-8e84-6880ba5435dd\" (UID: \"e2d2bd95-7a37-49eb-8e84-6880ba5435dd\") " Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.065243 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsnl6\" (UniqueName: \"kubernetes.io/projected/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-kube-api-access-gsnl6\") pod \"e2d2bd95-7a37-49eb-8e84-6880ba5435dd\" (UID: \"e2d2bd95-7a37-49eb-8e84-6880ba5435dd\") " Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.065279 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-config-data\") pod \"e2d2bd95-7a37-49eb-8e84-6880ba5435dd\" (UID: \"e2d2bd95-7a37-49eb-8e84-6880ba5435dd\") " Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.066023 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.068382 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-kube-api-access-gsnl6" (OuterVolumeSpecName: "kube-api-access-gsnl6") pod "e2d2bd95-7a37-49eb-8e84-6880ba5435dd" (UID: "e2d2bd95-7a37-49eb-8e84-6880ba5435dd"). InnerVolumeSpecName "kube-api-access-gsnl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.094126 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-config-data" (OuterVolumeSpecName: "config-data") pod "e2d2bd95-7a37-49eb-8e84-6880ba5435dd" (UID: "e2d2bd95-7a37-49eb-8e84-6880ba5435dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.102136 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2d2bd95-7a37-49eb-8e84-6880ba5435dd" (UID: "e2d2bd95-7a37-49eb-8e84-6880ba5435dd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.168226 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.168589 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsnl6\" (UniqueName: \"kubernetes.io/projected/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-kube-api-access-gsnl6\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.168738 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d2bd95-7a37-49eb-8e84-6880ba5435dd-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.179957 4972 generic.go:334] "Generic (PLEG): container finished" podID="dfe358a0-767b-495f-ab34-5cc24717f485" containerID="c9fce9ad4441a22332867cad183a6e69953f8f3904137a0c672b952dc01950c3" exitCode=0 Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.180048 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.180056 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dfe358a0-767b-495f-ab34-5cc24717f485","Type":"ContainerDied","Data":"c9fce9ad4441a22332867cad183a6e69953f8f3904137a0c672b952dc01950c3"} Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.180162 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dfe358a0-767b-495f-ab34-5cc24717f485","Type":"ContainerDied","Data":"1e162527f118c219cc6c5d33341c369d2d7272bdd0b33fd69985d30b0f539c14"} Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.180204 4972 scope.go:117] "RemoveContainer" containerID="c9fce9ad4441a22332867cad183a6e69953f8f3904137a0c672b952dc01950c3" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.182423 4972 generic.go:334] "Generic (PLEG): container finished" podID="e2d2bd95-7a37-49eb-8e84-6880ba5435dd" containerID="558c53e3ae81e6d24c72f7d0c39c571ce4638e181da0f7eb83c40d842ce2b0fc" exitCode=0 Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.182471 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e2d2bd95-7a37-49eb-8e84-6880ba5435dd","Type":"ContainerDied","Data":"558c53e3ae81e6d24c72f7d0c39c571ce4638e181da0f7eb83c40d842ce2b0fc"} Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.182514 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e2d2bd95-7a37-49eb-8e84-6880ba5435dd","Type":"ContainerDied","Data":"7b0cc802e1e4fa638df8d6de5b3b0ea44b9de0a55de268e912b07a2c35e8e639"} Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.182698 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.226383 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.227144 4972 scope.go:117] "RemoveContainer" containerID="c9fce9ad4441a22332867cad183a6e69953f8f3904137a0c672b952dc01950c3" Nov 21 10:06:12 crc kubenswrapper[4972]: E1121 10:06:12.232161 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9fce9ad4441a22332867cad183a6e69953f8f3904137a0c672b952dc01950c3\": container with ID starting with c9fce9ad4441a22332867cad183a6e69953f8f3904137a0c672b952dc01950c3 not found: ID does not exist" containerID="c9fce9ad4441a22332867cad183a6e69953f8f3904137a0c672b952dc01950c3" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.232225 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9fce9ad4441a22332867cad183a6e69953f8f3904137a0c672b952dc01950c3"} err="failed to get container status \"c9fce9ad4441a22332867cad183a6e69953f8f3904137a0c672b952dc01950c3\": rpc error: code = NotFound desc = could not find container \"c9fce9ad4441a22332867cad183a6e69953f8f3904137a0c672b952dc01950c3\": container with ID starting with c9fce9ad4441a22332867cad183a6e69953f8f3904137a0c672b952dc01950c3 not found: ID does not exist" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.232255 4972 scope.go:117] "RemoveContainer" containerID="558c53e3ae81e6d24c72f7d0c39c571ce4638e181da0f7eb83c40d842ce2b0fc" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.241007 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.251311 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.264531 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.278960 4972 scope.go:117] "RemoveContainer" containerID="070e64229e7bd6dfdcbc0014e64323fe83f14bb9648e54f891e4db1708c8b750" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.307167 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 10:06:12 crc kubenswrapper[4972]: E1121 10:06:12.311899 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d2bd95-7a37-49eb-8e84-6880ba5435dd" containerName="nova-api-api" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.311929 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d2bd95-7a37-49eb-8e84-6880ba5435dd" containerName="nova-api-api" Nov 21 10:06:12 crc kubenswrapper[4972]: E1121 10:06:12.311941 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d2bd95-7a37-49eb-8e84-6880ba5435dd" containerName="nova-api-log" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.311947 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d2bd95-7a37-49eb-8e84-6880ba5435dd" containerName="nova-api-log" Nov 21 10:06:12 crc kubenswrapper[4972]: E1121 10:06:12.311968 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfe358a0-767b-495f-ab34-5cc24717f485" containerName="nova-scheduler-scheduler" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.311974 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe358a0-767b-495f-ab34-5cc24717f485" 
containerName="nova-scheduler-scheduler" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.312153 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d2bd95-7a37-49eb-8e84-6880ba5435dd" containerName="nova-api-log" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.312173 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfe358a0-767b-495f-ab34-5cc24717f485" containerName="nova-scheduler-scheduler" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.312190 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d2bd95-7a37-49eb-8e84-6880ba5435dd" containerName="nova-api-api" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.312851 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.317651 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.325219 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.341337 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.345760 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.348178 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.356224 4972 scope.go:117] "RemoveContainer" containerID="558c53e3ae81e6d24c72f7d0c39c571ce4638e181da0f7eb83c40d842ce2b0fc" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.356773 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 10:06:12 crc kubenswrapper[4972]: E1121 10:06:12.357093 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"558c53e3ae81e6d24c72f7d0c39c571ce4638e181da0f7eb83c40d842ce2b0fc\": container with ID starting with 558c53e3ae81e6d24c72f7d0c39c571ce4638e181da0f7eb83c40d842ce2b0fc not found: ID does not exist" containerID="558c53e3ae81e6d24c72f7d0c39c571ce4638e181da0f7eb83c40d842ce2b0fc" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.357129 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"558c53e3ae81e6d24c72f7d0c39c571ce4638e181da0f7eb83c40d842ce2b0fc"} err="failed to get container status \"558c53e3ae81e6d24c72f7d0c39c571ce4638e181da0f7eb83c40d842ce2b0fc\": rpc error: code = NotFound desc = could not find container \"558c53e3ae81e6d24c72f7d0c39c571ce4638e181da0f7eb83c40d842ce2b0fc\": container with ID starting with 558c53e3ae81e6d24c72f7d0c39c571ce4638e181da0f7eb83c40d842ce2b0fc not found: ID does not exist" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.357154 4972 scope.go:117] "RemoveContainer" containerID="070e64229e7bd6dfdcbc0014e64323fe83f14bb9648e54f891e4db1708c8b750" Nov 21 10:06:12 crc kubenswrapper[4972]: E1121 10:06:12.357769 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"070e64229e7bd6dfdcbc0014e64323fe83f14bb9648e54f891e4db1708c8b750\": container with ID starting with 070e64229e7bd6dfdcbc0014e64323fe83f14bb9648e54f891e4db1708c8b750 not found: ID 
does not exist" containerID="070e64229e7bd6dfdcbc0014e64323fe83f14bb9648e54f891e4db1708c8b750" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.357808 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"070e64229e7bd6dfdcbc0014e64323fe83f14bb9648e54f891e4db1708c8b750"} err="failed to get container status \"070e64229e7bd6dfdcbc0014e64323fe83f14bb9648e54f891e4db1708c8b750\": rpc error: code = NotFound desc = could not find container \"070e64229e7bd6dfdcbc0014e64323fe83f14bb9648e54f891e4db1708c8b750\": container with ID starting with 070e64229e7bd6dfdcbc0014e64323fe83f14bb9648e54f891e4db1708c8b750 not found: ID does not exist" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.376273 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kn72\" (UniqueName: \"kubernetes.io/projected/0c8f56de-a95c-4144-a6f4-472e1f4dd1fd-kube-api-access-5kn72\") pod \"nova-scheduler-0\" (UID: \"0c8f56de-a95c-4144-a6f4-472e1f4dd1fd\") " pod="openstack/nova-scheduler-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.376404 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c8f56de-a95c-4144-a6f4-472e1f4dd1fd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0c8f56de-a95c-4144-a6f4-472e1f4dd1fd\") " pod="openstack/nova-scheduler-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.376449 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c8f56de-a95c-4144-a6f4-472e1f4dd1fd-config-data\") pod \"nova-scheduler-0\" (UID: \"0c8f56de-a95c-4144-a6f4-472e1f4dd1fd\") " pod="openstack/nova-scheduler-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.481066 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kn72\" (UniqueName: \"kubernetes.io/projected/0c8f56de-a95c-4144-a6f4-472e1f4dd1fd-kube-api-access-5kn72\") pod \"nova-scheduler-0\" (UID: \"0c8f56de-a95c-4144-a6f4-472e1f4dd1fd\") " pod="openstack/nova-scheduler-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.481187 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b258c455-2d04-41a7-9522-653cdcf78f5c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b258c455-2d04-41a7-9522-653cdcf78f5c\") " pod="openstack/nova-api-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.481229 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c8f56de-a95c-4144-a6f4-472e1f4dd1fd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0c8f56de-a95c-4144-a6f4-472e1f4dd1fd\") " pod="openstack/nova-scheduler-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.481303 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c8f56de-a95c-4144-a6f4-472e1f4dd1fd-config-data\") pod \"nova-scheduler-0\" (UID: \"0c8f56de-a95c-4144-a6f4-472e1f4dd1fd\") " pod="openstack/nova-scheduler-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.481360 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b258c455-2d04-41a7-9522-653cdcf78f5c-config-data\") pod \"nova-api-0\" (UID: \"b258c455-2d04-41a7-9522-653cdcf78f5c\") " pod="openstack/nova-api-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.481487 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzcpb\" (UniqueName: \"kubernetes.io/projected/b258c455-2d04-41a7-9522-653cdcf78f5c-kube-api-access-tzcpb\") pod \"nova-api-0\" (UID: \"b258c455-2d04-41a7-9522-653cdcf78f5c\") " pod="openstack/nova-api-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.481536 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b258c455-2d04-41a7-9522-653cdcf78f5c-logs\") pod \"nova-api-0\" (UID: \"b258c455-2d04-41a7-9522-653cdcf78f5c\") " pod="openstack/nova-api-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.486314 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c8f56de-a95c-4144-a6f4-472e1f4dd1fd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0c8f56de-a95c-4144-a6f4-472e1f4dd1fd\") " pod="openstack/nova-scheduler-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.486904 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c8f56de-a95c-4144-a6f4-472e1f4dd1fd-config-data\") pod \"nova-scheduler-0\" (UID: \"0c8f56de-a95c-4144-a6f4-472e1f4dd1fd\") " pod="openstack/nova-scheduler-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.506476 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kn72\" (UniqueName: \"kubernetes.io/projected/0c8f56de-a95c-4144-a6f4-472e1f4dd1fd-kube-api-access-5kn72\") pod \"nova-scheduler-0\" (UID: \"0c8f56de-a95c-4144-a6f4-472e1f4dd1fd\") " pod="openstack/nova-scheduler-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.525137 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.525178 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.583528 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b258c455-2d04-41a7-9522-653cdcf78f5c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b258c455-2d04-41a7-9522-653cdcf78f5c\") " pod="openstack/nova-api-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.584268 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b258c455-2d04-41a7-9522-653cdcf78f5c-config-data\") pod \"nova-api-0\" (UID: \"b258c455-2d04-41a7-9522-653cdcf78f5c\") " pod="openstack/nova-api-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.585104 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzcpb\" (UniqueName: \"kubernetes.io/projected/b258c455-2d04-41a7-9522-653cdcf78f5c-kube-api-access-tzcpb\") pod \"nova-api-0\" (UID: \"b258c455-2d04-41a7-9522-653cdcf78f5c\") " pod="openstack/nova-api-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.585478 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b258c455-2d04-41a7-9522-653cdcf78f5c-logs\") pod \"nova-api-0\" (UID: \"b258c455-2d04-41a7-9522-653cdcf78f5c\") " pod="openstack/nova-api-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.585542 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b258c455-2d04-41a7-9522-653cdcf78f5c-logs\") pod \"nova-api-0\" (UID: \"b258c455-2d04-41a7-9522-653cdcf78f5c\") " pod="openstack/nova-api-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.586925 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b258c455-2d04-41a7-9522-653cdcf78f5c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b258c455-2d04-41a7-9522-653cdcf78f5c\") " pod="openstack/nova-api-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.588559 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b258c455-2d04-41a7-9522-653cdcf78f5c-config-data\") pod \"nova-api-0\" (UID: \"b258c455-2d04-41a7-9522-653cdcf78f5c\") " pod="openstack/nova-api-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.601401 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzcpb\" (UniqueName: \"kubernetes.io/projected/b258c455-2d04-41a7-9522-653cdcf78f5c-kube-api-access-tzcpb\") pod \"nova-api-0\" (UID: \"b258c455-2d04-41a7-9522-653cdcf78f5c\") " pod="openstack/nova-api-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.646757 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 10:06:12 crc kubenswrapper[4972]: I1121 10:06:12.658554 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 10:06:13 crc kubenswrapper[4972]: I1121 10:06:13.088722 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 10:06:13 crc kubenswrapper[4972]: I1121 10:06:13.169603 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 10:06:13 crc kubenswrapper[4972]: W1121 10:06:13.183930 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb258c455_2d04_41a7_9522_653cdcf78f5c.slice/crio-c8b45f9c6d44744314a9bb9860cab53a8cf3972a9d64c545b7fa3c7d06f0d981 WatchSource:0}: Error finding container c8b45f9c6d44744314a9bb9860cab53a8cf3972a9d64c545b7fa3c7d06f0d981: Status 404 returned error can't find the container with id c8b45f9c6d44744314a9bb9860cab53a8cf3972a9d64c545b7fa3c7d06f0d981 Nov 21 10:06:13 crc kubenswrapper[4972]: I1121 10:06:13.205880 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c8f56de-a95c-4144-a6f4-472e1f4dd1fd","Type":"ContainerStarted","Data":"120e1a4513fbe24f45a4eeda9d7faf06501691be3c9bb3fd40b171aac9cb912f"} Nov 21 10:06:13 crc kubenswrapper[4972]: I1121 10:06:13.775432 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfe358a0-767b-495f-ab34-5cc24717f485" path="/var/lib/kubelet/pods/dfe358a0-767b-495f-ab34-5cc24717f485/volumes" Nov 21 10:06:13 crc kubenswrapper[4972]: I1121 10:06:13.778880 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d2bd95-7a37-49eb-8e84-6880ba5435dd" path="/var/lib/kubelet/pods/e2d2bd95-7a37-49eb-8e84-6880ba5435dd/volumes" Nov 21 10:06:14 crc kubenswrapper[4972]: I1121 10:06:14.221068 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c8f56de-a95c-4144-a6f4-472e1f4dd1fd","Type":"ContainerStarted","Data":"9438aa8a40e11fb601f857c2019776fb2041c2601bdf0dc59dedd69ca8f5b1ce"} Nov 21 10:06:14 crc kubenswrapper[4972]: I1121 10:06:14.227027 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b258c455-2d04-41a7-9522-653cdcf78f5c","Type":"ContainerStarted","Data":"479540a88c4425b4e2ce78ae81f15717e6576b3018abcd763e4c3a735b9b7c34"} Nov 21 10:06:14 crc kubenswrapper[4972]: I1121 10:06:14.227099 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b258c455-2d04-41a7-9522-653cdcf78f5c","Type":"ContainerStarted","Data":"13f0a8cd0c9df039fa126f9538e7caec70834ca7ab0f7fc179407fe350f46edf"} Nov 21 10:06:14 crc kubenswrapper[4972]: I1121 10:06:14.227120 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b258c455-2d04-41a7-9522-653cdcf78f5c","Type":"ContainerStarted","Data":"c8b45f9c6d44744314a9bb9860cab53a8cf3972a9d64c545b7fa3c7d06f0d981"} Nov 21 10:06:14 crc kubenswrapper[4972]: I1121 10:06:14.257958 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.257929664 podStartE2EDuration="2.257929664s" podCreationTimestamp="2025-11-21 10:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:06:14.250382583 +0000 UTC m=+1519.359525081" watchObservedRunningTime="2025-11-21 10:06:14.257929664 +0000 UTC m=+1519.367072192" Nov 21 10:06:14 crc kubenswrapper[4972]: I1121 10:06:14.286520 4972 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.286491657 podStartE2EDuration="2.286491657s" podCreationTimestamp="2025-11-21 10:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:06:14.276429158 +0000 UTC m=+1519.385571676" watchObservedRunningTime="2025-11-21 10:06:14.286491657 +0000 UTC m=+1519.395634195" Nov 21 10:06:14 crc kubenswrapper[4972]: I1121 10:06:14.486343 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-679ks"] Nov 21 10:06:14 crc kubenswrapper[4972]: I1121 10:06:14.493810 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-679ks" Nov 21 10:06:14 crc kubenswrapper[4972]: I1121 10:06:14.503850 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-679ks"] Nov 21 10:06:14 crc kubenswrapper[4972]: I1121 10:06:14.627403 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09b49860-6afe-49c6-a841-80251a226e9b-catalog-content\") pod \"community-operators-679ks\" (UID: \"09b49860-6afe-49c6-a841-80251a226e9b\") " pod="openshift-marketplace/community-operators-679ks" Nov 21 10:06:14 crc kubenswrapper[4972]: I1121 10:06:14.627505 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09b49860-6afe-49c6-a841-80251a226e9b-utilities\") pod \"community-operators-679ks\" (UID: \"09b49860-6afe-49c6-a841-80251a226e9b\") " pod="openshift-marketplace/community-operators-679ks" Nov 21 10:06:14 crc kubenswrapper[4972]: I1121 10:06:14.627601 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt577\" (UniqueName: \"kubernetes.io/projected/09b49860-6afe-49c6-a841-80251a226e9b-kube-api-access-kt577\") pod \"community-operators-679ks\" (UID: \"09b49860-6afe-49c6-a841-80251a226e9b\") " pod="openshift-marketplace/community-operators-679ks" Nov 21 10:06:14 crc kubenswrapper[4972]: I1121 10:06:14.728863 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09b49860-6afe-49c6-a841-80251a226e9b-utilities\") pod \"community-operators-679ks\" (UID: \"09b49860-6afe-49c6-a841-80251a226e9b\") " pod="openshift-marketplace/community-operators-679ks" Nov 21 10:06:14 crc kubenswrapper[4972]: I1121 10:06:14.729007 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt577\" (UniqueName: \"kubernetes.io/projected/09b49860-6afe-49c6-a841-80251a226e9b-kube-api-access-kt577\") pod \"community-operators-679ks\" (UID: \"09b49860-6afe-49c6-a841-80251a226e9b\") " pod="openshift-marketplace/community-operators-679ks" Nov 21 10:06:14 crc kubenswrapper[4972]: I1121 10:06:14.729041 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09b49860-6afe-49c6-a841-80251a226e9b-catalog-content\") pod \"community-operators-679ks\" (UID: \"09b49860-6afe-49c6-a841-80251a226e9b\") " pod="openshift-marketplace/community-operators-679ks" Nov 21 10:06:14 crc kubenswrapper[4972]: I1121 10:06:14.729479 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09b49860-6afe-49c6-a841-80251a226e9b-catalog-content\") pod \"community-operators-679ks\" (UID: \"09b49860-6afe-49c6-a841-80251a226e9b\") " pod="openshift-marketplace/community-operators-679ks" Nov 21 10:06:14 crc kubenswrapper[4972]: I1121 10:06:14.729702 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09b49860-6afe-49c6-a841-80251a226e9b-utilities\") pod \"community-operators-679ks\" (UID: \"09b49860-6afe-49c6-a841-80251a226e9b\") " pod="openshift-marketplace/community-operators-679ks" Nov 21 10:06:14 crc kubenswrapper[4972]: I1121 10:06:14.754932 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt577\" (UniqueName: \"kubernetes.io/projected/09b49860-6afe-49c6-a841-80251a226e9b-kube-api-access-kt577\") pod \"community-operators-679ks\" (UID: \"09b49860-6afe-49c6-a841-80251a226e9b\") " pod="openshift-marketplace/community-operators-679ks" Nov 21 10:06:14 crc kubenswrapper[4972]: I1121 10:06:14.829633 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-679ks" Nov 21 10:06:15 crc kubenswrapper[4972]: I1121 10:06:15.335819 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-679ks"] Nov 21 10:06:15 crc kubenswrapper[4972]: W1121 10:06:15.338807 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09b49860_6afe_49c6_a841_80251a226e9b.slice/crio-466342c5d92c426ef01affa273676eaaf3115022ae2025d9c51ee64114a5b37f WatchSource:0}: Error finding container 466342c5d92c426ef01affa273676eaaf3115022ae2025d9c51ee64114a5b37f: Status 404 returned error can't find the container with id 466342c5d92c426ef01affa273676eaaf3115022ae2025d9c51ee64114a5b37f Nov 21 10:06:16 crc kubenswrapper[4972]: I1121 10:06:16.249660 4972 generic.go:334] "Generic (PLEG): container finished" podID="09b49860-6afe-49c6-a841-80251a226e9b" containerID="8a3f46dea62b1bd768ad79de90441d7bdd5669bf6bf74e3b27201fceb73c5348" exitCode=0 Nov 21 10:06:16 crc kubenswrapper[4972]: I1121 10:06:16.249749 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-679ks" event={"ID":"09b49860-6afe-49c6-a841-80251a226e9b","Type":"ContainerDied","Data":"8a3f46dea62b1bd768ad79de90441d7bdd5669bf6bf74e3b27201fceb73c5348"} Nov 21 10:06:16 crc kubenswrapper[4972]: I1121 10:06:16.250105 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-679ks" event={"ID":"09b49860-6afe-49c6-a841-80251a226e9b","Type":"ContainerStarted","Data":"466342c5d92c426ef01affa273676eaaf3115022ae2025d9c51ee64114a5b37f"} Nov 21 10:06:16 crc kubenswrapper[4972]: I1121 10:06:16.253170 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 10:06:17 crc kubenswrapper[4972]: I1121 10:06:17.263181 4972 generic.go:334] "Generic (PLEG): container finished" podID="09b49860-6afe-49c6-a841-80251a226e9b" containerID="fe2da96786b8e736b9b30dee59e8b6d3e0b7e4dea9762424be376e6d8a56126c" exitCode=0 Nov 21 10:06:17 crc kubenswrapper[4972]: I1121 10:06:17.263353 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-679ks" 
event={"ID":"09b49860-6afe-49c6-a841-80251a226e9b","Type":"ContainerDied","Data":"fe2da96786b8e736b9b30dee59e8b6d3e0b7e4dea9762424be376e6d8a56126c"} Nov 21 10:06:17 crc kubenswrapper[4972]: I1121 10:06:17.525274 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 10:06:17 crc kubenswrapper[4972]: I1121 10:06:17.525608 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 10:06:17 crc kubenswrapper[4972]: I1121 10:06:17.602273 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 21 10:06:17 crc kubenswrapper[4972]: I1121 10:06:17.647758 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 21 10:06:18 crc kubenswrapper[4972]: I1121 10:06:18.278606 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-679ks" event={"ID":"09b49860-6afe-49c6-a841-80251a226e9b","Type":"ContainerStarted","Data":"5e7dd14c504fd6fb25465be5c0f8f993726f67a69e17601f11d83f96fcd3aa09"} Nov 21 10:06:18 crc kubenswrapper[4972]: I1121 10:06:18.305281 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-679ks" podStartSLOduration=2.544468019 podStartE2EDuration="4.305262029s" podCreationTimestamp="2025-11-21 10:06:14 +0000 UTC" firstStartedPulling="2025-11-21 10:06:16.252764697 +0000 UTC m=+1521.361907235" lastFinishedPulling="2025-11-21 10:06:18.013558747 +0000 UTC m=+1523.122701245" observedRunningTime="2025-11-21 10:06:18.296907695 +0000 UTC m=+1523.406050203" watchObservedRunningTime="2025-11-21 10:06:18.305262029 +0000 UTC m=+1523.414404527" Nov 21 10:06:18 crc kubenswrapper[4972]: I1121 10:06:18.541990 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="25d70968-1cb9-42c5-9e6a-42be7447c211" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 10:06:18 crc kubenswrapper[4972]: I1121 10:06:18.542040 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="25d70968-1cb9-42c5-9e6a-42be7447c211" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 10:06:22 crc kubenswrapper[4972]: I1121 10:06:22.647266 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 21 10:06:22 crc kubenswrapper[4972]: I1121 10:06:22.659245 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 10:06:22 crc kubenswrapper[4972]: I1121 10:06:22.659294 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 10:06:22 crc kubenswrapper[4972]: I1121 10:06:22.677344 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 21 10:06:23 crc kubenswrapper[4972]: I1121 10:06:23.392015 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 21 10:06:23 crc kubenswrapper[4972]: I1121 10:06:23.703137 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="b258c455-2d04-41a7-9522-653cdcf78f5c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 10:06:23 crc kubenswrapper[4972]: I1121 10:06:23.703165 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b258c455-2d04-41a7-9522-653cdcf78f5c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 10:06:24 crc kubenswrapper[4972]: I1121 10:06:24.830079 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-679ks" Nov 21 10:06:24 crc kubenswrapper[4972]: I1121 10:06:24.830137 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-679ks" Nov 21 10:06:24 crc kubenswrapper[4972]: I1121 10:06:24.880307 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-679ks" Nov 21 10:06:25 crc kubenswrapper[4972]: I1121 10:06:25.404979 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-679ks" Nov 21 10:06:25 crc kubenswrapper[4972]: I1121 10:06:25.460279 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-679ks"] Nov 21 10:06:26 crc kubenswrapper[4972]: I1121 10:06:26.179454 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:06:26 crc kubenswrapper[4972]: I1121 10:06:26.179889 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:06:26 crc kubenswrapper[4972]: I1121 10:06:26.328599 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 21 10:06:27 crc kubenswrapper[4972]: I1121 10:06:27.375617 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-679ks" podUID="09b49860-6afe-49c6-a841-80251a226e9b" containerName="registry-server" containerID="cri-o://5e7dd14c504fd6fb25465be5c0f8f993726f67a69e17601f11d83f96fcd3aa09" gracePeriod=2 Nov 21 10:06:27 crc kubenswrapper[4972]: I1121 10:06:27.531264 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 10:06:27 crc kubenswrapper[4972]: I1121 10:06:27.531631 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 10:06:27 crc kubenswrapper[4972]: I1121 10:06:27.537127 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 21 10:06:27 crc kubenswrapper[4972]: I1121 10:06:27.538176 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 21 10:06:27 crc kubenswrapper[4972]: I1121 10:06:27.856354 4972 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-679ks" Nov 21 10:06:27 crc kubenswrapper[4972]: I1121 10:06:27.905448 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09b49860-6afe-49c6-a841-80251a226e9b-catalog-content\") pod \"09b49860-6afe-49c6-a841-80251a226e9b\" (UID: \"09b49860-6afe-49c6-a841-80251a226e9b\") " Nov 21 10:06:27 crc kubenswrapper[4972]: I1121 10:06:27.905613 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09b49860-6afe-49c6-a841-80251a226e9b-utilities\") pod \"09b49860-6afe-49c6-a841-80251a226e9b\" (UID: \"09b49860-6afe-49c6-a841-80251a226e9b\") " Nov 21 10:06:27 crc kubenswrapper[4972]: I1121 10:06:27.905672 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt577\" (UniqueName: \"kubernetes.io/projected/09b49860-6afe-49c6-a841-80251a226e9b-kube-api-access-kt577\") pod \"09b49860-6afe-49c6-a841-80251a226e9b\" (UID: \"09b49860-6afe-49c6-a841-80251a226e9b\") " Nov 21 10:06:27 crc kubenswrapper[4972]: I1121 10:06:27.908323 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09b49860-6afe-49c6-a841-80251a226e9b-utilities" (OuterVolumeSpecName: "utilities") pod "09b49860-6afe-49c6-a841-80251a226e9b" (UID: "09b49860-6afe-49c6-a841-80251a226e9b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:06:27 crc kubenswrapper[4972]: I1121 10:06:27.915223 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09b49860-6afe-49c6-a841-80251a226e9b-kube-api-access-kt577" (OuterVolumeSpecName: "kube-api-access-kt577") pod "09b49860-6afe-49c6-a841-80251a226e9b" (UID: "09b49860-6afe-49c6-a841-80251a226e9b"). InnerVolumeSpecName "kube-api-access-kt577". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:06:27 crc kubenswrapper[4972]: I1121 10:06:27.968363 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09b49860-6afe-49c6-a841-80251a226e9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09b49860-6afe-49c6-a841-80251a226e9b" (UID: "09b49860-6afe-49c6-a841-80251a226e9b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:06:28 crc kubenswrapper[4972]: I1121 10:06:28.008256 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09b49860-6afe-49c6-a841-80251a226e9b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:28 crc kubenswrapper[4972]: I1121 10:06:28.008298 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09b49860-6afe-49c6-a841-80251a226e9b-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:28 crc kubenswrapper[4972]: I1121 10:06:28.008311 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt577\" (UniqueName: \"kubernetes.io/projected/09b49860-6afe-49c6-a841-80251a226e9b-kube-api-access-kt577\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:28 crc kubenswrapper[4972]: I1121 10:06:28.386431 4972 generic.go:334] "Generic (PLEG): container finished" podID="09b49860-6afe-49c6-a841-80251a226e9b" containerID="5e7dd14c504fd6fb25465be5c0f8f993726f67a69e17601f11d83f96fcd3aa09" exitCode=0 Nov 21 10:06:28 crc kubenswrapper[4972]: I1121 10:06:28.387279 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-679ks" Nov 21 10:06:28 crc kubenswrapper[4972]: I1121 10:06:28.391175 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-679ks" event={"ID":"09b49860-6afe-49c6-a841-80251a226e9b","Type":"ContainerDied","Data":"5e7dd14c504fd6fb25465be5c0f8f993726f67a69e17601f11d83f96fcd3aa09"} Nov 21 10:06:28 crc kubenswrapper[4972]: I1121 10:06:28.391208 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-679ks" event={"ID":"09b49860-6afe-49c6-a841-80251a226e9b","Type":"ContainerDied","Data":"466342c5d92c426ef01affa273676eaaf3115022ae2025d9c51ee64114a5b37f"} Nov 21 10:06:28 crc kubenswrapper[4972]: I1121 10:06:28.391224 4972 scope.go:117] "RemoveContainer" containerID="5e7dd14c504fd6fb25465be5c0f8f993726f67a69e17601f11d83f96fcd3aa09" Nov 21 10:06:28 crc kubenswrapper[4972]: I1121 10:06:28.420055 4972 scope.go:117] "RemoveContainer" containerID="fe2da96786b8e736b9b30dee59e8b6d3e0b7e4dea9762424be376e6d8a56126c" Nov 21 10:06:28 crc kubenswrapper[4972]: I1121 10:06:28.422933 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-679ks"] Nov 21 10:06:28 crc kubenswrapper[4972]: I1121 10:06:28.431117 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-679ks"] Nov 21 10:06:28 crc kubenswrapper[4972]: I1121 10:06:28.452990 4972 scope.go:117] "RemoveContainer" containerID="8a3f46dea62b1bd768ad79de90441d7bdd5669bf6bf74e3b27201fceb73c5348" Nov 21 10:06:28 crc kubenswrapper[4972]: I1121 10:06:28.506174 4972 scope.go:117] "RemoveContainer" containerID="5e7dd14c504fd6fb25465be5c0f8f993726f67a69e17601f11d83f96fcd3aa09" Nov 21 10:06:28 crc kubenswrapper[4972]: E1121 10:06:28.508767 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e7dd14c504fd6fb25465be5c0f8f993726f67a69e17601f11d83f96fcd3aa09\": container with ID starting with 5e7dd14c504fd6fb25465be5c0f8f993726f67a69e17601f11d83f96fcd3aa09 not found: ID does not exist" containerID="5e7dd14c504fd6fb25465be5c0f8f993726f67a69e17601f11d83f96fcd3aa09" Nov 21 10:06:28 crc kubenswrapper[4972]: I1121 10:06:28.508926 
4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e7dd14c504fd6fb25465be5c0f8f993726f67a69e17601f11d83f96fcd3aa09"} err="failed to get container status \"5e7dd14c504fd6fb25465be5c0f8f993726f67a69e17601f11d83f96fcd3aa09\": rpc error: code = NotFound desc = could not find container \"5e7dd14c504fd6fb25465be5c0f8f993726f67a69e17601f11d83f96fcd3aa09\": container with ID starting with 5e7dd14c504fd6fb25465be5c0f8f993726f67a69e17601f11d83f96fcd3aa09 not found: ID does not exist" Nov 21 10:06:28 crc kubenswrapper[4972]: I1121 10:06:28.509045 4972 scope.go:117] "RemoveContainer" containerID="fe2da96786b8e736b9b30dee59e8b6d3e0b7e4dea9762424be376e6d8a56126c" Nov 21 10:06:28 crc kubenswrapper[4972]: E1121 10:06:28.509560 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe2da96786b8e736b9b30dee59e8b6d3e0b7e4dea9762424be376e6d8a56126c\": container with ID starting with fe2da96786b8e736b9b30dee59e8b6d3e0b7e4dea9762424be376e6d8a56126c not found: ID does not exist" containerID="fe2da96786b8e736b9b30dee59e8b6d3e0b7e4dea9762424be376e6d8a56126c" Nov 21 10:06:28 crc kubenswrapper[4972]: I1121 10:06:28.509592 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe2da96786b8e736b9b30dee59e8b6d3e0b7e4dea9762424be376e6d8a56126c"} err="failed to get container status \"fe2da96786b8e736b9b30dee59e8b6d3e0b7e4dea9762424be376e6d8a56126c\": rpc error: code = NotFound desc = could not find container \"fe2da96786b8e736b9b30dee59e8b6d3e0b7e4dea9762424be376e6d8a56126c\": container with ID starting with fe2da96786b8e736b9b30dee59e8b6d3e0b7e4dea9762424be376e6d8a56126c not found: ID does not exist" Nov 21 10:06:28 crc kubenswrapper[4972]: I1121 10:06:28.509606 4972 scope.go:117] "RemoveContainer" containerID="8a3f46dea62b1bd768ad79de90441d7bdd5669bf6bf74e3b27201fceb73c5348" Nov 21 10:06:28 crc kubenswrapper[4972]: E1121 10:06:28.509943 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a3f46dea62b1bd768ad79de90441d7bdd5669bf6bf74e3b27201fceb73c5348\": container with ID starting with 8a3f46dea62b1bd768ad79de90441d7bdd5669bf6bf74e3b27201fceb73c5348 not found: ID does not exist" containerID="8a3f46dea62b1bd768ad79de90441d7bdd5669bf6bf74e3b27201fceb73c5348" Nov 21 10:06:28 crc kubenswrapper[4972]: I1121 10:06:28.510038 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a3f46dea62b1bd768ad79de90441d7bdd5669bf6bf74e3b27201fceb73c5348"} err="failed to get container status \"8a3f46dea62b1bd768ad79de90441d7bdd5669bf6bf74e3b27201fceb73c5348\": rpc error: code = NotFound desc = could not find container \"8a3f46dea62b1bd768ad79de90441d7bdd5669bf6bf74e3b27201fceb73c5348\": container with ID starting with 8a3f46dea62b1bd768ad79de90441d7bdd5669bf6bf74e3b27201fceb73c5348 not found: ID does not exist" Nov 21 10:06:29 crc kubenswrapper[4972]: I1121 10:06:29.770996 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09b49860-6afe-49c6-a841-80251a226e9b" path="/var/lib/kubelet/pods/09b49860-6afe-49c6-a841-80251a226e9b/volumes" Nov 21 10:06:29 crc kubenswrapper[4972]: I1121 10:06:29.909123 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 10:06:29 crc kubenswrapper[4972]: I1121 10:06:29.909362 4972 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/kube-state-metrics-0" podUID="73b2b355-c8e4-496c-8d3c-2927280fed38" containerName="kube-state-metrics" containerID="cri-o://7abb8b4c502b2cb32d88ebecf840b58358ed86ae8173e0a5c658fa64af90dfec" gracePeriod=30 Nov 21 10:06:30 crc kubenswrapper[4972]: E1121 10:06:30.135779 4972 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73b2b355_c8e4_496c_8d3c_2927280fed38.slice/crio-conmon-7abb8b4c502b2cb32d88ebecf840b58358ed86ae8173e0a5c658fa64af90dfec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73b2b355_c8e4_496c_8d3c_2927280fed38.slice/crio-7abb8b4c502b2cb32d88ebecf840b58358ed86ae8173e0a5c658fa64af90dfec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10495f4a_dd76_4f6d_b078_96fb1dae42dd.slice/crio-conmon-c60b2c13bcf0e6d2ad9127ed62e64813d1e6a8bfa8cfc5f265d4cfb21ca4e9e5.scope\": RecentStats: unable to find data in memory cache]" Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.414171 4972 generic.go:334] "Generic (PLEG): container finished" podID="73b2b355-c8e4-496c-8d3c-2927280fed38" containerID="7abb8b4c502b2cb32d88ebecf840b58358ed86ae8173e0a5c658fa64af90dfec" exitCode=2 Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.414553 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"73b2b355-c8e4-496c-8d3c-2927280fed38","Type":"ContainerDied","Data":"7abb8b4c502b2cb32d88ebecf840b58358ed86ae8173e0a5c658fa64af90dfec"} Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.414597 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"73b2b355-c8e4-496c-8d3c-2927280fed38","Type":"ContainerDied","Data":"b68e18b7f59ea9379482a4ca6efa44b540a09742eaab0c1ed3e6b87508930f65"} Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.414609 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b68e18b7f59ea9379482a4ca6efa44b540a09742eaab0c1ed3e6b87508930f65" Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.417659 4972 generic.go:334] "Generic (PLEG): container finished" podID="10495f4a-dd76-4f6d-b078-96fb1dae42dd" containerID="c60b2c13bcf0e6d2ad9127ed62e64813d1e6a8bfa8cfc5f265d4cfb21ca4e9e5" exitCode=137 Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.417706 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"10495f4a-dd76-4f6d-b078-96fb1dae42dd","Type":"ContainerDied","Data":"c60b2c13bcf0e6d2ad9127ed62e64813d1e6a8bfa8cfc5f265d4cfb21ca4e9e5"} Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.417731 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"10495f4a-dd76-4f6d-b078-96fb1dae42dd","Type":"ContainerDied","Data":"7f834da6ba3499bde07021140ebfddb00a74be0912564741765de2f04930bea4"} Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.417743 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f834da6ba3499bde07021140ebfddb00a74be0912564741765de2f04930bea4" Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.444626 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.451677 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.553737 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10495f4a-dd76-4f6d-b078-96fb1dae42dd-config-data\") pod \"10495f4a-dd76-4f6d-b078-96fb1dae42dd\" (UID: \"10495f4a-dd76-4f6d-b078-96fb1dae42dd\") " Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.553789 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10495f4a-dd76-4f6d-b078-96fb1dae42dd-combined-ca-bundle\") pod \"10495f4a-dd76-4f6d-b078-96fb1dae42dd\" (UID: \"10495f4a-dd76-4f6d-b078-96fb1dae42dd\") " Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.553823 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4mps\" (UniqueName: \"kubernetes.io/projected/10495f4a-dd76-4f6d-b078-96fb1dae42dd-kube-api-access-v4mps\") pod \"10495f4a-dd76-4f6d-b078-96fb1dae42dd\" (UID: \"10495f4a-dd76-4f6d-b078-96fb1dae42dd\") " Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.554099 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67ns5\" (UniqueName: \"kubernetes.io/projected/73b2b355-c8e4-496c-8d3c-2927280fed38-kube-api-access-67ns5\") pod \"73b2b355-c8e4-496c-8d3c-2927280fed38\" (UID: \"73b2b355-c8e4-496c-8d3c-2927280fed38\") " Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.559905 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73b2b355-c8e4-496c-8d3c-2927280fed38-kube-api-access-67ns5" (OuterVolumeSpecName: "kube-api-access-67ns5") pod "73b2b355-c8e4-496c-8d3c-2927280fed38" (UID: "73b2b355-c8e4-496c-8d3c-2927280fed38"). InnerVolumeSpecName "kube-api-access-67ns5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.560043 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10495f4a-dd76-4f6d-b078-96fb1dae42dd-kube-api-access-v4mps" (OuterVolumeSpecName: "kube-api-access-v4mps") pod "10495f4a-dd76-4f6d-b078-96fb1dae42dd" (UID: "10495f4a-dd76-4f6d-b078-96fb1dae42dd"). InnerVolumeSpecName "kube-api-access-v4mps". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.587359 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10495f4a-dd76-4f6d-b078-96fb1dae42dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10495f4a-dd76-4f6d-b078-96fb1dae42dd" (UID: "10495f4a-dd76-4f6d-b078-96fb1dae42dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.587386 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10495f4a-dd76-4f6d-b078-96fb1dae42dd-config-data" (OuterVolumeSpecName: "config-data") pod "10495f4a-dd76-4f6d-b078-96fb1dae42dd" (UID: "10495f4a-dd76-4f6d-b078-96fb1dae42dd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.655857 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67ns5\" (UniqueName: \"kubernetes.io/projected/73b2b355-c8e4-496c-8d3c-2927280fed38-kube-api-access-67ns5\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.656056 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10495f4a-dd76-4f6d-b078-96fb1dae42dd-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.656115 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10495f4a-dd76-4f6d-b078-96fb1dae42dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:30 crc kubenswrapper[4972]: I1121 10:06:30.656190 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4mps\" (UniqueName: \"kubernetes.io/projected/10495f4a-dd76-4f6d-b078-96fb1dae42dd-kube-api-access-v4mps\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.429104 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.429194 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.497345 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.519203 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.532662 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.543329 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.560703 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 10:06:31 crc kubenswrapper[4972]: E1121 10:06:31.561142 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b49860-6afe-49c6-a841-80251a226e9b" containerName="extract-utilities" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.561161 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b49860-6afe-49c6-a841-80251a226e9b" containerName="extract-utilities" Nov 21 10:06:31 crc kubenswrapper[4972]: E1121 10:06:31.561174 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b49860-6afe-49c6-a841-80251a226e9b" containerName="registry-server" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.561180 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b49860-6afe-49c6-a841-80251a226e9b" containerName="registry-server" Nov 21 10:06:31 crc kubenswrapper[4972]: E1121 10:06:31.561205 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b2b355-c8e4-496c-8d3c-2927280fed38" containerName="kube-state-metrics" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.561214 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b2b355-c8e4-496c-8d3c-2927280fed38" containerName="kube-state-metrics" Nov 21 10:06:31 crc kubenswrapper[4972]: E1121 
10:06:31.561229 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10495f4a-dd76-4f6d-b078-96fb1dae42dd" containerName="nova-cell1-novncproxy-novncproxy" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.561237 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="10495f4a-dd76-4f6d-b078-96fb1dae42dd" containerName="nova-cell1-novncproxy-novncproxy" Nov 21 10:06:31 crc kubenswrapper[4972]: E1121 10:06:31.561257 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b49860-6afe-49c6-a841-80251a226e9b" containerName="extract-content" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.561263 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b49860-6afe-49c6-a841-80251a226e9b" containerName="extract-content" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.561418 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="73b2b355-c8e4-496c-8d3c-2927280fed38" containerName="kube-state-metrics" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.561432 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="09b49860-6afe-49c6-a841-80251a226e9b" containerName="registry-server" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.561442 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="10495f4a-dd76-4f6d-b078-96fb1dae42dd" containerName="nova-cell1-novncproxy-novncproxy" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.562182 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.565905 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.566070 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.577951 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.579717 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.583480 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.583493 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.588915 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.591075 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.601143 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.677846 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"71ed1f19-43e6-4245-82c1-f51b5f18d1e6\") " pod="openstack/kube-state-metrics-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.677893 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc61b266-e156-4999-8ec7-8aa1f1988e42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.677937 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc61b266-e156-4999-8ec7-8aa1f1988e42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.677961 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2sw8\" (UniqueName: \"kubernetes.io/projected/fc61b266-e156-4999-8ec7-8aa1f1988e42-kube-api-access-d2sw8\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc61b266-e156-4999-8ec7-8aa1f1988e42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.678017 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"71ed1f19-43e6-4245-82c1-f51b5f18d1e6\") " pod="openstack/kube-state-metrics-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.678047 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"71ed1f19-43e6-4245-82c1-f51b5f18d1e6\") " pod="openstack/kube-state-metrics-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.678104 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2swdg\" (UniqueName: \"kubernetes.io/projected/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-kube-api-access-2swdg\") pod \"kube-state-metrics-0\" (UID: \"71ed1f19-43e6-4245-82c1-f51b5f18d1e6\") " pod="openstack/kube-state-metrics-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.678143 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc61b266-e156-4999-8ec7-8aa1f1988e42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.678177 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc61b266-e156-4999-8ec7-8aa1f1988e42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.774067 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10495f4a-dd76-4f6d-b078-96fb1dae42dd" path="/var/lib/kubelet/pods/10495f4a-dd76-4f6d-b078-96fb1dae42dd/volumes" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.775041 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73b2b355-c8e4-496c-8d3c-2927280fed38" path="/var/lib/kubelet/pods/73b2b355-c8e4-496c-8d3c-2927280fed38/volumes" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.779405 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2sw8\" (UniqueName: \"kubernetes.io/projected/fc61b266-e156-4999-8ec7-8aa1f1988e42-kube-api-access-d2sw8\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc61b266-e156-4999-8ec7-8aa1f1988e42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.779478 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"71ed1f19-43e6-4245-82c1-f51b5f18d1e6\") " pod="openstack/kube-state-metrics-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.779509 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"71ed1f19-43e6-4245-82c1-f51b5f18d1e6\") " pod="openstack/kube-state-metrics-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.779556 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2swdg\" (UniqueName: \"kubernetes.io/projected/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-kube-api-access-2swdg\") pod \"kube-state-metrics-0\" (UID: \"71ed1f19-43e6-4245-82c1-f51b5f18d1e6\") " pod="openstack/kube-state-metrics-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.779587 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc61b266-e156-4999-8ec7-8aa1f1988e42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:31 crc kubenswrapper[4972]: 
I1121 10:06:31.779613 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc61b266-e156-4999-8ec7-8aa1f1988e42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.779667 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"71ed1f19-43e6-4245-82c1-f51b5f18d1e6\") " pod="openstack/kube-state-metrics-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.779683 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc61b266-e156-4999-8ec7-8aa1f1988e42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.779711 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc61b266-e156-4999-8ec7-8aa1f1988e42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.785121 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"71ed1f19-43e6-4245-82c1-f51b5f18d1e6\") " pod="openstack/kube-state-metrics-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.785135 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"71ed1f19-43e6-4245-82c1-f51b5f18d1e6\") " pod="openstack/kube-state-metrics-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.788363 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc61b266-e156-4999-8ec7-8aa1f1988e42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.789354 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc61b266-e156-4999-8ec7-8aa1f1988e42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.792907 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc61b266-e156-4999-8ec7-8aa1f1988e42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.794971 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" 
(UniqueName: \"kubernetes.io/secret/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"71ed1f19-43e6-4245-82c1-f51b5f18d1e6\") " pod="openstack/kube-state-metrics-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.795781 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2swdg\" (UniqueName: \"kubernetes.io/projected/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-kube-api-access-2swdg\") pod \"kube-state-metrics-0\" (UID: \"71ed1f19-43e6-4245-82c1-f51b5f18d1e6\") " pod="openstack/kube-state-metrics-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.796870 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc61b266-e156-4999-8ec7-8aa1f1988e42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.799154 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2sw8\" (UniqueName: \"kubernetes.io/projected/fc61b266-e156-4999-8ec7-8aa1f1988e42-kube-api-access-d2sw8\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc61b266-e156-4999-8ec7-8aa1f1988e42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.882595 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.883196 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7f18e62-c72d-4a5c-8738-975c81f1d724" containerName="ceilometer-central-agent" containerID="cri-o://8159b1610bcf54d00c2b259306729ba318b19f6013640a3364c2b54804b2c943" gracePeriod=30 Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.883233 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7f18e62-c72d-4a5c-8738-975c81f1d724" containerName="sg-core" containerID="cri-o://e7268272f40c6f2753a46444f0106ac9251e513804f7234f4eace92b311fe9ee" gracePeriod=30 Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.883294 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7f18e62-c72d-4a5c-8738-975c81f1d724" containerName="proxy-httpd" containerID="cri-o://0e258527fd35117af22410e22340c7e8ea64053488f1523721ac50d5007a8d2a" gracePeriod=30 Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.883516 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7f18e62-c72d-4a5c-8738-975c81f1d724" containerName="ceilometer-notification-agent" containerID="cri-o://044ae04497fc1c9c393d8cc2813cb71c8522e939bc38448b64217eb20db3cdc6" gracePeriod=30 Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.887683 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 10:06:31 crc kubenswrapper[4972]: I1121 10:06:31.901188 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.410954 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 10:06:32 crc kubenswrapper[4972]: W1121 10:06:32.425402 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71ed1f19_43e6_4245_82c1_f51b5f18d1e6.slice/crio-a4aa708a3cf47770310bf4aa94a83fd60ab3d549885c2d22b19a0f7986dee2b5 WatchSource:0}: Error finding container a4aa708a3cf47770310bf4aa94a83fd60ab3d549885c2d22b19a0f7986dee2b5: Status 404 returned error can't find the container with id a4aa708a3cf47770310bf4aa94a83fd60ab3d549885c2d22b19a0f7986dee2b5 Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.477616 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.489199 4972 generic.go:334] "Generic (PLEG): container finished" podID="d7f18e62-c72d-4a5c-8738-975c81f1d724" containerID="0e258527fd35117af22410e22340c7e8ea64053488f1523721ac50d5007a8d2a" exitCode=0 Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.489228 4972 generic.go:334] "Generic (PLEG): container finished" podID="d7f18e62-c72d-4a5c-8738-975c81f1d724" containerID="e7268272f40c6f2753a46444f0106ac9251e513804f7234f4eace92b311fe9ee" exitCode=2 Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.489236 4972 generic.go:334] "Generic (PLEG): container finished" podID="d7f18e62-c72d-4a5c-8738-975c81f1d724" containerID="044ae04497fc1c9c393d8cc2813cb71c8522e939bc38448b64217eb20db3cdc6" exitCode=0 Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.489243 4972 generic.go:334] "Generic (PLEG): container finished" podID="d7f18e62-c72d-4a5c-8738-975c81f1d724" containerID="8159b1610bcf54d00c2b259306729ba318b19f6013640a3364c2b54804b2c943" exitCode=0 Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.489292 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7f18e62-c72d-4a5c-8738-975c81f1d724","Type":"ContainerDied","Data":"0e258527fd35117af22410e22340c7e8ea64053488f1523721ac50d5007a8d2a"} Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.489318 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7f18e62-c72d-4a5c-8738-975c81f1d724","Type":"ContainerDied","Data":"e7268272f40c6f2753a46444f0106ac9251e513804f7234f4eace92b311fe9ee"} Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.489331 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7f18e62-c72d-4a5c-8738-975c81f1d724","Type":"ContainerDied","Data":"044ae04497fc1c9c393d8cc2813cb71c8522e939bc38448b64217eb20db3cdc6"} Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.489339 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7f18e62-c72d-4a5c-8738-975c81f1d724","Type":"ContainerDied","Data":"8159b1610bcf54d00c2b259306729ba318b19f6013640a3364c2b54804b2c943"} Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.492067 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"71ed1f19-43e6-4245-82c1-f51b5f18d1e6","Type":"ContainerStarted","Data":"a4aa708a3cf47770310bf4aa94a83fd60ab3d549885c2d22b19a0f7986dee2b5"} Nov 21 10:06:32 crc kubenswrapper[4972]: W1121 10:06:32.496502 4972 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc61b266_e156_4999_8ec7_8aa1f1988e42.slice/crio-7b5170e3ec4c2fa4c67d3a01f27042beb630cfe9d9a67e94747993750fc33333 WatchSource:0}: Error finding container 7b5170e3ec4c2fa4c67d3a01f27042beb630cfe9d9a67e94747993750fc33333: Status 404 returned error can't find the container with id 7b5170e3ec4c2fa4c67d3a01f27042beb630cfe9d9a67e94747993750fc33333 Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.678765 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.680003 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.692615 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.701908 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.730231 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.803248 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-config-data\") pod \"d7f18e62-c72d-4a5c-8738-975c81f1d724\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.803400 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7f18e62-c72d-4a5c-8738-975c81f1d724-log-httpd\") pod \"d7f18e62-c72d-4a5c-8738-975c81f1d724\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.803478 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4gqh\" (UniqueName: \"kubernetes.io/projected/d7f18e62-c72d-4a5c-8738-975c81f1d724-kube-api-access-h4gqh\") pod \"d7f18e62-c72d-4a5c-8738-975c81f1d724\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.803525 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-sg-core-conf-yaml\") pod \"d7f18e62-c72d-4a5c-8738-975c81f1d724\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.803634 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-scripts\") pod \"d7f18e62-c72d-4a5c-8738-975c81f1d724\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.803685 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7f18e62-c72d-4a5c-8738-975c81f1d724-run-httpd\") pod \"d7f18e62-c72d-4a5c-8738-975c81f1d724\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.803766 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-combined-ca-bundle\") pod \"d7f18e62-c72d-4a5c-8738-975c81f1d724\" (UID: \"d7f18e62-c72d-4a5c-8738-975c81f1d724\") " Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.807610 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7f18e62-c72d-4a5c-8738-975c81f1d724-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d7f18e62-c72d-4a5c-8738-975c81f1d724" (UID: "d7f18e62-c72d-4a5c-8738-975c81f1d724"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.810607 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-scripts" (OuterVolumeSpecName: "scripts") pod "d7f18e62-c72d-4a5c-8738-975c81f1d724" (UID: "d7f18e62-c72d-4a5c-8738-975c81f1d724"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.811716 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7f18e62-c72d-4a5c-8738-975c81f1d724-kube-api-access-h4gqh" (OuterVolumeSpecName: "kube-api-access-h4gqh") pod "d7f18e62-c72d-4a5c-8738-975c81f1d724" (UID: "d7f18e62-c72d-4a5c-8738-975c81f1d724"). InnerVolumeSpecName "kube-api-access-h4gqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.813447 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7f18e62-c72d-4a5c-8738-975c81f1d724-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d7f18e62-c72d-4a5c-8738-975c81f1d724" (UID: "d7f18e62-c72d-4a5c-8738-975c81f1d724"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.852239 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d7f18e62-c72d-4a5c-8738-975c81f1d724" (UID: "d7f18e62-c72d-4a5c-8738-975c81f1d724"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.905860 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.905885 4972 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7f18e62-c72d-4a5c-8738-975c81f1d724-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.905894 4972 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7f18e62-c72d-4a5c-8738-975c81f1d724-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.905902 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4gqh\" (UniqueName: \"kubernetes.io/projected/d7f18e62-c72d-4a5c-8738-975c81f1d724-kube-api-access-h4gqh\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.905913 4972 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.912411 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7f18e62-c72d-4a5c-8738-975c81f1d724" (UID: "d7f18e62-c72d-4a5c-8738-975c81f1d724"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:32 crc kubenswrapper[4972]: I1121 10:06:32.923585 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-config-data" (OuterVolumeSpecName: "config-data") pod "d7f18e62-c72d-4a5c-8738-975c81f1d724" (UID: "d7f18e62-c72d-4a5c-8738-975c81f1d724"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.007520 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.007552 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f18e62-c72d-4a5c-8738-975c81f1d724-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.547552 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"71ed1f19-43e6-4245-82c1-f51b5f18d1e6","Type":"ContainerStarted","Data":"68b9d9fc5eb11275b79950cd330ab9d42c03e228546a72b6835bf9e0589b651b"} Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.547960 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.592268 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.592948 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.1206954590000002 podStartE2EDuration="2.592933233s" podCreationTimestamp="2025-11-21 10:06:31 +0000 UTC" firstStartedPulling="2025-11-21 10:06:32.429541068 +0000 UTC m=+1537.538683566" lastFinishedPulling="2025-11-21 10:06:32.901778832 +0000 UTC m=+1538.010921340" observedRunningTime="2025-11-21 10:06:33.578532778 +0000 UTC m=+1538.687675306" watchObservedRunningTime="2025-11-21 10:06:33.592933233 +0000 UTC m=+1538.702075731" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.593085 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7f18e62-c72d-4a5c-8738-975c81f1d724","Type":"ContainerDied","Data":"d66b790f40180c116e3389eff0767e735ba2f0dd069e745d19d3eaef4eab5504"} Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.593129 4972 scope.go:117] "RemoveContainer" containerID="0e258527fd35117af22410e22340c7e8ea64053488f1523721ac50d5007a8d2a" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.624464 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fc61b266-e156-4999-8ec7-8aa1f1988e42","Type":"ContainerStarted","Data":"a762824bd37bcd1f70426519763b522438780d31ef55d3f35b56ca5424e1e1ee"} Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.624511 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fc61b266-e156-4999-8ec7-8aa1f1988e42","Type":"ContainerStarted","Data":"7b5170e3ec4c2fa4c67d3a01f27042beb630cfe9d9a67e94747993750fc33333"} Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.624823 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.628072 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.647760 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.647742867 podStartE2EDuration="2.647742867s" podCreationTimestamp="2025-11-21 10:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:06:33.644206352 +0000 UTC m=+1538.753348860" watchObservedRunningTime="2025-11-21 10:06:33.647742867 +0000 UTC m=+1538.756885365" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.667766 4972 scope.go:117] "RemoveContainer" containerID="e7268272f40c6f2753a46444f0106ac9251e513804f7234f4eace92b311fe9ee" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.671442 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.690083 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.711928 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:06:33 crc kubenswrapper[4972]: E1121 10:06:33.712353 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f18e62-c72d-4a5c-8738-975c81f1d724" containerName="ceilometer-notification-agent" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 
10:06:33.712368 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f18e62-c72d-4a5c-8738-975c81f1d724" containerName="ceilometer-notification-agent" Nov 21 10:06:33 crc kubenswrapper[4972]: E1121 10:06:33.712388 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f18e62-c72d-4a5c-8738-975c81f1d724" containerName="ceilometer-central-agent" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.712395 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f18e62-c72d-4a5c-8738-975c81f1d724" containerName="ceilometer-central-agent" Nov 21 10:06:33 crc kubenswrapper[4972]: E1121 10:06:33.712408 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f18e62-c72d-4a5c-8738-975c81f1d724" containerName="proxy-httpd" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.712414 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f18e62-c72d-4a5c-8738-975c81f1d724" containerName="proxy-httpd" Nov 21 10:06:33 crc kubenswrapper[4972]: E1121 10:06:33.712442 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f18e62-c72d-4a5c-8738-975c81f1d724" containerName="sg-core" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.712447 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f18e62-c72d-4a5c-8738-975c81f1d724" containerName="sg-core" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.712639 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f18e62-c72d-4a5c-8738-975c81f1d724" containerName="proxy-httpd" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.712656 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f18e62-c72d-4a5c-8738-975c81f1d724" containerName="ceilometer-notification-agent" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.712673 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f18e62-c72d-4a5c-8738-975c81f1d724" containerName="ceilometer-central-agent" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.712697 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f18e62-c72d-4a5c-8738-975c81f1d724" containerName="sg-core" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.721187 4972 scope.go:117] "RemoveContainer" containerID="044ae04497fc1c9c393d8cc2813cb71c8522e939bc38448b64217eb20db3cdc6" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.725428 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.730172 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.730346 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.730473 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.749877 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.751554 4972 scope.go:117] "RemoveContainer" containerID="8159b1610bcf54d00c2b259306729ba318b19f6013640a3364c2b54804b2c943" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.783202 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7f18e62-c72d-4a5c-8738-975c81f1d724" path="/var/lib/kubelet/pods/d7f18e62-c72d-4a5c-8738-975c81f1d724/volumes" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.808178 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6868d89965-c8n5j"] Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.811948 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.824352 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.824400 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ece5a3ef-094d-4371-9269-86f01f0b77f8-log-httpd\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.824440 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-config-data\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.824459 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.824500 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ece5a3ef-094d-4371-9269-86f01f0b77f8-run-httpd\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.824569 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-scripts\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.824594 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.824638 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bsnr\" (UniqueName: \"kubernetes.io/projected/ece5a3ef-094d-4371-9269-86f01f0b77f8-kube-api-access-8bsnr\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.836236 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6868d89965-c8n5j"] Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.926322 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-ovsdbserver-nb\") pod \"dnsmasq-dns-6868d89965-c8n5j\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.926374 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-dns-swift-storage-0\") pod \"dnsmasq-dns-6868d89965-c8n5j\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.926459 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bsnr\" (UniqueName: \"kubernetes.io/projected/ece5a3ef-094d-4371-9269-86f01f0b77f8-kube-api-access-8bsnr\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.926498 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-dns-svc\") pod \"dnsmasq-dns-6868d89965-c8n5j\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.926536 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.926568 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ece5a3ef-094d-4371-9269-86f01f0b77f8-log-httpd\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.926596 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-config\") pod \"dnsmasq-dns-6868d89965-c8n5j\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.926635 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-config-data\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.926655 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.926686 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz8nl\" (UniqueName: \"kubernetes.io/projected/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-kube-api-access-pz8nl\") pod \"dnsmasq-dns-6868d89965-c8n5j\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.926857 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-ovsdbserver-sb\") pod \"dnsmasq-dns-6868d89965-c8n5j\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.926913 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ece5a3ef-094d-4371-9269-86f01f0b77f8-run-httpd\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.927115 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-scripts\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.927199 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.927125 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ece5a3ef-094d-4371-9269-86f01f0b77f8-log-httpd\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.927216 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ece5a3ef-094d-4371-9269-86f01f0b77f8-run-httpd\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: 
I1121 10:06:33.932616 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.935984 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.936011 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-scripts\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.950017 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-config-data\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.950568 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:33 crc kubenswrapper[4972]: I1121 10:06:33.952615 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bsnr\" (UniqueName: \"kubernetes.io/projected/ece5a3ef-094d-4371-9269-86f01f0b77f8-kube-api-access-8bsnr\") pod \"ceilometer-0\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " pod="openstack/ceilometer-0" Nov 21 10:06:34 crc kubenswrapper[4972]: I1121 10:06:34.029079 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-config\") pod \"dnsmasq-dns-6868d89965-c8n5j\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:34 crc kubenswrapper[4972]: I1121 10:06:34.029173 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz8nl\" (UniqueName: \"kubernetes.io/projected/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-kube-api-access-pz8nl\") pod \"dnsmasq-dns-6868d89965-c8n5j\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:34 crc kubenswrapper[4972]: I1121 10:06:34.029208 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-ovsdbserver-sb\") pod \"dnsmasq-dns-6868d89965-c8n5j\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:34 crc kubenswrapper[4972]: I1121 10:06:34.029286 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-ovsdbserver-nb\") pod \"dnsmasq-dns-6868d89965-c8n5j\" (UID: 
\"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:34 crc kubenswrapper[4972]: I1121 10:06:34.029308 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-dns-swift-storage-0\") pod \"dnsmasq-dns-6868d89965-c8n5j\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:34 crc kubenswrapper[4972]: I1121 10:06:34.029355 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-dns-svc\") pod \"dnsmasq-dns-6868d89965-c8n5j\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:34 crc kubenswrapper[4972]: I1121 10:06:34.030427 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-dns-svc\") pod \"dnsmasq-dns-6868d89965-c8n5j\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:34 crc kubenswrapper[4972]: I1121 10:06:34.030713 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-ovsdbserver-sb\") pod \"dnsmasq-dns-6868d89965-c8n5j\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:34 crc kubenswrapper[4972]: I1121 10:06:34.031325 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-config\") pod \"dnsmasq-dns-6868d89965-c8n5j\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:34 crc kubenswrapper[4972]: I1121 10:06:34.031752 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-ovsdbserver-nb\") pod \"dnsmasq-dns-6868d89965-c8n5j\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:34 crc kubenswrapper[4972]: I1121 10:06:34.032384 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-dns-swift-storage-0\") pod \"dnsmasq-dns-6868d89965-c8n5j\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:34 crc kubenswrapper[4972]: I1121 10:06:34.048312 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz8nl\" (UniqueName: \"kubernetes.io/projected/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-kube-api-access-pz8nl\") pod \"dnsmasq-dns-6868d89965-c8n5j\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:34 crc kubenswrapper[4972]: I1121 10:06:34.055500 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:06:34 crc kubenswrapper[4972]: I1121 10:06:34.135002 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:34 crc kubenswrapper[4972]: W1121 10:06:34.565536 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podece5a3ef_094d_4371_9269_86f01f0b77f8.slice/crio-c4500fbec481df9edbe56cddeda769245ae852e173e005a949bcc08ad157c2ed WatchSource:0}: Error finding container c4500fbec481df9edbe56cddeda769245ae852e173e005a949bcc08ad157c2ed: Status 404 returned error can't find the container with id c4500fbec481df9edbe56cddeda769245ae852e173e005a949bcc08ad157c2ed Nov 21 10:06:34 crc kubenswrapper[4972]: I1121 10:06:34.570361 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:06:34 crc kubenswrapper[4972]: I1121 10:06:34.642442 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ece5a3ef-094d-4371-9269-86f01f0b77f8","Type":"ContainerStarted","Data":"c4500fbec481df9edbe56cddeda769245ae852e173e005a949bcc08ad157c2ed"} Nov 21 10:06:34 crc kubenswrapper[4972]: W1121 10:06:34.695491 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4280cc0e_ca6a_47d7_be4d_a05beb85de3c.slice/crio-0f0bda1adafc5a483955762bf39be4ca0275d63a1e72abe84bbcce32775bc4de WatchSource:0}: Error finding container 0f0bda1adafc5a483955762bf39be4ca0275d63a1e72abe84bbcce32775bc4de: Status 404 returned error can't find the container with id 0f0bda1adafc5a483955762bf39be4ca0275d63a1e72abe84bbcce32775bc4de Nov 21 10:06:34 crc kubenswrapper[4972]: I1121 10:06:34.697913 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6868d89965-c8n5j"] Nov 21 10:06:35 crc kubenswrapper[4972]: I1121 10:06:35.656953 4972 generic.go:334] "Generic (PLEG): container finished" podID="4280cc0e-ca6a-47d7-be4d-a05beb85de3c" containerID="f9b9c1a4d9ec5055177eaed9a2495ea1e1c2facf3f6db6eee4efc4bb144f6fda" exitCode=0 Nov 21 10:06:35 crc kubenswrapper[4972]: I1121 10:06:35.657042 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6868d89965-c8n5j" event={"ID":"4280cc0e-ca6a-47d7-be4d-a05beb85de3c","Type":"ContainerDied","Data":"f9b9c1a4d9ec5055177eaed9a2495ea1e1c2facf3f6db6eee4efc4bb144f6fda"} Nov 21 10:06:35 crc kubenswrapper[4972]: I1121 10:06:35.657631 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6868d89965-c8n5j" event={"ID":"4280cc0e-ca6a-47d7-be4d-a05beb85de3c","Type":"ContainerStarted","Data":"0f0bda1adafc5a483955762bf39be4ca0275d63a1e72abe84bbcce32775bc4de"} Nov 21 10:06:35 crc kubenswrapper[4972]: I1121 10:06:35.665918 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ece5a3ef-094d-4371-9269-86f01f0b77f8","Type":"ContainerStarted","Data":"d35bfa7444d1b4c2c5112f2eb7d6dc7ca44cdb98b6111f8efe77e83e4aeeea00"} Nov 21 10:06:36 crc kubenswrapper[4972]: I1121 10:06:36.474533 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:06:36 crc kubenswrapper[4972]: I1121 10:06:36.595542 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 10:06:36 crc kubenswrapper[4972]: I1121 10:06:36.683768 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6868d89965-c8n5j" event={"ID":"4280cc0e-ca6a-47d7-be4d-a05beb85de3c","Type":"ContainerStarted","Data":"69e04f00aa09648eb63cd97e3e98080e91e8163cf8df71f506e5b1e624817eb1"} 
Nov 21 10:06:36 crc kubenswrapper[4972]: I1121 10:06:36.684965 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:36 crc kubenswrapper[4972]: I1121 10:06:36.692543 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b258c455-2d04-41a7-9522-653cdcf78f5c" containerName="nova-api-log" containerID="cri-o://13f0a8cd0c9df039fa126f9538e7caec70834ca7ab0f7fc179407fe350f46edf" gracePeriod=30 Nov 21 10:06:36 crc kubenswrapper[4972]: I1121 10:06:36.692796 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ece5a3ef-094d-4371-9269-86f01f0b77f8","Type":"ContainerStarted","Data":"d8ab729fbaef8ab6e1590cc492ffb673fe2eef56b2ba15f887a1f4ebefe4c208"} Nov 21 10:06:36 crc kubenswrapper[4972]: I1121 10:06:36.692902 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b258c455-2d04-41a7-9522-653cdcf78f5c" containerName="nova-api-api" containerID="cri-o://479540a88c4425b4e2ce78ae81f15717e6576b3018abcd763e4c3a735b9b7c34" gracePeriod=30 Nov 21 10:06:36 crc kubenswrapper[4972]: I1121 10:06:36.755891 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6868d89965-c8n5j" podStartSLOduration=3.755864864 podStartE2EDuration="3.755864864s" podCreationTimestamp="2025-11-21 10:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:06:36.708039357 +0000 UTC m=+1541.817181875" watchObservedRunningTime="2025-11-21 10:06:36.755864864 +0000 UTC m=+1541.865007362" Nov 21 10:06:36 crc kubenswrapper[4972]: I1121 10:06:36.901293 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:37 crc kubenswrapper[4972]: I1121 10:06:37.703397 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ece5a3ef-094d-4371-9269-86f01f0b77f8","Type":"ContainerStarted","Data":"3b1f99c2fbcabd57039de18a68f2528db1bbeec49985591b87c56cced24b2931"} Nov 21 10:06:37 crc kubenswrapper[4972]: I1121 10:06:37.706535 4972 generic.go:334] "Generic (PLEG): container finished" podID="b258c455-2d04-41a7-9522-653cdcf78f5c" containerID="13f0a8cd0c9df039fa126f9538e7caec70834ca7ab0f7fc179407fe350f46edf" exitCode=143 Nov 21 10:06:37 crc kubenswrapper[4972]: I1121 10:06:37.706609 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b258c455-2d04-41a7-9522-653cdcf78f5c","Type":"ContainerDied","Data":"13f0a8cd0c9df039fa126f9538e7caec70834ca7ab0f7fc179407fe350f46edf"} Nov 21 10:06:38 crc kubenswrapper[4972]: I1121 10:06:38.725392 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ece5a3ef-094d-4371-9269-86f01f0b77f8","Type":"ContainerStarted","Data":"bb2767589d5f4243c0986e271d3e94bd73167bfedfd6c2ccb1060649f6208b57"} Nov 21 10:06:38 crc kubenswrapper[4972]: I1121 10:06:38.725633 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ece5a3ef-094d-4371-9269-86f01f0b77f8" containerName="ceilometer-central-agent" containerID="cri-o://d35bfa7444d1b4c2c5112f2eb7d6dc7ca44cdb98b6111f8efe77e83e4aeeea00" gracePeriod=30 Nov 21 10:06:38 crc kubenswrapper[4972]: I1121 10:06:38.725664 4972 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="ece5a3ef-094d-4371-9269-86f01f0b77f8" containerName="ceilometer-notification-agent" containerID="cri-o://d8ab729fbaef8ab6e1590cc492ffb673fe2eef56b2ba15f887a1f4ebefe4c208" gracePeriod=30 Nov 21 10:06:38 crc kubenswrapper[4972]: I1121 10:06:38.725670 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ece5a3ef-094d-4371-9269-86f01f0b77f8" containerName="sg-core" containerID="cri-o://3b1f99c2fbcabd57039de18a68f2528db1bbeec49985591b87c56cced24b2931" gracePeriod=30 Nov 21 10:06:38 crc kubenswrapper[4972]: I1121 10:06:38.725797 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ece5a3ef-094d-4371-9269-86f01f0b77f8" containerName="proxy-httpd" containerID="cri-o://bb2767589d5f4243c0986e271d3e94bd73167bfedfd6c2ccb1060649f6208b57" gracePeriod=30 Nov 21 10:06:38 crc kubenswrapper[4972]: I1121 10:06:38.752171 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.238777393 podStartE2EDuration="5.752151485s" podCreationTimestamp="2025-11-21 10:06:33 +0000 UTC" firstStartedPulling="2025-11-21 10:06:34.569989419 +0000 UTC m=+1539.679131917" lastFinishedPulling="2025-11-21 10:06:38.083363511 +0000 UTC m=+1543.192506009" observedRunningTime="2025-11-21 10:06:38.749259907 +0000 UTC m=+1543.858402415" watchObservedRunningTime="2025-11-21 10:06:38.752151485 +0000 UTC m=+1543.861293973" Nov 21 10:06:39 crc kubenswrapper[4972]: I1121 10:06:39.738337 4972 generic.go:334] "Generic (PLEG): container finished" podID="ece5a3ef-094d-4371-9269-86f01f0b77f8" containerID="bb2767589d5f4243c0986e271d3e94bd73167bfedfd6c2ccb1060649f6208b57" exitCode=0 Nov 21 10:06:39 crc kubenswrapper[4972]: I1121 10:06:39.738726 4972 generic.go:334] "Generic (PLEG): container finished" podID="ece5a3ef-094d-4371-9269-86f01f0b77f8" containerID="3b1f99c2fbcabd57039de18a68f2528db1bbeec49985591b87c56cced24b2931" exitCode=2 Nov 21 10:06:39 crc kubenswrapper[4972]: I1121 10:06:39.738737 4972 generic.go:334] "Generic (PLEG): container finished" podID="ece5a3ef-094d-4371-9269-86f01f0b77f8" containerID="d8ab729fbaef8ab6e1590cc492ffb673fe2eef56b2ba15f887a1f4ebefe4c208" exitCode=0 Nov 21 10:06:39 crc kubenswrapper[4972]: I1121 10:06:39.738761 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ece5a3ef-094d-4371-9269-86f01f0b77f8","Type":"ContainerDied","Data":"bb2767589d5f4243c0986e271d3e94bd73167bfedfd6c2ccb1060649f6208b57"} Nov 21 10:06:39 crc kubenswrapper[4972]: I1121 10:06:39.738812 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ece5a3ef-094d-4371-9269-86f01f0b77f8","Type":"ContainerDied","Data":"3b1f99c2fbcabd57039de18a68f2528db1bbeec49985591b87c56cced24b2931"} Nov 21 10:06:39 crc kubenswrapper[4972]: I1121 10:06:39.738877 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ece5a3ef-094d-4371-9269-86f01f0b77f8","Type":"ContainerDied","Data":"d8ab729fbaef8ab6e1590cc492ffb673fe2eef56b2ba15f887a1f4ebefe4c208"} Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.287469 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.376468 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b258c455-2d04-41a7-9522-653cdcf78f5c-logs\") pod \"b258c455-2d04-41a7-9522-653cdcf78f5c\" (UID: \"b258c455-2d04-41a7-9522-653cdcf78f5c\") " Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.376507 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b258c455-2d04-41a7-9522-653cdcf78f5c-combined-ca-bundle\") pod \"b258c455-2d04-41a7-9522-653cdcf78f5c\" (UID: \"b258c455-2d04-41a7-9522-653cdcf78f5c\") " Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.376544 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzcpb\" (UniqueName: \"kubernetes.io/projected/b258c455-2d04-41a7-9522-653cdcf78f5c-kube-api-access-tzcpb\") pod \"b258c455-2d04-41a7-9522-653cdcf78f5c\" (UID: \"b258c455-2d04-41a7-9522-653cdcf78f5c\") " Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.376628 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b258c455-2d04-41a7-9522-653cdcf78f5c-config-data\") pod \"b258c455-2d04-41a7-9522-653cdcf78f5c\" (UID: \"b258c455-2d04-41a7-9522-653cdcf78f5c\") " Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.377907 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b258c455-2d04-41a7-9522-653cdcf78f5c-logs" (OuterVolumeSpecName: "logs") pod "b258c455-2d04-41a7-9522-653cdcf78f5c" (UID: "b258c455-2d04-41a7-9522-653cdcf78f5c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.385308 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b258c455-2d04-41a7-9522-653cdcf78f5c-kube-api-access-tzcpb" (OuterVolumeSpecName: "kube-api-access-tzcpb") pod "b258c455-2d04-41a7-9522-653cdcf78f5c" (UID: "b258c455-2d04-41a7-9522-653cdcf78f5c"). InnerVolumeSpecName "kube-api-access-tzcpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.423519 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b258c455-2d04-41a7-9522-653cdcf78f5c-config-data" (OuterVolumeSpecName: "config-data") pod "b258c455-2d04-41a7-9522-653cdcf78f5c" (UID: "b258c455-2d04-41a7-9522-653cdcf78f5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.440080 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b258c455-2d04-41a7-9522-653cdcf78f5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b258c455-2d04-41a7-9522-653cdcf78f5c" (UID: "b258c455-2d04-41a7-9522-653cdcf78f5c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.478285 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b258c455-2d04-41a7-9522-653cdcf78f5c-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.478310 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b258c455-2d04-41a7-9522-653cdcf78f5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.478321 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzcpb\" (UniqueName: \"kubernetes.io/projected/b258c455-2d04-41a7-9522-653cdcf78f5c-kube-api-access-tzcpb\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.478332 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b258c455-2d04-41a7-9522-653cdcf78f5c-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.747177 4972 generic.go:334] "Generic (PLEG): container finished" podID="b258c455-2d04-41a7-9522-653cdcf78f5c" containerID="479540a88c4425b4e2ce78ae81f15717e6576b3018abcd763e4c3a735b9b7c34" exitCode=0 Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.747404 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b258c455-2d04-41a7-9522-653cdcf78f5c","Type":"ContainerDied","Data":"479540a88c4425b4e2ce78ae81f15717e6576b3018abcd763e4c3a735b9b7c34"} Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.747428 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b258c455-2d04-41a7-9522-653cdcf78f5c","Type":"ContainerDied","Data":"c8b45f9c6d44744314a9bb9860cab53a8cf3972a9d64c545b7fa3c7d06f0d981"} Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.747443 4972 scope.go:117] "RemoveContainer" containerID="479540a88c4425b4e2ce78ae81f15717e6576b3018abcd763e4c3a735b9b7c34" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.747542 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.776994 4972 scope.go:117] "RemoveContainer" containerID="13f0a8cd0c9df039fa126f9538e7caec70834ca7ab0f7fc179407fe350f46edf" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.786483 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.802483 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.807452 4972 scope.go:117] "RemoveContainer" containerID="479540a88c4425b4e2ce78ae81f15717e6576b3018abcd763e4c3a735b9b7c34" Nov 21 10:06:40 crc kubenswrapper[4972]: E1121 10:06:40.808080 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"479540a88c4425b4e2ce78ae81f15717e6576b3018abcd763e4c3a735b9b7c34\": container with ID starting with 479540a88c4425b4e2ce78ae81f15717e6576b3018abcd763e4c3a735b9b7c34 not found: ID does not exist" containerID="479540a88c4425b4e2ce78ae81f15717e6576b3018abcd763e4c3a735b9b7c34" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.808117 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"479540a88c4425b4e2ce78ae81f15717e6576b3018abcd763e4c3a735b9b7c34"} err="failed to get container status \"479540a88c4425b4e2ce78ae81f15717e6576b3018abcd763e4c3a735b9b7c34\": rpc error: code = NotFound desc = could not find container \"479540a88c4425b4e2ce78ae81f15717e6576b3018abcd763e4c3a735b9b7c34\": container with ID starting with 479540a88c4425b4e2ce78ae81f15717e6576b3018abcd763e4c3a735b9b7c34 not found: ID does not exist" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.808145 4972 scope.go:117] "RemoveContainer" containerID="13f0a8cd0c9df039fa126f9538e7caec70834ca7ab0f7fc179407fe350f46edf" Nov 21 10:06:40 crc kubenswrapper[4972]: E1121 10:06:40.808404 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13f0a8cd0c9df039fa126f9538e7caec70834ca7ab0f7fc179407fe350f46edf\": container with ID starting with 13f0a8cd0c9df039fa126f9538e7caec70834ca7ab0f7fc179407fe350f46edf not found: ID does not exist" containerID="13f0a8cd0c9df039fa126f9538e7caec70834ca7ab0f7fc179407fe350f46edf" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.808437 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13f0a8cd0c9df039fa126f9538e7caec70834ca7ab0f7fc179407fe350f46edf"} err="failed to get container status \"13f0a8cd0c9df039fa126f9538e7caec70834ca7ab0f7fc179407fe350f46edf\": rpc error: code = NotFound desc = could not find container \"13f0a8cd0c9df039fa126f9538e7caec70834ca7ab0f7fc179407fe350f46edf\": container with ID starting with 13f0a8cd0c9df039fa126f9538e7caec70834ca7ab0f7fc179407fe350f46edf not found: ID does not exist" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.812746 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 21 10:06:40 crc kubenswrapper[4972]: E1121 10:06:40.813189 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b258c455-2d04-41a7-9522-653cdcf78f5c" containerName="nova-api-api" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.813278 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b258c455-2d04-41a7-9522-653cdcf78f5c" containerName="nova-api-api" Nov 21 10:06:40 crc 
kubenswrapper[4972]: E1121 10:06:40.813300 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b258c455-2d04-41a7-9522-653cdcf78f5c" containerName="nova-api-log" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.813308 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b258c455-2d04-41a7-9522-653cdcf78f5c" containerName="nova-api-log" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.813490 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="b258c455-2d04-41a7-9522-653cdcf78f5c" containerName="nova-api-api" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.813525 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="b258c455-2d04-41a7-9522-653cdcf78f5c" containerName="nova-api-log" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.815597 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.819052 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.819166 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.819295 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.826408 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.886388 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " pod="openstack/nova-api-0" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.886427 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7c2h\" (UniqueName: \"kubernetes.io/projected/c6383716-c6ac-45b6-908d-42f4600d44ab-kube-api-access-z7c2h\") pod \"nova-api-0\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " pod="openstack/nova-api-0" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.886502 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " pod="openstack/nova-api-0" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.886549 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-public-tls-certs\") pod \"nova-api-0\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " pod="openstack/nova-api-0" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.886576 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-config-data\") pod \"nova-api-0\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " pod="openstack/nova-api-0" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.886591 4972 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6383716-c6ac-45b6-908d-42f4600d44ab-logs\") pod \"nova-api-0\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " pod="openstack/nova-api-0" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.987917 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " pod="openstack/nova-api-0" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.987966 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7c2h\" (UniqueName: \"kubernetes.io/projected/c6383716-c6ac-45b6-908d-42f4600d44ab-kube-api-access-z7c2h\") pod \"nova-api-0\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " pod="openstack/nova-api-0" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.988072 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " pod="openstack/nova-api-0" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.988135 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-public-tls-certs\") pod \"nova-api-0\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " pod="openstack/nova-api-0" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.988169 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-config-data\") pod \"nova-api-0\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " pod="openstack/nova-api-0" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.988193 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6383716-c6ac-45b6-908d-42f4600d44ab-logs\") pod \"nova-api-0\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " pod="openstack/nova-api-0" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.989007 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6383716-c6ac-45b6-908d-42f4600d44ab-logs\") pod \"nova-api-0\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " pod="openstack/nova-api-0" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.993729 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-public-tls-certs\") pod \"nova-api-0\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " pod="openstack/nova-api-0" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.994751 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " pod="openstack/nova-api-0" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.994786 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-config-data\") pod \"nova-api-0\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " pod="openstack/nova-api-0" Nov 21 10:06:40 crc kubenswrapper[4972]: I1121 10:06:40.995313 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " pod="openstack/nova-api-0" Nov 21 10:06:41 crc kubenswrapper[4972]: I1121 10:06:41.011220 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7c2h\" (UniqueName: \"kubernetes.io/projected/c6383716-c6ac-45b6-908d-42f4600d44ab-kube-api-access-z7c2h\") pod \"nova-api-0\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " pod="openstack/nova-api-0" Nov 21 10:06:41 crc kubenswrapper[4972]: I1121 10:06:41.129646 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 10:06:41 crc kubenswrapper[4972]: I1121 10:06:41.580987 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 10:06:41 crc kubenswrapper[4972]: W1121 10:06:41.588424 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6383716_c6ac_45b6_908d_42f4600d44ab.slice/crio-a4c39df558e9ff933d4cc25216b5c5087409a9d66a32538cb91f2aaee836fa4d WatchSource:0}: Error finding container a4c39df558e9ff933d4cc25216b5c5087409a9d66a32538cb91f2aaee836fa4d: Status 404 returned error can't find the container with id a4c39df558e9ff933d4cc25216b5c5087409a9d66a32538cb91f2aaee836fa4d Nov 21 10:06:41 crc kubenswrapper[4972]: I1121 10:06:41.759899 4972 generic.go:334] "Generic (PLEG): container finished" podID="ece5a3ef-094d-4371-9269-86f01f0b77f8" containerID="d35bfa7444d1b4c2c5112f2eb7d6dc7ca44cdb98b6111f8efe77e83e4aeeea00" exitCode=0 Nov 21 10:06:41 crc kubenswrapper[4972]: I1121 10:06:41.760233 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ece5a3ef-094d-4371-9269-86f01f0b77f8","Type":"ContainerDied","Data":"d35bfa7444d1b4c2c5112f2eb7d6dc7ca44cdb98b6111f8efe77e83e4aeeea00"} Nov 21 10:06:41 crc kubenswrapper[4972]: I1121 10:06:41.761311 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6383716-c6ac-45b6-908d-42f4600d44ab","Type":"ContainerStarted","Data":"a4c39df558e9ff933d4cc25216b5c5087409a9d66a32538cb91f2aaee836fa4d"} Nov 21 10:06:41 crc kubenswrapper[4972]: I1121 10:06:41.784650 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b258c455-2d04-41a7-9522-653cdcf78f5c" path="/var/lib/kubelet/pods/b258c455-2d04-41a7-9522-653cdcf78f5c/volumes" Nov 21 10:06:41 crc kubenswrapper[4972]: I1121 10:06:41.901518 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:41 crc kubenswrapper[4972]: I1121 10:06:41.916813 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 21 10:06:41 crc kubenswrapper[4972]: I1121 10:06:41.930955 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.064743 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.211642 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-ceilometer-tls-certs\") pod \"ece5a3ef-094d-4371-9269-86f01f0b77f8\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.211953 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-config-data\") pod \"ece5a3ef-094d-4371-9269-86f01f0b77f8\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.212013 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ece5a3ef-094d-4371-9269-86f01f0b77f8-run-httpd\") pod \"ece5a3ef-094d-4371-9269-86f01f0b77f8\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.212082 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bsnr\" (UniqueName: \"kubernetes.io/projected/ece5a3ef-094d-4371-9269-86f01f0b77f8-kube-api-access-8bsnr\") pod \"ece5a3ef-094d-4371-9269-86f01f0b77f8\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.212142 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-sg-core-conf-yaml\") pod \"ece5a3ef-094d-4371-9269-86f01f0b77f8\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.212174 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ece5a3ef-094d-4371-9269-86f01f0b77f8-log-httpd\") pod \"ece5a3ef-094d-4371-9269-86f01f0b77f8\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.212232 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-combined-ca-bundle\") pod \"ece5a3ef-094d-4371-9269-86f01f0b77f8\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.212280 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-scripts\") pod \"ece5a3ef-094d-4371-9269-86f01f0b77f8\" (UID: \"ece5a3ef-094d-4371-9269-86f01f0b77f8\") " Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.213846 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ece5a3ef-094d-4371-9269-86f01f0b77f8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ece5a3ef-094d-4371-9269-86f01f0b77f8" (UID: "ece5a3ef-094d-4371-9269-86f01f0b77f8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.217173 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-scripts" (OuterVolumeSpecName: "scripts") pod "ece5a3ef-094d-4371-9269-86f01f0b77f8" (UID: "ece5a3ef-094d-4371-9269-86f01f0b77f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.217353 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ece5a3ef-094d-4371-9269-86f01f0b77f8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ece5a3ef-094d-4371-9269-86f01f0b77f8" (UID: "ece5a3ef-094d-4371-9269-86f01f0b77f8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.217462 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ece5a3ef-094d-4371-9269-86f01f0b77f8-kube-api-access-8bsnr" (OuterVolumeSpecName: "kube-api-access-8bsnr") pod "ece5a3ef-094d-4371-9269-86f01f0b77f8" (UID: "ece5a3ef-094d-4371-9269-86f01f0b77f8"). InnerVolumeSpecName "kube-api-access-8bsnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.251740 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ece5a3ef-094d-4371-9269-86f01f0b77f8" (UID: "ece5a3ef-094d-4371-9269-86f01f0b77f8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.266625 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ece5a3ef-094d-4371-9269-86f01f0b77f8" (UID: "ece5a3ef-094d-4371-9269-86f01f0b77f8"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.302701 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ece5a3ef-094d-4371-9269-86f01f0b77f8" (UID: "ece5a3ef-094d-4371-9269-86f01f0b77f8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.315803 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bsnr\" (UniqueName: \"kubernetes.io/projected/ece5a3ef-094d-4371-9269-86f01f0b77f8-kube-api-access-8bsnr\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.316174 4972 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.316279 4972 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ece5a3ef-094d-4371-9269-86f01f0b77f8-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.316368 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.316452 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.316535 4972 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.316638 4972 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ece5a3ef-094d-4371-9269-86f01f0b77f8-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.323288 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-config-data" (OuterVolumeSpecName: "config-data") pod "ece5a3ef-094d-4371-9269-86f01f0b77f8" (UID: "ece5a3ef-094d-4371-9269-86f01f0b77f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.418320 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ece5a3ef-094d-4371-9269-86f01f0b77f8-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.776748 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ece5a3ef-094d-4371-9269-86f01f0b77f8","Type":"ContainerDied","Data":"c4500fbec481df9edbe56cddeda769245ae852e173e005a949bcc08ad157c2ed"} Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.776807 4972 scope.go:117] "RemoveContainer" containerID="bb2767589d5f4243c0986e271d3e94bd73167bfedfd6c2ccb1060649f6208b57" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.776860 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.781709 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6383716-c6ac-45b6-908d-42f4600d44ab","Type":"ContainerStarted","Data":"24f932eea6e6b156252e61c68005992e36ff677b687106321980e2e20a70e3d0"} Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.783750 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6383716-c6ac-45b6-908d-42f4600d44ab","Type":"ContainerStarted","Data":"e87dd555f1f0277b6c3a3843748fc3c4a761d7039ec7df8909d201c4c0753bce"} Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.803096 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.820605 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.820569712 podStartE2EDuration="2.820569712s" podCreationTimestamp="2025-11-21 10:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:06:42.804783211 +0000 UTC m=+1547.913925799" watchObservedRunningTime="2025-11-21 10:06:42.820569712 +0000 UTC m=+1547.929712250" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.821616 4972 scope.go:117] "RemoveContainer" containerID="3b1f99c2fbcabd57039de18a68f2528db1bbeec49985591b87c56cced24b2931" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.866349 4972 scope.go:117] "RemoveContainer" containerID="d8ab729fbaef8ab6e1590cc492ffb673fe2eef56b2ba15f887a1f4ebefe4c208" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.886926 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.901010 4972 scope.go:117] "RemoveContainer" containerID="d35bfa7444d1b4c2c5112f2eb7d6dc7ca44cdb98b6111f8efe77e83e4aeeea00" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.907548 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.921351 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:06:42 crc kubenswrapper[4972]: E1121 10:06:42.922147 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece5a3ef-094d-4371-9269-86f01f0b77f8" containerName="proxy-httpd" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.922172 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece5a3ef-094d-4371-9269-86f01f0b77f8" containerName="proxy-httpd" Nov 21 10:06:42 crc kubenswrapper[4972]: E1121 10:06:42.922205 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece5a3ef-094d-4371-9269-86f01f0b77f8" containerName="sg-core" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.922212 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece5a3ef-094d-4371-9269-86f01f0b77f8" containerName="sg-core" Nov 21 10:06:42 crc kubenswrapper[4972]: E1121 10:06:42.922234 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece5a3ef-094d-4371-9269-86f01f0b77f8" containerName="ceilometer-notification-agent" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.922240 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece5a3ef-094d-4371-9269-86f01f0b77f8" containerName="ceilometer-notification-agent" Nov 
21 10:06:42 crc kubenswrapper[4972]: E1121 10:06:42.922253 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece5a3ef-094d-4371-9269-86f01f0b77f8" containerName="ceilometer-central-agent" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.922258 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece5a3ef-094d-4371-9269-86f01f0b77f8" containerName="ceilometer-central-agent" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.922481 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece5a3ef-094d-4371-9269-86f01f0b77f8" containerName="sg-core" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.922492 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece5a3ef-094d-4371-9269-86f01f0b77f8" containerName="proxy-httpd" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.922514 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece5a3ef-094d-4371-9269-86f01f0b77f8" containerName="ceilometer-notification-agent" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.922524 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece5a3ef-094d-4371-9269-86f01f0b77f8" containerName="ceilometer-central-agent" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.924434 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.926295 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.926394 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.926473 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.936229 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.976893 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-wn882"] Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.978261 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wn882" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.985298 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.985494 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 21 10:06:42 crc kubenswrapper[4972]: I1121 10:06:42.986385 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wn882"] Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.063800 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.063866 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85a9950-7e9d-4e16-9b35-d6912bacadf9-log-httpd\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.063973 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85a9950-7e9d-4e16-9b35-d6912bacadf9-run-httpd\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.063997 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx5m8\" (UniqueName: \"kubernetes.io/projected/f85a9950-7e9d-4e16-9b35-d6912bacadf9-kube-api-access-xx5m8\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.064034 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.064065 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-config-data\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.064089 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-scripts\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.064142 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " 
pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.167151 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.167235 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-config-data\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.167280 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-scripts\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.167325 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79051f6b-9693-43af-af16-8298e8205c25-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wn882\" (UID: \"79051f6b-9693-43af-af16-8298e8205c25\") " pod="openstack/nova-cell1-cell-mapping-wn882" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.167371 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9c7g\" (UniqueName: \"kubernetes.io/projected/79051f6b-9693-43af-af16-8298e8205c25-kube-api-access-n9c7g\") pod \"nova-cell1-cell-mapping-wn882\" (UID: \"79051f6b-9693-43af-af16-8298e8205c25\") " pod="openstack/nova-cell1-cell-mapping-wn882" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.167419 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.167475 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.167508 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85a9950-7e9d-4e16-9b35-d6912bacadf9-log-httpd\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.167563 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79051f6b-9693-43af-af16-8298e8205c25-config-data\") pod \"nova-cell1-cell-mapping-wn882\" (UID: \"79051f6b-9693-43af-af16-8298e8205c25\") " pod="openstack/nova-cell1-cell-mapping-wn882" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.167598 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/79051f6b-9693-43af-af16-8298e8205c25-scripts\") pod \"nova-cell1-cell-mapping-wn882\" (UID: \"79051f6b-9693-43af-af16-8298e8205c25\") " pod="openstack/nova-cell1-cell-mapping-wn882" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.167625 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85a9950-7e9d-4e16-9b35-d6912bacadf9-run-httpd\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.167651 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx5m8\" (UniqueName: \"kubernetes.io/projected/f85a9950-7e9d-4e16-9b35-d6912bacadf9-kube-api-access-xx5m8\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.168622 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85a9950-7e9d-4e16-9b35-d6912bacadf9-log-httpd\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.168769 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85a9950-7e9d-4e16-9b35-d6912bacadf9-run-httpd\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.174895 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.175863 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-scripts\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.176137 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.184284 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-config-data\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.186681 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.188215 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx5m8\" (UniqueName: 
\"kubernetes.io/projected/f85a9950-7e9d-4e16-9b35-d6912bacadf9-kube-api-access-xx5m8\") pod \"ceilometer-0\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.246605 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.278793 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79051f6b-9693-43af-af16-8298e8205c25-config-data\") pod \"nova-cell1-cell-mapping-wn882\" (UID: \"79051f6b-9693-43af-af16-8298e8205c25\") " pod="openstack/nova-cell1-cell-mapping-wn882" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.279134 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79051f6b-9693-43af-af16-8298e8205c25-scripts\") pod \"nova-cell1-cell-mapping-wn882\" (UID: \"79051f6b-9693-43af-af16-8298e8205c25\") " pod="openstack/nova-cell1-cell-mapping-wn882" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.279378 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79051f6b-9693-43af-af16-8298e8205c25-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wn882\" (UID: \"79051f6b-9693-43af-af16-8298e8205c25\") " pod="openstack/nova-cell1-cell-mapping-wn882" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.279752 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9c7g\" (UniqueName: \"kubernetes.io/projected/79051f6b-9693-43af-af16-8298e8205c25-kube-api-access-n9c7g\") pod \"nova-cell1-cell-mapping-wn882\" (UID: \"79051f6b-9693-43af-af16-8298e8205c25\") " pod="openstack/nova-cell1-cell-mapping-wn882" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.285611 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79051f6b-9693-43af-af16-8298e8205c25-scripts\") pod \"nova-cell1-cell-mapping-wn882\" (UID: \"79051f6b-9693-43af-af16-8298e8205c25\") " pod="openstack/nova-cell1-cell-mapping-wn882" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.285628 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79051f6b-9693-43af-af16-8298e8205c25-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wn882\" (UID: \"79051f6b-9693-43af-af16-8298e8205c25\") " pod="openstack/nova-cell1-cell-mapping-wn882" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.291128 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79051f6b-9693-43af-af16-8298e8205c25-config-data\") pod \"nova-cell1-cell-mapping-wn882\" (UID: \"79051f6b-9693-43af-af16-8298e8205c25\") " pod="openstack/nova-cell1-cell-mapping-wn882" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.328688 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9c7g\" (UniqueName: \"kubernetes.io/projected/79051f6b-9693-43af-af16-8298e8205c25-kube-api-access-n9c7g\") pod \"nova-cell1-cell-mapping-wn882\" (UID: \"79051f6b-9693-43af-af16-8298e8205c25\") " pod="openstack/nova-cell1-cell-mapping-wn882" Nov 21 10:06:43 crc kubenswrapper[4972]: I1121 10:06:43.601509 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wn882" Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:43.768308 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ece5a3ef-094d-4371-9269-86f01f0b77f8" path="/var/lib/kubelet/pods/ece5a3ef-094d-4371-9269-86f01f0b77f8/volumes" Nov 21 10:06:44 crc kubenswrapper[4972]: W1121 10:06:43.818139 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf85a9950_7e9d_4e16_9b35_d6912bacadf9.slice/crio-656cfb33de2f54aa731c3f5e8cc9e6f629f5b67eac120e2fd6a6b15f4a722349 WatchSource:0}: Error finding container 656cfb33de2f54aa731c3f5e8cc9e6f629f5b67eac120e2fd6a6b15f4a722349: Status 404 returned error can't find the container with id 656cfb33de2f54aa731c3f5e8cc9e6f629f5b67eac120e2fd6a6b15f4a722349 Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:43.821301 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.137996 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.201570 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9c9d97f9-qf467"] Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.201786 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" podUID="28a426a5-7fdf-4c25-b5ce-56f1dc2b3596" containerName="dnsmasq-dns" containerID="cri-o://04a78903a9a5bfcdec5ad896f3ab22d3e73e9e4c94819ca7b0021d7057741c4d" gracePeriod=10 Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.636604 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.712845 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-config\") pod \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.712925 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-dns-svc\") pod \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.712989 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-ovsdbserver-sb\") pod \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.713097 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5b77\" (UniqueName: \"kubernetes.io/projected/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-kube-api-access-m5b77\") pod \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.713138 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-dns-swift-storage-0\") pod \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.713242 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-ovsdbserver-nb\") pod \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\" (UID: \"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596\") " Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.721943 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-kube-api-access-m5b77" (OuterVolumeSpecName: "kube-api-access-m5b77") pod "28a426a5-7fdf-4c25-b5ce-56f1dc2b3596" (UID: "28a426a5-7fdf-4c25-b5ce-56f1dc2b3596"). InnerVolumeSpecName "kube-api-access-m5b77". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.765903 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wn882"] Nov 21 10:06:44 crc kubenswrapper[4972]: W1121 10:06:44.780340 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79051f6b_9693_43af_af16_8298e8205c25.slice/crio-af249cc3aceb4cf13354f6b735e78adc43abb57af86daa939c023682b676edf5 WatchSource:0}: Error finding container af249cc3aceb4cf13354f6b735e78adc43abb57af86daa939c023682b676edf5: Status 404 returned error can't find the container with id af249cc3aceb4cf13354f6b735e78adc43abb57af86daa939c023682b676edf5 Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.797095 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "28a426a5-7fdf-4c25-b5ce-56f1dc2b3596" (UID: "28a426a5-7fdf-4c25-b5ce-56f1dc2b3596"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.799016 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "28a426a5-7fdf-4c25-b5ce-56f1dc2b3596" (UID: "28a426a5-7fdf-4c25-b5ce-56f1dc2b3596"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.806399 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wn882" event={"ID":"79051f6b-9693-43af-af16-8298e8205c25","Type":"ContainerStarted","Data":"af249cc3aceb4cf13354f6b735e78adc43abb57af86daa939c023682b676edf5"} Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.810243 4972 generic.go:334] "Generic (PLEG): container finished" podID="28a426a5-7fdf-4c25-b5ce-56f1dc2b3596" containerID="04a78903a9a5bfcdec5ad896f3ab22d3e73e9e4c94819ca7b0021d7057741c4d" exitCode=0 Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.810298 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" event={"ID":"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596","Type":"ContainerDied","Data":"04a78903a9a5bfcdec5ad896f3ab22d3e73e9e4c94819ca7b0021d7057741c4d"} Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.810325 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" event={"ID":"28a426a5-7fdf-4c25-b5ce-56f1dc2b3596","Type":"ContainerDied","Data":"1df9f39b586a847f0a07d08fc56ebd51878cd96c57dab61ee9b21dd6b8525bf8"} Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.810340 4972 scope.go:117] "RemoveContainer" containerID="04a78903a9a5bfcdec5ad896f3ab22d3e73e9e4c94819ca7b0021d7057741c4d" Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.810558 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9c9d97f9-qf467" Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.812934 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85a9950-7e9d-4e16-9b35-d6912bacadf9","Type":"ContainerStarted","Data":"102434648feec12743eab24b7f83c96674ffe057bcf52540be9617ee45d178a0"} Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.812976 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85a9950-7e9d-4e16-9b35-d6912bacadf9","Type":"ContainerStarted","Data":"656cfb33de2f54aa731c3f5e8cc9e6f629f5b67eac120e2fd6a6b15f4a722349"} Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.815101 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "28a426a5-7fdf-4c25-b5ce-56f1dc2b3596" (UID: "28a426a5-7fdf-4c25-b5ce-56f1dc2b3596"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.815484 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-config" (OuterVolumeSpecName: "config") pod "28a426a5-7fdf-4c25-b5ce-56f1dc2b3596" (UID: "28a426a5-7fdf-4c25-b5ce-56f1dc2b3596"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.819492 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.819533 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5b77\" (UniqueName: \"kubernetes.io/projected/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-kube-api-access-m5b77\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.819546 4972 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.819555 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.819564 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.822135 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "28a426a5-7fdf-4c25-b5ce-56f1dc2b3596" (UID: "28a426a5-7fdf-4c25-b5ce-56f1dc2b3596"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.833821 4972 scope.go:117] "RemoveContainer" containerID="38fd9da6f8719034bf40709cf28aa6814f879a28702bd232b8381e45b82cda98" Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.870291 4972 scope.go:117] "RemoveContainer" containerID="04a78903a9a5bfcdec5ad896f3ab22d3e73e9e4c94819ca7b0021d7057741c4d" Nov 21 10:06:44 crc kubenswrapper[4972]: E1121 10:06:44.870899 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04a78903a9a5bfcdec5ad896f3ab22d3e73e9e4c94819ca7b0021d7057741c4d\": container with ID starting with 04a78903a9a5bfcdec5ad896f3ab22d3e73e9e4c94819ca7b0021d7057741c4d not found: ID does not exist" containerID="04a78903a9a5bfcdec5ad896f3ab22d3e73e9e4c94819ca7b0021d7057741c4d" Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.870954 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04a78903a9a5bfcdec5ad896f3ab22d3e73e9e4c94819ca7b0021d7057741c4d"} err="failed to get container status \"04a78903a9a5bfcdec5ad896f3ab22d3e73e9e4c94819ca7b0021d7057741c4d\": rpc error: code = NotFound desc = could not find container \"04a78903a9a5bfcdec5ad896f3ab22d3e73e9e4c94819ca7b0021d7057741c4d\": container with ID starting with 04a78903a9a5bfcdec5ad896f3ab22d3e73e9e4c94819ca7b0021d7057741c4d not found: ID does not exist" Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.870993 4972 scope.go:117] "RemoveContainer" containerID="38fd9da6f8719034bf40709cf28aa6814f879a28702bd232b8381e45b82cda98" Nov 21 10:06:44 crc kubenswrapper[4972]: E1121 10:06:44.871354 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38fd9da6f8719034bf40709cf28aa6814f879a28702bd232b8381e45b82cda98\": container with ID starting with 38fd9da6f8719034bf40709cf28aa6814f879a28702bd232b8381e45b82cda98 not found: ID does not exist" containerID="38fd9da6f8719034bf40709cf28aa6814f879a28702bd232b8381e45b82cda98" Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.871379 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38fd9da6f8719034bf40709cf28aa6814f879a28702bd232b8381e45b82cda98"} err="failed to get container status \"38fd9da6f8719034bf40709cf28aa6814f879a28702bd232b8381e45b82cda98\": rpc error: code = NotFound desc = could not find container \"38fd9da6f8719034bf40709cf28aa6814f879a28702bd232b8381e45b82cda98\": container with ID starting with 38fd9da6f8719034bf40709cf28aa6814f879a28702bd232b8381e45b82cda98 not found: ID does not exist" Nov 21 10:06:44 crc kubenswrapper[4972]: I1121 10:06:44.922451 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:45 crc kubenswrapper[4972]: I1121 10:06:45.148009 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9c9d97f9-qf467"] Nov 21 10:06:45 crc kubenswrapper[4972]: I1121 10:06:45.157343 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b9c9d97f9-qf467"] Nov 21 10:06:45 crc kubenswrapper[4972]: I1121 10:06:45.773382 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28a426a5-7fdf-4c25-b5ce-56f1dc2b3596" path="/var/lib/kubelet/pods/28a426a5-7fdf-4c25-b5ce-56f1dc2b3596/volumes" 
Nov 21 10:06:45 crc kubenswrapper[4972]: I1121 10:06:45.841209 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wn882" event={"ID":"79051f6b-9693-43af-af16-8298e8205c25","Type":"ContainerStarted","Data":"a049093f6f09a795c8830a24c84bf46a23071b861fe261c2d86c5c7a2d1602c9"} Nov 21 10:06:45 crc kubenswrapper[4972]: I1121 10:06:45.871795 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-wn882" podStartSLOduration=3.8717734 podStartE2EDuration="3.8717734s" podCreationTimestamp="2025-11-21 10:06:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:06:45.856783569 +0000 UTC m=+1550.965926067" watchObservedRunningTime="2025-11-21 10:06:45.8717734 +0000 UTC m=+1550.980915908" Nov 21 10:06:46 crc kubenswrapper[4972]: I1121 10:06:46.866001 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85a9950-7e9d-4e16-9b35-d6912bacadf9","Type":"ContainerStarted","Data":"286335ef286bcd6ad23d79c21ed873d352f5bcd061facf6a215c8094c81d6976"} Nov 21 10:06:46 crc kubenswrapper[4972]: I1121 10:06:46.866287 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85a9950-7e9d-4e16-9b35-d6912bacadf9","Type":"ContainerStarted","Data":"0de233b9671d8c6438cab670dcbf6fed2c7c33040cc7b1af5fe0bdb4e3dd967b"} Nov 21 10:06:48 crc kubenswrapper[4972]: I1121 10:06:48.887446 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85a9950-7e9d-4e16-9b35-d6912bacadf9","Type":"ContainerStarted","Data":"5408c1bb0f0589423c4f14602bd190494f5ead1bdc142f1e32e478d7ceca9219"} Nov 21 10:06:48 crc kubenswrapper[4972]: I1121 10:06:48.887876 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 10:06:48 crc kubenswrapper[4972]: I1121 10:06:48.929225 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.564109611 podStartE2EDuration="6.929198863s" podCreationTimestamp="2025-11-21 10:06:42 +0000 UTC" firstStartedPulling="2025-11-21 10:06:43.820439059 +0000 UTC m=+1548.929581557" lastFinishedPulling="2025-11-21 10:06:48.185528271 +0000 UTC m=+1553.294670809" observedRunningTime="2025-11-21 10:06:48.923578893 +0000 UTC m=+1554.032721431" watchObservedRunningTime="2025-11-21 10:06:48.929198863 +0000 UTC m=+1554.038341401" Nov 21 10:06:49 crc kubenswrapper[4972]: I1121 10:06:49.901876 4972 generic.go:334] "Generic (PLEG): container finished" podID="79051f6b-9693-43af-af16-8298e8205c25" containerID="a049093f6f09a795c8830a24c84bf46a23071b861fe261c2d86c5c7a2d1602c9" exitCode=0 Nov 21 10:06:49 crc kubenswrapper[4972]: I1121 10:06:49.901883 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wn882" event={"ID":"79051f6b-9693-43af-af16-8298e8205c25","Type":"ContainerDied","Data":"a049093f6f09a795c8830a24c84bf46a23071b861fe261c2d86c5c7a2d1602c9"} Nov 21 10:06:50 crc kubenswrapper[4972]: I1121 10:06:50.543295 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pxdcp"] Nov 21 10:06:50 crc kubenswrapper[4972]: E1121 10:06:50.544032 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a426a5-7fdf-4c25-b5ce-56f1dc2b3596" containerName="dnsmasq-dns" Nov 21 10:06:50 crc kubenswrapper[4972]: I1121 10:06:50.544069 
4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a426a5-7fdf-4c25-b5ce-56f1dc2b3596" containerName="dnsmasq-dns" Nov 21 10:06:50 crc kubenswrapper[4972]: E1121 10:06:50.544131 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a426a5-7fdf-4c25-b5ce-56f1dc2b3596" containerName="init" Nov 21 10:06:50 crc kubenswrapper[4972]: I1121 10:06:50.544144 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a426a5-7fdf-4c25-b5ce-56f1dc2b3596" containerName="init" Nov 21 10:06:50 crc kubenswrapper[4972]: I1121 10:06:50.544469 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="28a426a5-7fdf-4c25-b5ce-56f1dc2b3596" containerName="dnsmasq-dns" Nov 21 10:06:50 crc kubenswrapper[4972]: I1121 10:06:50.547414 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxdcp" Nov 21 10:06:50 crc kubenswrapper[4972]: I1121 10:06:50.563811 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxdcp"] Nov 21 10:06:50 crc kubenswrapper[4972]: I1121 10:06:50.653687 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f178e0-a635-45ee-89cd-64951fb05e8d-catalog-content\") pod \"redhat-marketplace-pxdcp\" (UID: \"87f178e0-a635-45ee-89cd-64951fb05e8d\") " pod="openshift-marketplace/redhat-marketplace-pxdcp" Nov 21 10:06:50 crc kubenswrapper[4972]: I1121 10:06:50.653847 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6lsf\" (UniqueName: \"kubernetes.io/projected/87f178e0-a635-45ee-89cd-64951fb05e8d-kube-api-access-h6lsf\") pod \"redhat-marketplace-pxdcp\" (UID: \"87f178e0-a635-45ee-89cd-64951fb05e8d\") " pod="openshift-marketplace/redhat-marketplace-pxdcp" Nov 21 10:06:50 crc kubenswrapper[4972]: I1121 10:06:50.653914 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f178e0-a635-45ee-89cd-64951fb05e8d-utilities\") pod \"redhat-marketplace-pxdcp\" (UID: \"87f178e0-a635-45ee-89cd-64951fb05e8d\") " pod="openshift-marketplace/redhat-marketplace-pxdcp" Nov 21 10:06:50 crc kubenswrapper[4972]: I1121 10:06:50.756317 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f178e0-a635-45ee-89cd-64951fb05e8d-catalog-content\") pod \"redhat-marketplace-pxdcp\" (UID: \"87f178e0-a635-45ee-89cd-64951fb05e8d\") " pod="openshift-marketplace/redhat-marketplace-pxdcp" Nov 21 10:06:50 crc kubenswrapper[4972]: I1121 10:06:50.757383 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6lsf\" (UniqueName: \"kubernetes.io/projected/87f178e0-a635-45ee-89cd-64951fb05e8d-kube-api-access-h6lsf\") pod \"redhat-marketplace-pxdcp\" (UID: \"87f178e0-a635-45ee-89cd-64951fb05e8d\") " pod="openshift-marketplace/redhat-marketplace-pxdcp" Nov 21 10:06:50 crc kubenswrapper[4972]: I1121 10:06:50.757438 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f178e0-a635-45ee-89cd-64951fb05e8d-utilities\") pod \"redhat-marketplace-pxdcp\" (UID: \"87f178e0-a635-45ee-89cd-64951fb05e8d\") " pod="openshift-marketplace/redhat-marketplace-pxdcp" Nov 21 10:06:50 crc kubenswrapper[4972]: I1121 10:06:50.757219 
4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f178e0-a635-45ee-89cd-64951fb05e8d-catalog-content\") pod \"redhat-marketplace-pxdcp\" (UID: \"87f178e0-a635-45ee-89cd-64951fb05e8d\") " pod="openshift-marketplace/redhat-marketplace-pxdcp" Nov 21 10:06:50 crc kubenswrapper[4972]: I1121 10:06:50.758106 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f178e0-a635-45ee-89cd-64951fb05e8d-utilities\") pod \"redhat-marketplace-pxdcp\" (UID: \"87f178e0-a635-45ee-89cd-64951fb05e8d\") " pod="openshift-marketplace/redhat-marketplace-pxdcp" Nov 21 10:06:50 crc kubenswrapper[4972]: I1121 10:06:50.783110 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6lsf\" (UniqueName: \"kubernetes.io/projected/87f178e0-a635-45ee-89cd-64951fb05e8d-kube-api-access-h6lsf\") pod \"redhat-marketplace-pxdcp\" (UID: \"87f178e0-a635-45ee-89cd-64951fb05e8d\") " pod="openshift-marketplace/redhat-marketplace-pxdcp" Nov 21 10:06:50 crc kubenswrapper[4972]: I1121 10:06:50.884174 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxdcp" Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.133765 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.134375 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.494938 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxdcp"] Nov 21 10:06:51 crc kubenswrapper[4972]: W1121 10:06:51.503704 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87f178e0_a635_45ee_89cd_64951fb05e8d.slice/crio-1e2a2053b4f2b552031434614e06e18ddfc9b2b69658909e383aebb615876a26 WatchSource:0}: Error finding container 1e2a2053b4f2b552031434614e06e18ddfc9b2b69658909e383aebb615876a26: Status 404 returned error can't find the container with id 1e2a2053b4f2b552031434614e06e18ddfc9b2b69658909e383aebb615876a26 Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.673388 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wn882" Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.779267 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79051f6b-9693-43af-af16-8298e8205c25-combined-ca-bundle\") pod \"79051f6b-9693-43af-af16-8298e8205c25\" (UID: \"79051f6b-9693-43af-af16-8298e8205c25\") " Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.779472 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9c7g\" (UniqueName: \"kubernetes.io/projected/79051f6b-9693-43af-af16-8298e8205c25-kube-api-access-n9c7g\") pod \"79051f6b-9693-43af-af16-8298e8205c25\" (UID: \"79051f6b-9693-43af-af16-8298e8205c25\") " Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.779563 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79051f6b-9693-43af-af16-8298e8205c25-scripts\") pod \"79051f6b-9693-43af-af16-8298e8205c25\" (UID: \"79051f6b-9693-43af-af16-8298e8205c25\") " Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.779601 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79051f6b-9693-43af-af16-8298e8205c25-config-data\") pod \"79051f6b-9693-43af-af16-8298e8205c25\" (UID: \"79051f6b-9693-43af-af16-8298e8205c25\") " Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.787543 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79051f6b-9693-43af-af16-8298e8205c25-kube-api-access-n9c7g" (OuterVolumeSpecName: "kube-api-access-n9c7g") pod "79051f6b-9693-43af-af16-8298e8205c25" (UID: "79051f6b-9693-43af-af16-8298e8205c25"). InnerVolumeSpecName "kube-api-access-n9c7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.787587 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79051f6b-9693-43af-af16-8298e8205c25-scripts" (OuterVolumeSpecName: "scripts") pod "79051f6b-9693-43af-af16-8298e8205c25" (UID: "79051f6b-9693-43af-af16-8298e8205c25"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.820038 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79051f6b-9693-43af-af16-8298e8205c25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79051f6b-9693-43af-af16-8298e8205c25" (UID: "79051f6b-9693-43af-af16-8298e8205c25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.831339 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79051f6b-9693-43af-af16-8298e8205c25-config-data" (OuterVolumeSpecName: "config-data") pod "79051f6b-9693-43af-af16-8298e8205c25" (UID: "79051f6b-9693-43af-af16-8298e8205c25"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.881952 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79051f6b-9693-43af-af16-8298e8205c25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.881984 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9c7g\" (UniqueName: \"kubernetes.io/projected/79051f6b-9693-43af-af16-8298e8205c25-kube-api-access-n9c7g\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.881996 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79051f6b-9693-43af-af16-8298e8205c25-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.882005 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79051f6b-9693-43af-af16-8298e8205c25-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.945397 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wn882" event={"ID":"79051f6b-9693-43af-af16-8298e8205c25","Type":"ContainerDied","Data":"af249cc3aceb4cf13354f6b735e78adc43abb57af86daa939c023682b676edf5"} Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.945693 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af249cc3aceb4cf13354f6b735e78adc43abb57af86daa939c023682b676edf5" Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.945760 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wn882" Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.955419 4972 generic.go:334] "Generic (PLEG): container finished" podID="87f178e0-a635-45ee-89cd-64951fb05e8d" containerID="a3460af0cd5a5186e1a043948cc21b2ba00aa0c14ff7ad192d5b52ea025c1af1" exitCode=0 Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.955469 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxdcp" event={"ID":"87f178e0-a635-45ee-89cd-64951fb05e8d","Type":"ContainerDied","Data":"a3460af0cd5a5186e1a043948cc21b2ba00aa0c14ff7ad192d5b52ea025c1af1"} Nov 21 10:06:51 crc kubenswrapper[4972]: I1121 10:06:51.955498 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxdcp" event={"ID":"87f178e0-a635-45ee-89cd-64951fb05e8d","Type":"ContainerStarted","Data":"1e2a2053b4f2b552031434614e06e18ddfc9b2b69658909e383aebb615876a26"} Nov 21 10:06:52 crc kubenswrapper[4972]: I1121 10:06:52.099694 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 10:06:52 crc kubenswrapper[4972]: I1121 10:06:52.100333 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c6383716-c6ac-45b6-908d-42f4600d44ab" containerName="nova-api-api" containerID="cri-o://24f932eea6e6b156252e61c68005992e36ff677b687106321980e2e20a70e3d0" gracePeriod=30 Nov 21 10:06:52 crc kubenswrapper[4972]: I1121 10:06:52.100420 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c6383716-c6ac-45b6-908d-42f4600d44ab" containerName="nova-api-log" containerID="cri-o://e87dd555f1f0277b6c3a3843748fc3c4a761d7039ec7df8909d201c4c0753bce" 
gracePeriod=30 Nov 21 10:06:52 crc kubenswrapper[4972]: I1121 10:06:52.111854 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c6383716-c6ac-45b6-908d-42f4600d44ab" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.196:8774/\": EOF" Nov 21 10:06:52 crc kubenswrapper[4972]: I1121 10:06:52.112317 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c6383716-c6ac-45b6-908d-42f4600d44ab" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.196:8774/\": EOF" Nov 21 10:06:52 crc kubenswrapper[4972]: I1121 10:06:52.128775 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 10:06:52 crc kubenswrapper[4972]: I1121 10:06:52.129299 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0c8f56de-a95c-4144-a6f4-472e1f4dd1fd" containerName="nova-scheduler-scheduler" containerID="cri-o://9438aa8a40e11fb601f857c2019776fb2041c2601bdf0dc59dedd69ca8f5b1ce" gracePeriod=30 Nov 21 10:06:52 crc kubenswrapper[4972]: I1121 10:06:52.193951 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:06:52 crc kubenswrapper[4972]: I1121 10:06:52.194211 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="25d70968-1cb9-42c5-9e6a-42be7447c211" containerName="nova-metadata-log" containerID="cri-o://39589530ed982f493eea7a56eb4fbabd714aa196e697f256a3c6c9384da05dbd" gracePeriod=30 Nov 21 10:06:52 crc kubenswrapper[4972]: I1121 10:06:52.194279 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="25d70968-1cb9-42c5-9e6a-42be7447c211" containerName="nova-metadata-metadata" containerID="cri-o://7ea2abf0f7d755ab2861190cc97c1b1b24d73ce7eb2a6178eb1dde03a9649db4" gracePeriod=30 Nov 21 10:06:52 crc kubenswrapper[4972]: E1121 10:06:52.651093 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9438aa8a40e11fb601f857c2019776fb2041c2601bdf0dc59dedd69ca8f5b1ce" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 10:06:52 crc kubenswrapper[4972]: E1121 10:06:52.656053 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9438aa8a40e11fb601f857c2019776fb2041c2601bdf0dc59dedd69ca8f5b1ce" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 10:06:52 crc kubenswrapper[4972]: E1121 10:06:52.657465 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9438aa8a40e11fb601f857c2019776fb2041c2601bdf0dc59dedd69ca8f5b1ce" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 10:06:52 crc kubenswrapper[4972]: E1121 10:06:52.657513 4972 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0c8f56de-a95c-4144-a6f4-472e1f4dd1fd" containerName="nova-scheduler-scheduler" Nov 21 10:06:52 crc 
kubenswrapper[4972]: I1121 10:06:52.969055 4972 generic.go:334] "Generic (PLEG): container finished" podID="87f178e0-a635-45ee-89cd-64951fb05e8d" containerID="e8fb6c2b2462245e6f0aed333c1a7188f9beba1c46182cc82726e2673312126b" exitCode=0 Nov 21 10:06:52 crc kubenswrapper[4972]: I1121 10:06:52.969115 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxdcp" event={"ID":"87f178e0-a635-45ee-89cd-64951fb05e8d","Type":"ContainerDied","Data":"e8fb6c2b2462245e6f0aed333c1a7188f9beba1c46182cc82726e2673312126b"} Nov 21 10:06:52 crc kubenswrapper[4972]: I1121 10:06:52.971965 4972 generic.go:334] "Generic (PLEG): container finished" podID="25d70968-1cb9-42c5-9e6a-42be7447c211" containerID="39589530ed982f493eea7a56eb4fbabd714aa196e697f256a3c6c9384da05dbd" exitCode=143 Nov 21 10:06:52 crc kubenswrapper[4972]: I1121 10:06:52.972043 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"25d70968-1cb9-42c5-9e6a-42be7447c211","Type":"ContainerDied","Data":"39589530ed982f493eea7a56eb4fbabd714aa196e697f256a3c6c9384da05dbd"} Nov 21 10:06:52 crc kubenswrapper[4972]: I1121 10:06:52.975640 4972 generic.go:334] "Generic (PLEG): container finished" podID="c6383716-c6ac-45b6-908d-42f4600d44ab" containerID="e87dd555f1f0277b6c3a3843748fc3c4a761d7039ec7df8909d201c4c0753bce" exitCode=143 Nov 21 10:06:52 crc kubenswrapper[4972]: I1121 10:06:52.975676 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6383716-c6ac-45b6-908d-42f4600d44ab","Type":"ContainerDied","Data":"e87dd555f1f0277b6c3a3843748fc3c4a761d7039ec7df8909d201c4c0753bce"} Nov 21 10:06:53 crc kubenswrapper[4972]: I1121 10:06:53.986218 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxdcp" event={"ID":"87f178e0-a635-45ee-89cd-64951fb05e8d","Type":"ContainerStarted","Data":"c3108605566c0a96f437c6f5d08e1440a50dbf0186d356a9ac92a9d52faca713"} Nov 21 10:06:54 crc kubenswrapper[4972]: I1121 10:06:54.007788 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pxdcp" podStartSLOduration=2.524508715 podStartE2EDuration="4.007767802s" podCreationTimestamp="2025-11-21 10:06:50 +0000 UTC" firstStartedPulling="2025-11-21 10:06:51.957313235 +0000 UTC m=+1557.066455733" lastFinishedPulling="2025-11-21 10:06:53.440572322 +0000 UTC m=+1558.549714820" observedRunningTime="2025-11-21 10:06:54.002774599 +0000 UTC m=+1559.111917137" watchObservedRunningTime="2025-11-21 10:06:54.007767802 +0000 UTC m=+1559.116910310" Nov 21 10:06:55 crc kubenswrapper[4972]: I1121 10:06:55.356510 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="25d70968-1cb9-42c5-9e6a-42be7447c211" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": read tcp 10.217.0.2:38202->10.217.0.187:8775: read: connection reset by peer" Nov 21 10:06:55 crc kubenswrapper[4972]: I1121 10:06:55.356569 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="25d70968-1cb9-42c5-9e6a-42be7447c211" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": read tcp 10.217.0.2:38216->10.217.0.187:8775: read: connection reset by peer" Nov 21 10:06:55 crc kubenswrapper[4972]: I1121 10:06:55.838329 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 10:06:55 crc kubenswrapper[4972]: I1121 10:06:55.895692 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d70968-1cb9-42c5-9e6a-42be7447c211-nova-metadata-tls-certs\") pod \"25d70968-1cb9-42c5-9e6a-42be7447c211\" (UID: \"25d70968-1cb9-42c5-9e6a-42be7447c211\") " Nov 21 10:06:55 crc kubenswrapper[4972]: I1121 10:06:55.895845 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d70968-1cb9-42c5-9e6a-42be7447c211-logs\") pod \"25d70968-1cb9-42c5-9e6a-42be7447c211\" (UID: \"25d70968-1cb9-42c5-9e6a-42be7447c211\") " Nov 21 10:06:55 crc kubenswrapper[4972]: I1121 10:06:55.895995 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d70968-1cb9-42c5-9e6a-42be7447c211-config-data\") pod \"25d70968-1cb9-42c5-9e6a-42be7447c211\" (UID: \"25d70968-1cb9-42c5-9e6a-42be7447c211\") " Nov 21 10:06:55 crc kubenswrapper[4972]: I1121 10:06:55.896044 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d70968-1cb9-42c5-9e6a-42be7447c211-combined-ca-bundle\") pod \"25d70968-1cb9-42c5-9e6a-42be7447c211\" (UID: \"25d70968-1cb9-42c5-9e6a-42be7447c211\") " Nov 21 10:06:55 crc kubenswrapper[4972]: I1121 10:06:55.896166 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st9j4\" (UniqueName: \"kubernetes.io/projected/25d70968-1cb9-42c5-9e6a-42be7447c211-kube-api-access-st9j4\") pod \"25d70968-1cb9-42c5-9e6a-42be7447c211\" (UID: \"25d70968-1cb9-42c5-9e6a-42be7447c211\") " Nov 21 10:06:55 crc kubenswrapper[4972]: I1121 10:06:55.897360 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25d70968-1cb9-42c5-9e6a-42be7447c211-logs" (OuterVolumeSpecName: "logs") pod "25d70968-1cb9-42c5-9e6a-42be7447c211" (UID: "25d70968-1cb9-42c5-9e6a-42be7447c211"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:06:55 crc kubenswrapper[4972]: I1121 10:06:55.905407 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25d70968-1cb9-42c5-9e6a-42be7447c211-kube-api-access-st9j4" (OuterVolumeSpecName: "kube-api-access-st9j4") pod "25d70968-1cb9-42c5-9e6a-42be7447c211" (UID: "25d70968-1cb9-42c5-9e6a-42be7447c211"). InnerVolumeSpecName "kube-api-access-st9j4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:06:55 crc kubenswrapper[4972]: I1121 10:06:55.931784 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d70968-1cb9-42c5-9e6a-42be7447c211-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25d70968-1cb9-42c5-9e6a-42be7447c211" (UID: "25d70968-1cb9-42c5-9e6a-42be7447c211"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:55 crc kubenswrapper[4972]: I1121 10:06:55.936214 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d70968-1cb9-42c5-9e6a-42be7447c211-config-data" (OuterVolumeSpecName: "config-data") pod "25d70968-1cb9-42c5-9e6a-42be7447c211" (UID: "25d70968-1cb9-42c5-9e6a-42be7447c211"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:55 crc kubenswrapper[4972]: I1121 10:06:55.951641 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d70968-1cb9-42c5-9e6a-42be7447c211-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "25d70968-1cb9-42c5-9e6a-42be7447c211" (UID: "25d70968-1cb9-42c5-9e6a-42be7447c211"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:55 crc kubenswrapper[4972]: I1121 10:06:55.998150 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d70968-1cb9-42c5-9e6a-42be7447c211-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:55 crc kubenswrapper[4972]: I1121 10:06:55.998338 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d70968-1cb9-42c5-9e6a-42be7447c211-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:55 crc kubenswrapper[4972]: I1121 10:06:55.998394 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st9j4\" (UniqueName: \"kubernetes.io/projected/25d70968-1cb9-42c5-9e6a-42be7447c211-kube-api-access-st9j4\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:55 crc kubenswrapper[4972]: I1121 10:06:55.998446 4972 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d70968-1cb9-42c5-9e6a-42be7447c211-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:55 crc kubenswrapper[4972]: I1121 10:06:55.998496 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d70968-1cb9-42c5-9e6a-42be7447c211-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.004680 4972 generic.go:334] "Generic (PLEG): container finished" podID="25d70968-1cb9-42c5-9e6a-42be7447c211" containerID="7ea2abf0f7d755ab2861190cc97c1b1b24d73ce7eb2a6178eb1dde03a9649db4" exitCode=0 Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.004861 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"25d70968-1cb9-42c5-9e6a-42be7447c211","Type":"ContainerDied","Data":"7ea2abf0f7d755ab2861190cc97c1b1b24d73ce7eb2a6178eb1dde03a9649db4"} Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.004904 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"25d70968-1cb9-42c5-9e6a-42be7447c211","Type":"ContainerDied","Data":"34c24e4e4f5bd88254afb328fa533ab8b1737264daaec9aa45847e2a1c782d9b"} Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.004922 4972 scope.go:117] "RemoveContainer" containerID="7ea2abf0f7d755ab2861190cc97c1b1b24d73ce7eb2a6178eb1dde03a9649db4" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.005043 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.038470 4972 scope.go:117] "RemoveContainer" containerID="39589530ed982f493eea7a56eb4fbabd714aa196e697f256a3c6c9384da05dbd" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.038689 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.046767 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.060458 4972 scope.go:117] "RemoveContainer" containerID="7ea2abf0f7d755ab2861190cc97c1b1b24d73ce7eb2a6178eb1dde03a9649db4" Nov 21 10:06:56 crc kubenswrapper[4972]: E1121 10:06:56.061017 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ea2abf0f7d755ab2861190cc97c1b1b24d73ce7eb2a6178eb1dde03a9649db4\": container with ID starting with 7ea2abf0f7d755ab2861190cc97c1b1b24d73ce7eb2a6178eb1dde03a9649db4 not found: ID does not exist" containerID="7ea2abf0f7d755ab2861190cc97c1b1b24d73ce7eb2a6178eb1dde03a9649db4" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.061115 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ea2abf0f7d755ab2861190cc97c1b1b24d73ce7eb2a6178eb1dde03a9649db4"} err="failed to get container status \"7ea2abf0f7d755ab2861190cc97c1b1b24d73ce7eb2a6178eb1dde03a9649db4\": rpc error: code = NotFound desc = could not find container \"7ea2abf0f7d755ab2861190cc97c1b1b24d73ce7eb2a6178eb1dde03a9649db4\": container with ID starting with 7ea2abf0f7d755ab2861190cc97c1b1b24d73ce7eb2a6178eb1dde03a9649db4 not found: ID does not exist" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.061235 4972 scope.go:117] "RemoveContainer" containerID="39589530ed982f493eea7a56eb4fbabd714aa196e697f256a3c6c9384da05dbd" Nov 21 10:06:56 crc kubenswrapper[4972]: E1121 10:06:56.061915 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39589530ed982f493eea7a56eb4fbabd714aa196e697f256a3c6c9384da05dbd\": container with ID starting with 39589530ed982f493eea7a56eb4fbabd714aa196e697f256a3c6c9384da05dbd not found: ID does not exist" containerID="39589530ed982f493eea7a56eb4fbabd714aa196e697f256a3c6c9384da05dbd" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.061946 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39589530ed982f493eea7a56eb4fbabd714aa196e697f256a3c6c9384da05dbd"} err="failed to get container status \"39589530ed982f493eea7a56eb4fbabd714aa196e697f256a3c6c9384da05dbd\": rpc error: code = NotFound desc = could not find container \"39589530ed982f493eea7a56eb4fbabd714aa196e697f256a3c6c9384da05dbd\": container with ID starting with 39589530ed982f493eea7a56eb4fbabd714aa196e697f256a3c6c9384da05dbd not found: ID does not exist" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.064372 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:06:56 crc kubenswrapper[4972]: E1121 10:06:56.064746 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79051f6b-9693-43af-af16-8298e8205c25" containerName="nova-manage" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.064763 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="79051f6b-9693-43af-af16-8298e8205c25" containerName="nova-manage" Nov 
21 10:06:56 crc kubenswrapper[4972]: E1121 10:06:56.064798 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d70968-1cb9-42c5-9e6a-42be7447c211" containerName="nova-metadata-metadata" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.064806 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d70968-1cb9-42c5-9e6a-42be7447c211" containerName="nova-metadata-metadata" Nov 21 10:06:56 crc kubenswrapper[4972]: E1121 10:06:56.064911 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d70968-1cb9-42c5-9e6a-42be7447c211" containerName="nova-metadata-log" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.064920 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d70968-1cb9-42c5-9e6a-42be7447c211" containerName="nova-metadata-log" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.065117 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d70968-1cb9-42c5-9e6a-42be7447c211" containerName="nova-metadata-log" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.065149 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d70968-1cb9-42c5-9e6a-42be7447c211" containerName="nova-metadata-metadata" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.065162 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="79051f6b-9693-43af-af16-8298e8205c25" containerName="nova-manage" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.066285 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.072350 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.072380 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.081532 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.099920 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\") " pod="openstack/nova-metadata-0" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.100210 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8drk\" (UniqueName: \"kubernetes.io/projected/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-kube-api-access-c8drk\") pod \"nova-metadata-0\" (UID: \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\") " pod="openstack/nova-metadata-0" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.100308 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\") " pod="openstack/nova-metadata-0" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.100411 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-logs\") pod \"nova-metadata-0\" 
(UID: \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\") " pod="openstack/nova-metadata-0" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.100519 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-config-data\") pod \"nova-metadata-0\" (UID: \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\") " pod="openstack/nova-metadata-0" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.178641 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.178698 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.202146 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\") " pod="openstack/nova-metadata-0" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.202259 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8drk\" (UniqueName: \"kubernetes.io/projected/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-kube-api-access-c8drk\") pod \"nova-metadata-0\" (UID: \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\") " pod="openstack/nova-metadata-0" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.202285 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\") " pod="openstack/nova-metadata-0" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.202316 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-logs\") pod \"nova-metadata-0\" (UID: \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\") " pod="openstack/nova-metadata-0" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.202348 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-config-data\") pod \"nova-metadata-0\" (UID: \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\") " pod="openstack/nova-metadata-0" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.203431 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-logs\") pod \"nova-metadata-0\" (UID: \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\") " pod="openstack/nova-metadata-0" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.206557 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-config-data\") pod \"nova-metadata-0\" (UID: \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\") " pod="openstack/nova-metadata-0" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.207196 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\") " pod="openstack/nova-metadata-0" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.207299 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\") " pod="openstack/nova-metadata-0" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.222235 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8drk\" (UniqueName: \"kubernetes.io/projected/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-kube-api-access-c8drk\") pod \"nova-metadata-0\" (UID: \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\") " pod="openstack/nova-metadata-0" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.386256 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.720095 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.813765 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kn72\" (UniqueName: \"kubernetes.io/projected/0c8f56de-a95c-4144-a6f4-472e1f4dd1fd-kube-api-access-5kn72\") pod \"0c8f56de-a95c-4144-a6f4-472e1f4dd1fd\" (UID: \"0c8f56de-a95c-4144-a6f4-472e1f4dd1fd\") " Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.813979 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c8f56de-a95c-4144-a6f4-472e1f4dd1fd-combined-ca-bundle\") pod \"0c8f56de-a95c-4144-a6f4-472e1f4dd1fd\" (UID: \"0c8f56de-a95c-4144-a6f4-472e1f4dd1fd\") " Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.814009 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c8f56de-a95c-4144-a6f4-472e1f4dd1fd-config-data\") pod \"0c8f56de-a95c-4144-a6f4-472e1f4dd1fd\" (UID: \"0c8f56de-a95c-4144-a6f4-472e1f4dd1fd\") " Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.819275 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c8f56de-a95c-4144-a6f4-472e1f4dd1fd-kube-api-access-5kn72" (OuterVolumeSpecName: "kube-api-access-5kn72") pod "0c8f56de-a95c-4144-a6f4-472e1f4dd1fd" (UID: "0c8f56de-a95c-4144-a6f4-472e1f4dd1fd"). InnerVolumeSpecName "kube-api-access-5kn72". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.862037 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c8f56de-a95c-4144-a6f4-472e1f4dd1fd-config-data" (OuterVolumeSpecName: "config-data") pod "0c8f56de-a95c-4144-a6f4-472e1f4dd1fd" (UID: "0c8f56de-a95c-4144-a6f4-472e1f4dd1fd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.869790 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c8f56de-a95c-4144-a6f4-472e1f4dd1fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c8f56de-a95c-4144-a6f4-472e1f4dd1fd" (UID: "0c8f56de-a95c-4144-a6f4-472e1f4dd1fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.900253 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.916772 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c8f56de-a95c-4144-a6f4-472e1f4dd1fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.916809 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c8f56de-a95c-4144-a6f4-472e1f4dd1fd-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:56 crc kubenswrapper[4972]: I1121 10:06:56.916822 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kn72\" (UniqueName: \"kubernetes.io/projected/0c8f56de-a95c-4144-a6f4-472e1f4dd1fd-kube-api-access-5kn72\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.014142 4972 generic.go:334] "Generic (PLEG): container finished" podID="0c8f56de-a95c-4144-a6f4-472e1f4dd1fd" containerID="9438aa8a40e11fb601f857c2019776fb2041c2601bdf0dc59dedd69ca8f5b1ce" exitCode=0 Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.014184 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.014214 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c8f56de-a95c-4144-a6f4-472e1f4dd1fd","Type":"ContainerDied","Data":"9438aa8a40e11fb601f857c2019776fb2041c2601bdf0dc59dedd69ca8f5b1ce"} Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.014255 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c8f56de-a95c-4144-a6f4-472e1f4dd1fd","Type":"ContainerDied","Data":"120e1a4513fbe24f45a4eeda9d7faf06501691be3c9bb3fd40b171aac9cb912f"} Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.014280 4972 scope.go:117] "RemoveContainer" containerID="9438aa8a40e11fb601f857c2019776fb2041c2601bdf0dc59dedd69ca8f5b1ce" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.016227 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57f61d22-4b79-4f80-b7dc-0f5bea4b506d","Type":"ContainerStarted","Data":"7370cbe19069066758df478b0ed4ac37f029e7f9d8cba9d0594d6e1b3147fd7d"} Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.035920 4972 scope.go:117] "RemoveContainer" containerID="9438aa8a40e11fb601f857c2019776fb2041c2601bdf0dc59dedd69ca8f5b1ce" Nov 21 10:06:57 crc kubenswrapper[4972]: E1121 10:06:57.036513 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9438aa8a40e11fb601f857c2019776fb2041c2601bdf0dc59dedd69ca8f5b1ce\": container with ID starting with 9438aa8a40e11fb601f857c2019776fb2041c2601bdf0dc59dedd69ca8f5b1ce not found: ID does not exist" containerID="9438aa8a40e11fb601f857c2019776fb2041c2601bdf0dc59dedd69ca8f5b1ce" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.036561 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9438aa8a40e11fb601f857c2019776fb2041c2601bdf0dc59dedd69ca8f5b1ce"} err="failed to get container status \"9438aa8a40e11fb601f857c2019776fb2041c2601bdf0dc59dedd69ca8f5b1ce\": rpc error: code = NotFound desc = could not find container \"9438aa8a40e11fb601f857c2019776fb2041c2601bdf0dc59dedd69ca8f5b1ce\": container with ID starting with 9438aa8a40e11fb601f857c2019776fb2041c2601bdf0dc59dedd69ca8f5b1ce not found: ID does not exist" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.066272 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.091570 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.109387 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 10:06:57 crc kubenswrapper[4972]: E1121 10:06:57.110213 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8f56de-a95c-4144-a6f4-472e1f4dd1fd" containerName="nova-scheduler-scheduler" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.110229 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8f56de-a95c-4144-a6f4-472e1f4dd1fd" containerName="nova-scheduler-scheduler" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.111040 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8f56de-a95c-4144-a6f4-472e1f4dd1fd" containerName="nova-scheduler-scheduler" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.111988 4972 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.116206 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.128195 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.222908 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3ae47e-fcf5-4397-a2a4-8e847e542d75-config-data\") pod \"nova-scheduler-0\" (UID: \"3c3ae47e-fcf5-4397-a2a4-8e847e542d75\") " pod="openstack/nova-scheduler-0" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.223002 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3ae47e-fcf5-4397-a2a4-8e847e542d75-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3c3ae47e-fcf5-4397-a2a4-8e847e542d75\") " pod="openstack/nova-scheduler-0" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.223248 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncjs4\" (UniqueName: \"kubernetes.io/projected/3c3ae47e-fcf5-4397-a2a4-8e847e542d75-kube-api-access-ncjs4\") pod \"nova-scheduler-0\" (UID: \"3c3ae47e-fcf5-4397-a2a4-8e847e542d75\") " pod="openstack/nova-scheduler-0" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.324800 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncjs4\" (UniqueName: \"kubernetes.io/projected/3c3ae47e-fcf5-4397-a2a4-8e847e542d75-kube-api-access-ncjs4\") pod \"nova-scheduler-0\" (UID: \"3c3ae47e-fcf5-4397-a2a4-8e847e542d75\") " pod="openstack/nova-scheduler-0" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.324920 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3ae47e-fcf5-4397-a2a4-8e847e542d75-config-data\") pod \"nova-scheduler-0\" (UID: \"3c3ae47e-fcf5-4397-a2a4-8e847e542d75\") " pod="openstack/nova-scheduler-0" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.324956 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3ae47e-fcf5-4397-a2a4-8e847e542d75-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3c3ae47e-fcf5-4397-a2a4-8e847e542d75\") " pod="openstack/nova-scheduler-0" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.329542 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3ae47e-fcf5-4397-a2a4-8e847e542d75-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3c3ae47e-fcf5-4397-a2a4-8e847e542d75\") " pod="openstack/nova-scheduler-0" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.330118 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3ae47e-fcf5-4397-a2a4-8e847e542d75-config-data\") pod \"nova-scheduler-0\" (UID: \"3c3ae47e-fcf5-4397-a2a4-8e847e542d75\") " pod="openstack/nova-scheduler-0" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.341117 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncjs4\" (UniqueName: 
\"kubernetes.io/projected/3c3ae47e-fcf5-4397-a2a4-8e847e542d75-kube-api-access-ncjs4\") pod \"nova-scheduler-0\" (UID: \"3c3ae47e-fcf5-4397-a2a4-8e847e542d75\") " pod="openstack/nova-scheduler-0" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.432995 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.770981 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c8f56de-a95c-4144-a6f4-472e1f4dd1fd" path="/var/lib/kubelet/pods/0c8f56de-a95c-4144-a6f4-472e1f4dd1fd/volumes" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.773429 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25d70968-1cb9-42c5-9e6a-42be7447c211" path="/var/lib/kubelet/pods/25d70968-1cb9-42c5-9e6a-42be7447c211/volumes" Nov 21 10:06:57 crc kubenswrapper[4972]: I1121 10:06:57.903544 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 10:06:57 crc kubenswrapper[4972]: W1121 10:06:57.914323 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c3ae47e_fcf5_4397_a2a4_8e847e542d75.slice/crio-dddc8e3ecb0b3a13568d7e208c2d4946e4e4b11413f9eb2ab601daae140e4758 WatchSource:0}: Error finding container dddc8e3ecb0b3a13568d7e208c2d4946e4e4b11413f9eb2ab601daae140e4758: Status 404 returned error can't find the container with id dddc8e3ecb0b3a13568d7e208c2d4946e4e4b11413f9eb2ab601daae140e4758 Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.026081 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.026928 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3c3ae47e-fcf5-4397-a2a4-8e847e542d75","Type":"ContainerStarted","Data":"dddc8e3ecb0b3a13568d7e208c2d4946e4e4b11413f9eb2ab601daae140e4758"} Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.029326 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.029312 4972 generic.go:334] "Generic (PLEG): container finished" podID="c6383716-c6ac-45b6-908d-42f4600d44ab" containerID="24f932eea6e6b156252e61c68005992e36ff677b687106321980e2e20a70e3d0" exitCode=0 Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.029342 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6383716-c6ac-45b6-908d-42f4600d44ab","Type":"ContainerDied","Data":"24f932eea6e6b156252e61c68005992e36ff677b687106321980e2e20a70e3d0"} Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.029584 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6383716-c6ac-45b6-908d-42f4600d44ab","Type":"ContainerDied","Data":"a4c39df558e9ff933d4cc25216b5c5087409a9d66a32538cb91f2aaee836fa4d"} Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.029618 4972 scope.go:117] "RemoveContainer" containerID="24f932eea6e6b156252e61c68005992e36ff677b687106321980e2e20a70e3d0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.033788 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57f61d22-4b79-4f80-b7dc-0f5bea4b506d","Type":"ContainerStarted","Data":"9b2a5c600705559dcf3e1539bb652e7d2fca4320b9c98a74794471b4827bdfb0"} Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.033869 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57f61d22-4b79-4f80-b7dc-0f5bea4b506d","Type":"ContainerStarted","Data":"278382caff383ae2485ecd6e804ee41e63dbe81738ffa66e0a8508b7e3a9f20e"} Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.070773 4972 scope.go:117] "RemoveContainer" containerID="e87dd555f1f0277b6c3a3843748fc3c4a761d7039ec7df8909d201c4c0753bce" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.094073 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.0940510469999998 podStartE2EDuration="2.094051047s" podCreationTimestamp="2025-11-21 10:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:06:58.079672363 +0000 UTC m=+1563.188814881" watchObservedRunningTime="2025-11-21 10:06:58.094051047 +0000 UTC m=+1563.203193555" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.105955 4972 scope.go:117] "RemoveContainer" containerID="24f932eea6e6b156252e61c68005992e36ff677b687106321980e2e20a70e3d0" Nov 21 10:06:58 crc kubenswrapper[4972]: E1121 10:06:58.106344 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24f932eea6e6b156252e61c68005992e36ff677b687106321980e2e20a70e3d0\": container with ID starting with 24f932eea6e6b156252e61c68005992e36ff677b687106321980e2e20a70e3d0 not found: ID does not exist" containerID="24f932eea6e6b156252e61c68005992e36ff677b687106321980e2e20a70e3d0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.106369 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24f932eea6e6b156252e61c68005992e36ff677b687106321980e2e20a70e3d0"} err="failed to get container status \"24f932eea6e6b156252e61c68005992e36ff677b687106321980e2e20a70e3d0\": rpc error: code = NotFound desc = could not find container \"24f932eea6e6b156252e61c68005992e36ff677b687106321980e2e20a70e3d0\": container with ID 
starting with 24f932eea6e6b156252e61c68005992e36ff677b687106321980e2e20a70e3d0 not found: ID does not exist" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.106388 4972 scope.go:117] "RemoveContainer" containerID="e87dd555f1f0277b6c3a3843748fc3c4a761d7039ec7df8909d201c4c0753bce" Nov 21 10:06:58 crc kubenswrapper[4972]: E1121 10:06:58.106684 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e87dd555f1f0277b6c3a3843748fc3c4a761d7039ec7df8909d201c4c0753bce\": container with ID starting with e87dd555f1f0277b6c3a3843748fc3c4a761d7039ec7df8909d201c4c0753bce not found: ID does not exist" containerID="e87dd555f1f0277b6c3a3843748fc3c4a761d7039ec7df8909d201c4c0753bce" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.106766 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e87dd555f1f0277b6c3a3843748fc3c4a761d7039ec7df8909d201c4c0753bce"} err="failed to get container status \"e87dd555f1f0277b6c3a3843748fc3c4a761d7039ec7df8909d201c4c0753bce\": rpc error: code = NotFound desc = could not find container \"e87dd555f1f0277b6c3a3843748fc3c4a761d7039ec7df8909d201c4c0753bce\": container with ID starting with e87dd555f1f0277b6c3a3843748fc3c4a761d7039ec7df8909d201c4c0753bce not found: ID does not exist" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.139490 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7c2h\" (UniqueName: \"kubernetes.io/projected/c6383716-c6ac-45b6-908d-42f4600d44ab-kube-api-access-z7c2h\") pod \"c6383716-c6ac-45b6-908d-42f4600d44ab\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.139619 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-config-data\") pod \"c6383716-c6ac-45b6-908d-42f4600d44ab\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.139691 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-public-tls-certs\") pod \"c6383716-c6ac-45b6-908d-42f4600d44ab\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.139722 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-internal-tls-certs\") pod \"c6383716-c6ac-45b6-908d-42f4600d44ab\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.139820 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6383716-c6ac-45b6-908d-42f4600d44ab-logs\") pod \"c6383716-c6ac-45b6-908d-42f4600d44ab\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.139965 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-combined-ca-bundle\") pod \"c6383716-c6ac-45b6-908d-42f4600d44ab\" (UID: \"c6383716-c6ac-45b6-908d-42f4600d44ab\") " Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.140279 4972 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6383716-c6ac-45b6-908d-42f4600d44ab-logs" (OuterVolumeSpecName: "logs") pod "c6383716-c6ac-45b6-908d-42f4600d44ab" (UID: "c6383716-c6ac-45b6-908d-42f4600d44ab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.140514 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6383716-c6ac-45b6-908d-42f4600d44ab-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.145219 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6383716-c6ac-45b6-908d-42f4600d44ab-kube-api-access-z7c2h" (OuterVolumeSpecName: "kube-api-access-z7c2h") pod "c6383716-c6ac-45b6-908d-42f4600d44ab" (UID: "c6383716-c6ac-45b6-908d-42f4600d44ab"). InnerVolumeSpecName "kube-api-access-z7c2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.166228 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-config-data" (OuterVolumeSpecName: "config-data") pod "c6383716-c6ac-45b6-908d-42f4600d44ab" (UID: "c6383716-c6ac-45b6-908d-42f4600d44ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.166999 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6383716-c6ac-45b6-908d-42f4600d44ab" (UID: "c6383716-c6ac-45b6-908d-42f4600d44ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.195978 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c6383716-c6ac-45b6-908d-42f4600d44ab" (UID: "c6383716-c6ac-45b6-908d-42f4600d44ab"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.203112 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c6383716-c6ac-45b6-908d-42f4600d44ab" (UID: "c6383716-c6ac-45b6-908d-42f4600d44ab"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.242337 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.242507 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7c2h\" (UniqueName: \"kubernetes.io/projected/c6383716-c6ac-45b6-908d-42f4600d44ab-kube-api-access-z7c2h\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.242585 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.242685 4972 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.242764 4972 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6383716-c6ac-45b6-908d-42f4600d44ab-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.367813 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.377547 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.399881 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 21 10:06:58 crc kubenswrapper[4972]: E1121 10:06:58.400331 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6383716-c6ac-45b6-908d-42f4600d44ab" containerName="nova-api-api" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.400360 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6383716-c6ac-45b6-908d-42f4600d44ab" containerName="nova-api-api" Nov 21 10:06:58 crc kubenswrapper[4972]: E1121 10:06:58.400389 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6383716-c6ac-45b6-908d-42f4600d44ab" containerName="nova-api-log" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.400396 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6383716-c6ac-45b6-908d-42f4600d44ab" containerName="nova-api-log" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.400545 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6383716-c6ac-45b6-908d-42f4600d44ab" containerName="nova-api-api" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.400569 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6383716-c6ac-45b6-908d-42f4600d44ab" containerName="nova-api-log" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.401522 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.404825 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.405191 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.407352 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.418274 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.445734 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-public-tls-certs\") pod \"nova-api-0\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " pod="openstack/nova-api-0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.445824 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-internal-tls-certs\") pod \"nova-api-0\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " pod="openstack/nova-api-0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.445915 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcc4w\" (UniqueName: \"kubernetes.io/projected/302b9e1c-affd-4f2f-bacd-98f40dedeb91-kube-api-access-jcc4w\") pod \"nova-api-0\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " pod="openstack/nova-api-0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.446050 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " pod="openstack/nova-api-0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.446116 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-config-data\") pod \"nova-api-0\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " pod="openstack/nova-api-0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.446155 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/302b9e1c-affd-4f2f-bacd-98f40dedeb91-logs\") pod \"nova-api-0\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " pod="openstack/nova-api-0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.547575 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-public-tls-certs\") pod \"nova-api-0\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " pod="openstack/nova-api-0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.547775 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-internal-tls-certs\") pod 
\"nova-api-0\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " pod="openstack/nova-api-0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.547891 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcc4w\" (UniqueName: \"kubernetes.io/projected/302b9e1c-affd-4f2f-bacd-98f40dedeb91-kube-api-access-jcc4w\") pod \"nova-api-0\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " pod="openstack/nova-api-0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.548004 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " pod="openstack/nova-api-0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.548115 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-config-data\") pod \"nova-api-0\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " pod="openstack/nova-api-0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.548183 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/302b9e1c-affd-4f2f-bacd-98f40dedeb91-logs\") pod \"nova-api-0\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " pod="openstack/nova-api-0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.548620 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/302b9e1c-affd-4f2f-bacd-98f40dedeb91-logs\") pod \"nova-api-0\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " pod="openstack/nova-api-0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.551118 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-public-tls-certs\") pod \"nova-api-0\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " pod="openstack/nova-api-0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.551715 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-config-data\") pod \"nova-api-0\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " pod="openstack/nova-api-0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.555386 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-internal-tls-certs\") pod \"nova-api-0\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " pod="openstack/nova-api-0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.556120 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " pod="openstack/nova-api-0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.568673 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcc4w\" (UniqueName: \"kubernetes.io/projected/302b9e1c-affd-4f2f-bacd-98f40dedeb91-kube-api-access-jcc4w\") pod \"nova-api-0\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " 
pod="openstack/nova-api-0" Nov 21 10:06:58 crc kubenswrapper[4972]: I1121 10:06:58.726734 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 10:06:59 crc kubenswrapper[4972]: I1121 10:06:59.050014 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3c3ae47e-fcf5-4397-a2a4-8e847e542d75","Type":"ContainerStarted","Data":"528382eac18bd0308541931e46106ebb14493b19c4b625ee808c7a117ef52180"} Nov 21 10:06:59 crc kubenswrapper[4972]: I1121 10:06:59.065811 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.065796293 podStartE2EDuration="2.065796293s" podCreationTimestamp="2025-11-21 10:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:06:59.065143815 +0000 UTC m=+1564.174286313" watchObservedRunningTime="2025-11-21 10:06:59.065796293 +0000 UTC m=+1564.174938791" Nov 21 10:06:59 crc kubenswrapper[4972]: W1121 10:06:59.316162 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod302b9e1c_affd_4f2f_bacd_98f40dedeb91.slice/crio-22e902b9466d1ff08e0258c13adb35c65ff177471150f84b862959616d26741a WatchSource:0}: Error finding container 22e902b9466d1ff08e0258c13adb35c65ff177471150f84b862959616d26741a: Status 404 returned error can't find the container with id 22e902b9466d1ff08e0258c13adb35c65ff177471150f84b862959616d26741a Nov 21 10:06:59 crc kubenswrapper[4972]: I1121 10:06:59.316644 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 10:06:59 crc kubenswrapper[4972]: I1121 10:06:59.778342 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6383716-c6ac-45b6-908d-42f4600d44ab" path="/var/lib/kubelet/pods/c6383716-c6ac-45b6-908d-42f4600d44ab/volumes" Nov 21 10:07:00 crc kubenswrapper[4972]: I1121 10:07:00.069547 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"302b9e1c-affd-4f2f-bacd-98f40dedeb91","Type":"ContainerStarted","Data":"a50f5adc14def76f321fa0ba2955141d2c00ea811995acd79453b97be54e414e"} Nov 21 10:07:00 crc kubenswrapper[4972]: I1121 10:07:00.069630 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"302b9e1c-affd-4f2f-bacd-98f40dedeb91","Type":"ContainerStarted","Data":"9663e5eeed349feea42f465ac185ce4a281832b49f5ad7e6676845ad1940d586"} Nov 21 10:07:00 crc kubenswrapper[4972]: I1121 10:07:00.069647 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"302b9e1c-affd-4f2f-bacd-98f40dedeb91","Type":"ContainerStarted","Data":"22e902b9466d1ff08e0258c13adb35c65ff177471150f84b862959616d26741a"} Nov 21 10:07:00 crc kubenswrapper[4972]: I1121 10:07:00.128197 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.128169258 podStartE2EDuration="2.128169258s" podCreationTimestamp="2025-11-21 10:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:07:00.117052141 +0000 UTC m=+1565.226194699" watchObservedRunningTime="2025-11-21 10:07:00.128169258 +0000 UTC m=+1565.237311756" Nov 21 10:07:00 crc kubenswrapper[4972]: I1121 10:07:00.884368 4972 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pxdcp" Nov 21 10:07:00 crc kubenswrapper[4972]: I1121 10:07:00.884948 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pxdcp" Nov 21 10:07:00 crc kubenswrapper[4972]: I1121 10:07:00.974237 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pxdcp" Nov 21 10:07:01 crc kubenswrapper[4972]: I1121 10:07:01.146724 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pxdcp" Nov 21 10:07:01 crc kubenswrapper[4972]: I1121 10:07:01.215883 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxdcp"] Nov 21 10:07:01 crc kubenswrapper[4972]: I1121 10:07:01.387116 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 10:07:01 crc kubenswrapper[4972]: I1121 10:07:01.387230 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 10:07:02 crc kubenswrapper[4972]: I1121 10:07:02.433964 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 21 10:07:03 crc kubenswrapper[4972]: I1121 10:07:03.103693 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pxdcp" podUID="87f178e0-a635-45ee-89cd-64951fb05e8d" containerName="registry-server" containerID="cri-o://c3108605566c0a96f437c6f5d08e1440a50dbf0186d356a9ac92a9d52faca713" gracePeriod=2 Nov 21 10:07:03 crc kubenswrapper[4972]: I1121 10:07:03.693220 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxdcp" Nov 21 10:07:03 crc kubenswrapper[4972]: I1121 10:07:03.773272 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f178e0-a635-45ee-89cd-64951fb05e8d-catalog-content\") pod \"87f178e0-a635-45ee-89cd-64951fb05e8d\" (UID: \"87f178e0-a635-45ee-89cd-64951fb05e8d\") " Nov 21 10:07:03 crc kubenswrapper[4972]: I1121 10:07:03.773361 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6lsf\" (UniqueName: \"kubernetes.io/projected/87f178e0-a635-45ee-89cd-64951fb05e8d-kube-api-access-h6lsf\") pod \"87f178e0-a635-45ee-89cd-64951fb05e8d\" (UID: \"87f178e0-a635-45ee-89cd-64951fb05e8d\") " Nov 21 10:07:03 crc kubenswrapper[4972]: I1121 10:07:03.773388 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f178e0-a635-45ee-89cd-64951fb05e8d-utilities\") pod \"87f178e0-a635-45ee-89cd-64951fb05e8d\" (UID: \"87f178e0-a635-45ee-89cd-64951fb05e8d\") " Nov 21 10:07:03 crc kubenswrapper[4972]: I1121 10:07:03.774486 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f178e0-a635-45ee-89cd-64951fb05e8d-utilities" (OuterVolumeSpecName: "utilities") pod "87f178e0-a635-45ee-89cd-64951fb05e8d" (UID: "87f178e0-a635-45ee-89cd-64951fb05e8d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:03 crc kubenswrapper[4972]: I1121 10:07:03.782193 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f178e0-a635-45ee-89cd-64951fb05e8d-kube-api-access-h6lsf" (OuterVolumeSpecName: "kube-api-access-h6lsf") pod "87f178e0-a635-45ee-89cd-64951fb05e8d" (UID: "87f178e0-a635-45ee-89cd-64951fb05e8d"). InnerVolumeSpecName "kube-api-access-h6lsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:03 crc kubenswrapper[4972]: I1121 10:07:03.809471 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f178e0-a635-45ee-89cd-64951fb05e8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87f178e0-a635-45ee-89cd-64951fb05e8d" (UID: "87f178e0-a635-45ee-89cd-64951fb05e8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:03 crc kubenswrapper[4972]: I1121 10:07:03.876315 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f178e0-a635-45ee-89cd-64951fb05e8d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:03 crc kubenswrapper[4972]: I1121 10:07:03.876359 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6lsf\" (UniqueName: \"kubernetes.io/projected/87f178e0-a635-45ee-89cd-64951fb05e8d-kube-api-access-h6lsf\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:03 crc kubenswrapper[4972]: I1121 10:07:03.876378 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f178e0-a635-45ee-89cd-64951fb05e8d-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:04 crc kubenswrapper[4972]: I1121 10:07:04.121769 4972 generic.go:334] "Generic (PLEG): container finished" podID="87f178e0-a635-45ee-89cd-64951fb05e8d" containerID="c3108605566c0a96f437c6f5d08e1440a50dbf0186d356a9ac92a9d52faca713" exitCode=0 Nov 21 10:07:04 crc kubenswrapper[4972]: I1121 10:07:04.121979 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxdcp" Nov 21 10:07:04 crc kubenswrapper[4972]: I1121 10:07:04.122036 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxdcp" event={"ID":"87f178e0-a635-45ee-89cd-64951fb05e8d","Type":"ContainerDied","Data":"c3108605566c0a96f437c6f5d08e1440a50dbf0186d356a9ac92a9d52faca713"} Nov 21 10:07:04 crc kubenswrapper[4972]: I1121 10:07:04.122405 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxdcp" event={"ID":"87f178e0-a635-45ee-89cd-64951fb05e8d","Type":"ContainerDied","Data":"1e2a2053b4f2b552031434614e06e18ddfc9b2b69658909e383aebb615876a26"} Nov 21 10:07:04 crc kubenswrapper[4972]: I1121 10:07:04.122444 4972 scope.go:117] "RemoveContainer" containerID="c3108605566c0a96f437c6f5d08e1440a50dbf0186d356a9ac92a9d52faca713" Nov 21 10:07:04 crc kubenswrapper[4972]: I1121 10:07:04.174302 4972 scope.go:117] "RemoveContainer" containerID="e8fb6c2b2462245e6f0aed333c1a7188f9beba1c46182cc82726e2673312126b" Nov 21 10:07:04 crc kubenswrapper[4972]: I1121 10:07:04.178082 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxdcp"] Nov 21 10:07:04 crc kubenswrapper[4972]: I1121 10:07:04.190259 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxdcp"] Nov 21 10:07:04 crc kubenswrapper[4972]: I1121 10:07:04.210457 4972 scope.go:117] "RemoveContainer" containerID="a3460af0cd5a5186e1a043948cc21b2ba00aa0c14ff7ad192d5b52ea025c1af1" Nov 21 10:07:04 crc kubenswrapper[4972]: I1121 10:07:04.252093 4972 scope.go:117] "RemoveContainer" containerID="c3108605566c0a96f437c6f5d08e1440a50dbf0186d356a9ac92a9d52faca713" Nov 21 10:07:04 crc kubenswrapper[4972]: E1121 10:07:04.252540 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3108605566c0a96f437c6f5d08e1440a50dbf0186d356a9ac92a9d52faca713\": container with ID starting with c3108605566c0a96f437c6f5d08e1440a50dbf0186d356a9ac92a9d52faca713 not found: ID does not exist" containerID="c3108605566c0a96f437c6f5d08e1440a50dbf0186d356a9ac92a9d52faca713" Nov 21 10:07:04 crc kubenswrapper[4972]: I1121 10:07:04.252568 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3108605566c0a96f437c6f5d08e1440a50dbf0186d356a9ac92a9d52faca713"} err="failed to get container status \"c3108605566c0a96f437c6f5d08e1440a50dbf0186d356a9ac92a9d52faca713\": rpc error: code = NotFound desc = could not find container \"c3108605566c0a96f437c6f5d08e1440a50dbf0186d356a9ac92a9d52faca713\": container with ID starting with c3108605566c0a96f437c6f5d08e1440a50dbf0186d356a9ac92a9d52faca713 not found: ID does not exist" Nov 21 10:07:04 crc kubenswrapper[4972]: I1121 10:07:04.252588 4972 scope.go:117] "RemoveContainer" containerID="e8fb6c2b2462245e6f0aed333c1a7188f9beba1c46182cc82726e2673312126b" Nov 21 10:07:04 crc kubenswrapper[4972]: E1121 10:07:04.252880 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8fb6c2b2462245e6f0aed333c1a7188f9beba1c46182cc82726e2673312126b\": container with ID starting with e8fb6c2b2462245e6f0aed333c1a7188f9beba1c46182cc82726e2673312126b not found: ID does not exist" containerID="e8fb6c2b2462245e6f0aed333c1a7188f9beba1c46182cc82726e2673312126b" Nov 21 10:07:04 crc kubenswrapper[4972]: I1121 10:07:04.252901 4972 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8fb6c2b2462245e6f0aed333c1a7188f9beba1c46182cc82726e2673312126b"} err="failed to get container status \"e8fb6c2b2462245e6f0aed333c1a7188f9beba1c46182cc82726e2673312126b\": rpc error: code = NotFound desc = could not find container \"e8fb6c2b2462245e6f0aed333c1a7188f9beba1c46182cc82726e2673312126b\": container with ID starting with e8fb6c2b2462245e6f0aed333c1a7188f9beba1c46182cc82726e2673312126b not found: ID does not exist" Nov 21 10:07:04 crc kubenswrapper[4972]: I1121 10:07:04.252913 4972 scope.go:117] "RemoveContainer" containerID="a3460af0cd5a5186e1a043948cc21b2ba00aa0c14ff7ad192d5b52ea025c1af1" Nov 21 10:07:04 crc kubenswrapper[4972]: E1121 10:07:04.253291 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3460af0cd5a5186e1a043948cc21b2ba00aa0c14ff7ad192d5b52ea025c1af1\": container with ID starting with a3460af0cd5a5186e1a043948cc21b2ba00aa0c14ff7ad192d5b52ea025c1af1 not found: ID does not exist" containerID="a3460af0cd5a5186e1a043948cc21b2ba00aa0c14ff7ad192d5b52ea025c1af1" Nov 21 10:07:04 crc kubenswrapper[4972]: I1121 10:07:04.253340 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3460af0cd5a5186e1a043948cc21b2ba00aa0c14ff7ad192d5b52ea025c1af1"} err="failed to get container status \"a3460af0cd5a5186e1a043948cc21b2ba00aa0c14ff7ad192d5b52ea025c1af1\": rpc error: code = NotFound desc = could not find container \"a3460af0cd5a5186e1a043948cc21b2ba00aa0c14ff7ad192d5b52ea025c1af1\": container with ID starting with a3460af0cd5a5186e1a043948cc21b2ba00aa0c14ff7ad192d5b52ea025c1af1 not found: ID does not exist" Nov 21 10:07:05 crc kubenswrapper[4972]: I1121 10:07:05.774399 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f178e0-a635-45ee-89cd-64951fb05e8d" path="/var/lib/kubelet/pods/87f178e0-a635-45ee-89cd-64951fb05e8d/volumes" Nov 21 10:07:06 crc kubenswrapper[4972]: I1121 10:07:06.386881 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 10:07:06 crc kubenswrapper[4972]: I1121 10:07:06.386969 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 10:07:07 crc kubenswrapper[4972]: I1121 10:07:07.404282 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="57f61d22-4b79-4f80-b7dc-0f5bea4b506d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 10:07:07 crc kubenswrapper[4972]: I1121 10:07:07.407756 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="57f61d22-4b79-4f80-b7dc-0f5bea4b506d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 10:07:07 crc kubenswrapper[4972]: I1121 10:07:07.433941 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 21 10:07:07 crc kubenswrapper[4972]: I1121 10:07:07.495264 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 21 10:07:08 crc kubenswrapper[4972]: I1121 10:07:08.200676 4972 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 21 10:07:08 crc kubenswrapper[4972]: I1121 10:07:08.728715 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 10:07:08 crc kubenswrapper[4972]: I1121 10:07:08.728787 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 10:07:09 crc kubenswrapper[4972]: I1121 10:07:09.744254 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="302b9e1c-affd-4f2f-bacd-98f40dedeb91" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 10:07:09 crc kubenswrapper[4972]: I1121 10:07:09.744301 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="302b9e1c-affd-4f2f-bacd-98f40dedeb91" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 10:07:13 crc kubenswrapper[4972]: I1121 10:07:13.259315 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 21 10:07:16 crc kubenswrapper[4972]: I1121 10:07:16.398357 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 10:07:16 crc kubenswrapper[4972]: I1121 10:07:16.398577 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 10:07:16 crc kubenswrapper[4972]: I1121 10:07:16.416590 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 21 10:07:16 crc kubenswrapper[4972]: I1121 10:07:16.416941 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 21 10:07:18 crc kubenswrapper[4972]: I1121 10:07:18.740286 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 21 10:07:18 crc kubenswrapper[4972]: I1121 10:07:18.741633 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 21 10:07:18 crc kubenswrapper[4972]: I1121 10:07:18.745224 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 21 10:07:18 crc kubenswrapper[4972]: I1121 10:07:18.754438 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 21 10:07:19 crc kubenswrapper[4972]: I1121 10:07:19.294895 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 21 10:07:19 crc kubenswrapper[4972]: I1121 10:07:19.302709 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 21 10:07:26 crc kubenswrapper[4972]: I1121 10:07:26.179559 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:07:26 crc kubenswrapper[4972]: I1121 10:07:26.180535 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:07:26 crc kubenswrapper[4972]: I1121 10:07:26.180650 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 10:07:26 crc kubenswrapper[4972]: I1121 10:07:26.181963 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 10:07:26 crc kubenswrapper[4972]: I1121 10:07:26.182101 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" gracePeriod=600 Nov 21 10:07:26 crc kubenswrapper[4972]: E1121 10:07:26.321299 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:07:26 crc kubenswrapper[4972]: I1121 10:07:26.386596 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" exitCode=0 Nov 21 10:07:26 crc kubenswrapper[4972]: I1121 10:07:26.386715 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8"} Nov 21 10:07:26 crc kubenswrapper[4972]: I1121 10:07:26.388031 4972 scope.go:117] "RemoveContainer" containerID="7ec11dc5626524562fd7c3b24c6b4002aa3a346dd5009bf5fa88dabd42ba42bd" Nov 21 10:07:26 crc kubenswrapper[4972]: I1121 10:07:26.388224 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:07:26 crc kubenswrapper[4972]: E1121 10:07:26.388823 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:07:36 crc kubenswrapper[4972]: I1121 10:07:36.760788 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:07:36 crc kubenswrapper[4972]: E1121 10:07:36.761917 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.197301 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.202469 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="88c81504-7f14-498f-bd8d-4fa74aebf2d2" containerName="openstackclient" containerID="cri-o://30837ba80ae724788ccc279d47025486e335f00269e93508eaa7eabb78466914" gracePeriod=2 Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.238471 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.413345 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.414080 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="c9c438ca-0f93-434d-81ea-29ae82b217bf" containerName="openstack-network-exporter" containerID="cri-o://02017bfaf39dc941741a5f40c1bacc3f4996ecfcae24cb31b354109768689142" gracePeriod=300 Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.417686 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.452786 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.453135 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="44805331-e34b-4455-a744-4c8fe27a1b9e" containerName="openstack-network-exporter" containerID="cri-o://0eb74e778f9330e95160a9380c73ad009f10cca6eb82633cae811a9a159e0d84" gracePeriod=300 Nov 21 10:07:39 crc kubenswrapper[4972]: E1121 10:07:39.542741 4972 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 21 10:07:39 crc kubenswrapper[4972]: E1121 10:07:39.542804 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-config-data podName:2bc44abc-7710-432b-b503-fd54e3afeede nodeName:}" failed. No retries permitted until 2025-11-21 10:07:40.042787423 +0000 UTC m=+1605.151929921 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-config-data") pod "rabbitmq-cell1-server-0" (UID: "2bc44abc-7710-432b-b503-fd54e3afeede") : configmap "rabbitmq-cell1-config-data" not found Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.603241 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glanced431-account-delete-5xwls"] Nov 21 10:07:39 crc kubenswrapper[4972]: E1121 10:07:39.615382 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f178e0-a635-45ee-89cd-64951fb05e8d" containerName="registry-server" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.615411 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f178e0-a635-45ee-89cd-64951fb05e8d" containerName="registry-server" Nov 21 10:07:39 crc kubenswrapper[4972]: E1121 10:07:39.615434 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f178e0-a635-45ee-89cd-64951fb05e8d" containerName="extract-content" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.615442 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f178e0-a635-45ee-89cd-64951fb05e8d" containerName="extract-content" Nov 21 10:07:39 crc kubenswrapper[4972]: E1121 10:07:39.615464 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f178e0-a635-45ee-89cd-64951fb05e8d" containerName="extract-utilities" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.615477 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f178e0-a635-45ee-89cd-64951fb05e8d" containerName="extract-utilities" Nov 21 10:07:39 crc kubenswrapper[4972]: E1121 10:07:39.615500 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88c81504-7f14-498f-bd8d-4fa74aebf2d2" containerName="openstackclient" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.615506 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="88c81504-7f14-498f-bd8d-4fa74aebf2d2" containerName="openstackclient" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.615767 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f178e0-a635-45ee-89cd-64951fb05e8d" containerName="registry-server" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.615789 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="88c81504-7f14-498f-bd8d-4fa74aebf2d2" containerName="openstackclient" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.616432 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glanced431-account-delete-5xwls" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.651385 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1224d0f-d488-49e6-b6dc-12a188b43a43-operator-scripts\") pod \"glanced431-account-delete-5xwls\" (UID: \"f1224d0f-d488-49e6-b6dc-12a188b43a43\") " pod="openstack/glanced431-account-delete-5xwls" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.651443 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55f8h\" (UniqueName: \"kubernetes.io/projected/f1224d0f-d488-49e6-b6dc-12a188b43a43-kube-api-access-55f8h\") pod \"glanced431-account-delete-5xwls\" (UID: \"f1224d0f-d488-49e6-b6dc-12a188b43a43\") " pod="openstack/glanced431-account-delete-5xwls" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.657910 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glanced431-account-delete-5xwls"] Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.683882 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.716179 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="c9c438ca-0f93-434d-81ea-29ae82b217bf" containerName="ovsdbserver-nb" containerID="cri-o://b39e49481b4242d63f67036e50dc39fabe6cc04941ad1ad33655c4f1ec8f7121" gracePeriod=300 Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.728740 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="44805331-e34b-4455-a744-4c8fe27a1b9e" containerName="ovsdbserver-sb" containerID="cri-o://7a52d5a35cd478c028a14322544e0cedd59d5fc637ca22a8848442a143badc31" gracePeriod=300 Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.738851 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutrondbea-account-delete-9c96n"] Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.741414 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutrondbea-account-delete-9c96n" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.747796 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.748116 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="cf3edebd-74ab-4b7d-8706-2eda69d91aea" containerName="openstack-network-exporter" containerID="cri-o://8afa005bf75971cd8c3eab6a73627f83a30f054d36b834a57873a7d31d1a2e37" gracePeriod=30 Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.748208 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="cf3edebd-74ab-4b7d-8706-2eda69d91aea" containerName="ovn-northd" containerID="cri-o://2414b220c5f009ec8c602f60f3e9160067fa81228e1aef74c65b742822eda70e" gracePeriod=30 Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.761162 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5548s\" (UniqueName: \"kubernetes.io/projected/97ccfb34-fe6c-4529-812a-af30eb178e8b-kube-api-access-5548s\") pod \"neutrondbea-account-delete-9c96n\" (UID: \"97ccfb34-fe6c-4529-812a-af30eb178e8b\") " pod="openstack/neutrondbea-account-delete-9c96n" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.761397 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1224d0f-d488-49e6-b6dc-12a188b43a43-operator-scripts\") pod \"glanced431-account-delete-5xwls\" (UID: \"f1224d0f-d488-49e6-b6dc-12a188b43a43\") " pod="openstack/glanced431-account-delete-5xwls" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.761499 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97ccfb34-fe6c-4529-812a-af30eb178e8b-operator-scripts\") pod \"neutrondbea-account-delete-9c96n\" (UID: \"97ccfb34-fe6c-4529-812a-af30eb178e8b\") " pod="openstack/neutrondbea-account-delete-9c96n" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.761548 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55f8h\" (UniqueName: \"kubernetes.io/projected/f1224d0f-d488-49e6-b6dc-12a188b43a43-kube-api-access-55f8h\") pod \"glanced431-account-delete-5xwls\" (UID: \"f1224d0f-d488-49e6-b6dc-12a188b43a43\") " pod="openstack/glanced431-account-delete-5xwls" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.763612 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-g84kj"] Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.764432 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1224d0f-d488-49e6-b6dc-12a188b43a43-operator-scripts\") pod \"glanced431-account-delete-5xwls\" (UID: \"f1224d0f-d488-49e6-b6dc-12a188b43a43\") " pod="openstack/glanced431-account-delete-5xwls" Nov 21 10:07:39 crc kubenswrapper[4972]: E1121 10:07:39.764756 4972 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 21 10:07:39 crc kubenswrapper[4972]: E1121 10:07:39.764812 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-config-data 
podName:392b5094-f8ef-47b8-8dc5-9e1d2dbef612 nodeName:}" failed. No retries permitted until 2025-11-21 10:07:40.264788803 +0000 UTC m=+1605.373931291 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-config-data") pod "rabbitmq-server-0" (UID: "392b5094-f8ef-47b8-8dc5-9e1d2dbef612") : configmap "rabbitmq-config-data" not found Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.796616 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55f8h\" (UniqueName: \"kubernetes.io/projected/f1224d0f-d488-49e6-b6dc-12a188b43a43-kube-api-access-55f8h\") pod \"glanced431-account-delete-5xwls\" (UID: \"f1224d0f-d488-49e6-b6dc-12a188b43a43\") " pod="openstack/glanced431-account-delete-5xwls" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.815018 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-g84kj"] Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.815345 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutrondbea-account-delete-9c96n"] Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.837153 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement66d8-account-delete-947ct"] Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.838395 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement66d8-account-delete-947ct" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.862943 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2ghp\" (UniqueName: \"kubernetes.io/projected/ffb786ba-2a1a-4124-9ef7-116e12402f5c-kube-api-access-r2ghp\") pod \"placement66d8-account-delete-947ct\" (UID: \"ffb786ba-2a1a-4124-9ef7-116e12402f5c\") " pod="openstack/placement66d8-account-delete-947ct" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.863665 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5548s\" (UniqueName: \"kubernetes.io/projected/97ccfb34-fe6c-4529-812a-af30eb178e8b-kube-api-access-5548s\") pod \"neutrondbea-account-delete-9c96n\" (UID: \"97ccfb34-fe6c-4529-812a-af30eb178e8b\") " pod="openstack/neutrondbea-account-delete-9c96n" Nov 21 10:07:39 crc kubenswrapper[4972]: E1121 10:07:39.864459 4972 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Nov 21 10:07:39 crc kubenswrapper[4972]: E1121 10:07:39.864505 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-config-data podName:befdbf4d-7d20-40ca-9985-8309a0295dad nodeName:}" failed. No retries permitted until 2025-11-21 10:07:40.364488036 +0000 UTC m=+1605.473630534 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-config-data") pod "cinder-scheduler-0" (UID: "befdbf4d-7d20-40ca-9985-8309a0295dad") : secret "cinder-config-data" not found Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.864818 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffb786ba-2a1a-4124-9ef7-116e12402f5c-operator-scripts\") pod \"placement66d8-account-delete-947ct\" (UID: \"ffb786ba-2a1a-4124-9ef7-116e12402f5c\") " pod="openstack/placement66d8-account-delete-947ct" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.864954 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97ccfb34-fe6c-4529-812a-af30eb178e8b-operator-scripts\") pod \"neutrondbea-account-delete-9c96n\" (UID: \"97ccfb34-fe6c-4529-812a-af30eb178e8b\") " pod="openstack/neutrondbea-account-delete-9c96n" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.865917 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97ccfb34-fe6c-4529-812a-af30eb178e8b-operator-scripts\") pod \"neutrondbea-account-delete-9c96n\" (UID: \"97ccfb34-fe6c-4529-812a-af30eb178e8b\") " pod="openstack/neutrondbea-account-delete-9c96n" Nov 21 10:07:39 crc kubenswrapper[4972]: E1121 10:07:39.866144 4972 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Nov 21 10:07:39 crc kubenswrapper[4972]: E1121 10:07:39.866178 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-scripts podName:befdbf4d-7d20-40ca-9985-8309a0295dad nodeName:}" failed. No retries permitted until 2025-11-21 10:07:40.36616572 +0000 UTC m=+1605.475308218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-scripts") pod "cinder-scheduler-0" (UID: "befdbf4d-7d20-40ca-9985-8309a0295dad") : secret "cinder-scripts" not found Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.887907 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement66d8-account-delete-947ct"] Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.902594 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5548s\" (UniqueName: \"kubernetes.io/projected/97ccfb34-fe6c-4529-812a-af30eb178e8b-kube-api-access-5548s\") pod \"neutrondbea-account-delete-9c96n\" (UID: \"97ccfb34-fe6c-4529-812a-af30eb178e8b\") " pod="openstack/neutrondbea-account-delete-9c96n" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.951111 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder04d8-account-delete-kvgwd"] Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.952378 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder04d8-account-delete-kvgwd" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.966710 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glanced431-account-delete-5xwls" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.969547 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chfz4\" (UniqueName: \"kubernetes.io/projected/bf21c86e-7747-4dca-a870-352dfa214beb-kube-api-access-chfz4\") pod \"cinder04d8-account-delete-kvgwd\" (UID: \"bf21c86e-7747-4dca-a870-352dfa214beb\") " pod="openstack/cinder04d8-account-delete-kvgwd" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.970006 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2ghp\" (UniqueName: \"kubernetes.io/projected/ffb786ba-2a1a-4124-9ef7-116e12402f5c-kube-api-access-r2ghp\") pod \"placement66d8-account-delete-947ct\" (UID: \"ffb786ba-2a1a-4124-9ef7-116e12402f5c\") " pod="openstack/placement66d8-account-delete-947ct" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.970269 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffb786ba-2a1a-4124-9ef7-116e12402f5c-operator-scripts\") pod \"placement66d8-account-delete-947ct\" (UID: \"ffb786ba-2a1a-4124-9ef7-116e12402f5c\") " pod="openstack/placement66d8-account-delete-947ct" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.970424 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf21c86e-7747-4dca-a870-352dfa214beb-operator-scripts\") pod \"cinder04d8-account-delete-kvgwd\" (UID: \"bf21c86e-7747-4dca-a870-352dfa214beb\") " pod="openstack/cinder04d8-account-delete-kvgwd" Nov 21 10:07:39 crc kubenswrapper[4972]: I1121 10:07:39.971489 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffb786ba-2a1a-4124-9ef7-116e12402f5c-operator-scripts\") pod \"placement66d8-account-delete-947ct\" (UID: \"ffb786ba-2a1a-4124-9ef7-116e12402f5c\") " pod="openstack/placement66d8-account-delete-947ct" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.003272 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder04d8-account-delete-kvgwd"] Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.019404 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2ghp\" (UniqueName: \"kubernetes.io/projected/ffb786ba-2a1a-4124-9ef7-116e12402f5c-kube-api-access-r2ghp\") pod \"placement66d8-account-delete-947ct\" (UID: \"ffb786ba-2a1a-4124-9ef7-116e12402f5c\") " pod="openstack/placement66d8-account-delete-947ct" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.049891 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican958e-account-delete-tfwsx"] Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.051471 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican958e-account-delete-tfwsx" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.088652 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican958e-account-delete-tfwsx"] Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.105334 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutrondbea-account-delete-9c96n" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.105835 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf21c86e-7747-4dca-a870-352dfa214beb-operator-scripts\") pod \"cinder04d8-account-delete-kvgwd\" (UID: \"bf21c86e-7747-4dca-a870-352dfa214beb\") " pod="openstack/cinder04d8-account-delete-kvgwd" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.105971 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chfz4\" (UniqueName: \"kubernetes.io/projected/bf21c86e-7747-4dca-a870-352dfa214beb-kube-api-access-chfz4\") pod \"cinder04d8-account-delete-kvgwd\" (UID: \"bf21c86e-7747-4dca-a870-352dfa214beb\") " pod="openstack/cinder04d8-account-delete-kvgwd" Nov 21 10:07:40 crc kubenswrapper[4972]: E1121 10:07:40.106159 4972 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 21 10:07:40 crc kubenswrapper[4972]: E1121 10:07:40.106205 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-config-data podName:2bc44abc-7710-432b-b503-fd54e3afeede nodeName:}" failed. No retries permitted until 2025-11-21 10:07:41.106191982 +0000 UTC m=+1606.215334480 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-config-data") pod "rabbitmq-cell1-server-0" (UID: "2bc44abc-7710-432b-b503-fd54e3afeede") : configmap "rabbitmq-cell1-config-data" not found Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.106808 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf21c86e-7747-4dca-a870-352dfa214beb-operator-scripts\") pod \"cinder04d8-account-delete-kvgwd\" (UID: \"bf21c86e-7747-4dca-a870-352dfa214beb\") " pod="openstack/cinder04d8-account-delete-kvgwd" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.177345 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chfz4\" (UniqueName: \"kubernetes.io/projected/bf21c86e-7747-4dca-a870-352dfa214beb-kube-api-access-chfz4\") pod \"cinder04d8-account-delete-kvgwd\" (UID: \"bf21c86e-7747-4dca-a870-352dfa214beb\") " pod="openstack/cinder04d8-account-delete-kvgwd" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.219316 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement66d8-account-delete-947ct" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.266090 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8334e5f-f6cb-4c49-91d6-5e414ecc53f0-operator-scripts\") pod \"barbican958e-account-delete-tfwsx\" (UID: \"d8334e5f-f6cb-4c49-91d6-5e414ecc53f0\") " pod="openstack/barbican958e-account-delete-tfwsx" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.266268 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvgqd\" (UniqueName: \"kubernetes.io/projected/d8334e5f-f6cb-4c49-91d6-5e414ecc53f0-kube-api-access-kvgqd\") pod \"barbican958e-account-delete-tfwsx\" (UID: \"d8334e5f-f6cb-4c49-91d6-5e414ecc53f0\") " pod="openstack/barbican958e-account-delete-tfwsx" Nov 21 10:07:40 crc kubenswrapper[4972]: E1121 10:07:40.266533 4972 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 21 10:07:40 crc kubenswrapper[4972]: E1121 10:07:40.266593 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-config-data podName:392b5094-f8ef-47b8-8dc5-9e1d2dbef612 nodeName:}" failed. No retries permitted until 2025-11-21 10:07:41.266576966 +0000 UTC m=+1606.375719464 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-config-data") pod "rabbitmq-server-0" (UID: "392b5094-f8ef-47b8-8dc5-9e1d2dbef612") : configmap "rabbitmq-config-data" not found Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.315008 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder04d8-account-delete-kvgwd" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.407424 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvgqd\" (UniqueName: \"kubernetes.io/projected/d8334e5f-f6cb-4c49-91d6-5e414ecc53f0-kube-api-access-kvgqd\") pod \"barbican958e-account-delete-tfwsx\" (UID: \"d8334e5f-f6cb-4c49-91d6-5e414ecc53f0\") " pod="openstack/barbican958e-account-delete-tfwsx" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.407870 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8334e5f-f6cb-4c49-91d6-5e414ecc53f0-operator-scripts\") pod \"barbican958e-account-delete-tfwsx\" (UID: \"d8334e5f-f6cb-4c49-91d6-5e414ecc53f0\") " pod="openstack/barbican958e-account-delete-tfwsx" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.408812 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell05330-account-delete-cxgw8"] Nov 21 10:07:40 crc kubenswrapper[4972]: E1121 10:07:40.409003 4972 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Nov 21 10:07:40 crc kubenswrapper[4972]: E1121 10:07:40.409086 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-scripts podName:befdbf4d-7d20-40ca-9985-8309a0295dad nodeName:}" failed. No retries permitted until 2025-11-21 10:07:41.409066091 +0000 UTC m=+1606.518208589 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-scripts") pod "cinder-scheduler-0" (UID: "befdbf4d-7d20-40ca-9985-8309a0295dad") : secret "cinder-scripts" not found Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.410570 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell05330-account-delete-cxgw8" Nov 21 10:07:40 crc kubenswrapper[4972]: E1121 10:07:40.415807 4972 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Nov 21 10:07:40 crc kubenswrapper[4972]: E1121 10:07:40.415905 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-config-data podName:befdbf4d-7d20-40ca-9985-8309a0295dad nodeName:}" failed. No retries permitted until 2025-11-21 10:07:41.415873263 +0000 UTC m=+1606.525015761 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-config-data") pod "cinder-scheduler-0" (UID: "befdbf4d-7d20-40ca-9985-8309a0295dad") : secret "cinder-config-data" not found Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.419463 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8334e5f-f6cb-4c49-91d6-5e414ecc53f0-operator-scripts\") pod \"barbican958e-account-delete-tfwsx\" (UID: \"d8334e5f-f6cb-4c49-91d6-5e414ecc53f0\") " pod="openstack/barbican958e-account-delete-tfwsx" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.446533 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvgqd\" (UniqueName: \"kubernetes.io/projected/d8334e5f-f6cb-4c49-91d6-5e414ecc53f0-kube-api-access-kvgqd\") pod \"barbican958e-account-delete-tfwsx\" (UID: \"d8334e5f-f6cb-4c49-91d6-5e414ecc53f0\") " pod="openstack/barbican958e-account-delete-tfwsx" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.465101 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell05330-account-delete-cxgw8"] Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.500715 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novaapi501f-account-delete-4746z"] Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.502240 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapi501f-account-delete-4746z" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.513653 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e11aef3-0c96-44ed-8876-7e54115d181f-operator-scripts\") pod \"novacell05330-account-delete-cxgw8\" (UID: \"4e11aef3-0c96-44ed-8876-7e54115d181f\") " pod="openstack/novacell05330-account-delete-cxgw8" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.513760 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd442c75-9e94-4f54-81b6-68c19f4de9d8-operator-scripts\") pod \"novaapi501f-account-delete-4746z\" (UID: \"dd442c75-9e94-4f54-81b6-68c19f4de9d8\") " pod="openstack/novaapi501f-account-delete-4746z" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.513793 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl6fl\" (UniqueName: \"kubernetes.io/projected/dd442c75-9e94-4f54-81b6-68c19f4de9d8-kube-api-access-gl6fl\") pod \"novaapi501f-account-delete-4746z\" (UID: \"dd442c75-9e94-4f54-81b6-68c19f4de9d8\") " pod="openstack/novaapi501f-account-delete-4746z" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.513854 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpzrq\" (UniqueName: \"kubernetes.io/projected/4e11aef3-0c96-44ed-8876-7e54115d181f-kube-api-access-wpzrq\") pod \"novacell05330-account-delete-cxgw8\" (UID: \"4e11aef3-0c96-44ed-8876-7e54115d181f\") " pod="openstack/novacell05330-account-delete-cxgw8" Nov 21 10:07:40 crc kubenswrapper[4972]: E1121 10:07:40.558930 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7a52d5a35cd478c028a14322544e0cedd59d5fc637ca22a8848442a143badc31 is running failed: container process not found" containerID="7a52d5a35cd478c028a14322544e0cedd59d5fc637ca22a8848442a143badc31" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 21 10:07:40 crc kubenswrapper[4972]: E1121 10:07:40.570197 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7a52d5a35cd478c028a14322544e0cedd59d5fc637ca22a8848442a143badc31 is running failed: container process not found" containerID="7a52d5a35cd478c028a14322544e0cedd59d5fc637ca22a8848442a143badc31" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.570342 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi501f-account-delete-4746z"] Nov 21 10:07:40 crc kubenswrapper[4972]: E1121 10:07:40.572873 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7a52d5a35cd478c028a14322544e0cedd59d5fc637ca22a8848442a143badc31 is running failed: container process not found" containerID="7a52d5a35cd478c028a14322544e0cedd59d5fc637ca22a8848442a143badc31" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 21 10:07:40 crc kubenswrapper[4972]: E1121 10:07:40.573021 4972 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7a52d5a35cd478c028a14322544e0cedd59d5fc637ca22a8848442a143badc31 is 
running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="44805331-e34b-4455-a744-4c8fe27a1b9e" containerName="ovsdbserver-sb" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.587072 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-c4vkg"] Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.593255 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican958e-account-delete-tfwsx" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.607892 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-c4vkg"] Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.619072 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-7tm9c"] Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.624518 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpzrq\" (UniqueName: \"kubernetes.io/projected/4e11aef3-0c96-44ed-8876-7e54115d181f-kube-api-access-wpzrq\") pod \"novacell05330-account-delete-cxgw8\" (UID: \"4e11aef3-0c96-44ed-8876-7e54115d181f\") " pod="openstack/novacell05330-account-delete-cxgw8" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.625050 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e11aef3-0c96-44ed-8876-7e54115d181f-operator-scripts\") pod \"novacell05330-account-delete-cxgw8\" (UID: \"4e11aef3-0c96-44ed-8876-7e54115d181f\") " pod="openstack/novacell05330-account-delete-cxgw8" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.625289 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd442c75-9e94-4f54-81b6-68c19f4de9d8-operator-scripts\") pod \"novaapi501f-account-delete-4746z\" (UID: \"dd442c75-9e94-4f54-81b6-68c19f4de9d8\") " pod="openstack/novaapi501f-account-delete-4746z" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.625348 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl6fl\" (UniqueName: \"kubernetes.io/projected/dd442c75-9e94-4f54-81b6-68c19f4de9d8-kube-api-access-gl6fl\") pod \"novaapi501f-account-delete-4746z\" (UID: \"dd442c75-9e94-4f54-81b6-68c19f4de9d8\") " pod="openstack/novaapi501f-account-delete-4746z" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.625995 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e11aef3-0c96-44ed-8876-7e54115d181f-operator-scripts\") pod \"novacell05330-account-delete-cxgw8\" (UID: \"4e11aef3-0c96-44ed-8876-7e54115d181f\") " pod="openstack/novacell05330-account-delete-cxgw8" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.626587 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd442c75-9e94-4f54-81b6-68c19f4de9d8-operator-scripts\") pod \"novaapi501f-account-delete-4746z\" (UID: \"dd442c75-9e94-4f54-81b6-68c19f4de9d8\") " pod="openstack/novaapi501f-account-delete-4746z" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.643623 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-7tm9c"] Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.687587 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-gl6fl\" (UniqueName: \"kubernetes.io/projected/dd442c75-9e94-4f54-81b6-68c19f4de9d8-kube-api-access-gl6fl\") pod \"novaapi501f-account-delete-4746z\" (UID: \"dd442c75-9e94-4f54-81b6-68c19f4de9d8\") " pod="openstack/novaapi501f-account-delete-4746z" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.713856 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_44805331-e34b-4455-a744-4c8fe27a1b9e/ovsdbserver-sb/0.log" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.713912 4972 generic.go:334] "Generic (PLEG): container finished" podID="44805331-e34b-4455-a744-4c8fe27a1b9e" containerID="0eb74e778f9330e95160a9380c73ad009f10cca6eb82633cae811a9a159e0d84" exitCode=2 Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.713927 4972 generic.go:334] "Generic (PLEG): container finished" podID="44805331-e34b-4455-a744-4c8fe27a1b9e" containerID="7a52d5a35cd478c028a14322544e0cedd59d5fc637ca22a8848442a143badc31" exitCode=143 Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.713987 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"44805331-e34b-4455-a744-4c8fe27a1b9e","Type":"ContainerDied","Data":"0eb74e778f9330e95160a9380c73ad009f10cca6eb82633cae811a9a159e0d84"} Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.714010 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"44805331-e34b-4455-a744-4c8fe27a1b9e","Type":"ContainerDied","Data":"7a52d5a35cd478c028a14322544e0cedd59d5fc637ca22a8848442a143badc31"} Nov 21 10:07:40 crc kubenswrapper[4972]: E1121 10:07:40.714028 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2414b220c5f009ec8c602f60f3e9160067fa81228e1aef74c65b742822eda70e" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 21 10:07:40 crc kubenswrapper[4972]: E1121 10:07:40.723269 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2414b220c5f009ec8c602f60f3e9160067fa81228e1aef74c65b742822eda70e" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.735237 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpzrq\" (UniqueName: \"kubernetes.io/projected/4e11aef3-0c96-44ed-8876-7e54115d181f-kube-api-access-wpzrq\") pod \"novacell05330-account-delete-cxgw8\" (UID: \"4e11aef3-0c96-44ed-8876-7e54115d181f\") " pod="openstack/novacell05330-account-delete-cxgw8" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.735340 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-s77fk"] Nov 21 10:07:40 crc kubenswrapper[4972]: E1121 10:07:40.743821 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2414b220c5f009ec8c602f60f3e9160067fa81228e1aef74c65b742822eda70e" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 21 10:07:40 crc kubenswrapper[4972]: E1121 10:07:40.751346 4972 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is 
stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="cf3edebd-74ab-4b7d-8706-2eda69d91aea" containerName="ovn-northd" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.756356 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-s77fk"] Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.756523 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c9c438ca-0f93-434d-81ea-29ae82b217bf/ovsdbserver-nb/0.log" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.756986 4972 generic.go:334] "Generic (PLEG): container finished" podID="c9c438ca-0f93-434d-81ea-29ae82b217bf" containerID="02017bfaf39dc941741a5f40c1bacc3f4996ecfcae24cb31b354109768689142" exitCode=2 Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.757077 4972 generic.go:334] "Generic (PLEG): container finished" podID="c9c438ca-0f93-434d-81ea-29ae82b217bf" containerID="b39e49481b4242d63f67036e50dc39fabe6cc04941ad1ad33655c4f1ec8f7121" exitCode=143 Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.757197 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c9c438ca-0f93-434d-81ea-29ae82b217bf","Type":"ContainerDied","Data":"02017bfaf39dc941741a5f40c1bacc3f4996ecfcae24cb31b354109768689142"} Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.757290 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c9c438ca-0f93-434d-81ea-29ae82b217bf","Type":"ContainerDied","Data":"b39e49481b4242d63f67036e50dc39fabe6cc04941ad1ad33655c4f1ec8f7121"} Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.768744 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell05330-account-delete-cxgw8" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.788774 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-2jlvt"] Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.809443 4972 generic.go:334] "Generic (PLEG): container finished" podID="cf3edebd-74ab-4b7d-8706-2eda69d91aea" containerID="8afa005bf75971cd8c3eab6a73627f83a30f054d36b834a57873a7d31d1a2e37" exitCode=2 Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.809619 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cf3edebd-74ab-4b7d-8706-2eda69d91aea","Type":"ContainerDied","Data":"8afa005bf75971cd8c3eab6a73627f83a30f054d36b834a57873a7d31d1a2e37"} Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.832443 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-2jlvt"] Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.889871 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapi501f-account-delete-4746z" Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.922660 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6868d89965-c8n5j"] Nov 21 10:07:40 crc kubenswrapper[4972]: I1121 10:07:40.922946 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6868d89965-c8n5j" podUID="4280cc0e-ca6a-47d7-be4d-a05beb85de3c" containerName="dnsmasq-dns" containerID="cri-o://69e04f00aa09648eb63cd97e3e98080e91e8163cf8df71f506e5b1e624817eb1" gracePeriod=10 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.015127 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-4z7b5"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.030868 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glanced431-account-delete-5xwls"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.044773 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5q7hj"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.055721 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-psvpd"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.059492 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-psvpd" podUID="2bb7ffc3-501c-420f-834c-0509b4a509eb" containerName="openstack-network-exporter" containerID="cri-o://4789958ba38111e46a66c03e780a501dd267a5f7418fac71da4781d35d79c30d" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.091594 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-mf7mv"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.105019 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-wn882"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.129286 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-mf7mv"] Nov 21 10:07:41 crc kubenswrapper[4972]: E1121 10:07:41.143933 4972 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 21 10:07:41 crc kubenswrapper[4972]: E1121 10:07:41.144025 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-config-data podName:2bc44abc-7710-432b-b503-fd54e3afeede nodeName:}" failed. No retries permitted until 2025-11-21 10:07:43.143961291 +0000 UTC m=+1608.253103789 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-config-data") pod "rabbitmq-cell1-server-0" (UID: "2bc44abc-7710-432b-b503-fd54e3afeede") : configmap "rabbitmq-cell1-config-data" not found Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.144929 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-wn882"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.173682 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c9c438ca-0f93-434d-81ea-29ae82b217bf/ovsdbserver-nb/0.log" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.173755 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.205972 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.206179 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8" containerName="glance-log" containerID="cri-o://e03504354d9520f07bfa1ddb744d599ccc77aeb3feeb232af0a88b1ae4acdb9b" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.212388 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8" containerName="glance-httpd" containerID="cri-o://176f31420e0751d42b4bb4b07ba6f49cbfd94280d6aa936d06410ffc01d008ff" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.232968 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-79f8cf4757-8cflk"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.233243 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-79f8cf4757-8cflk" podUID="56aac81e-b855-4419-b8a5-8f1fc099b5e6" containerName="neutron-api" containerID="cri-o://a84df8a5c99a95c300cc9bc766b529621a802a107975b46bcdb8f96199772bb6" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.233637 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-79f8cf4757-8cflk" podUID="56aac81e-b855-4419-b8a5-8f1fc099b5e6" containerName="neutron-httpd" containerID="cri-o://4d77ecd5438c1e9b16f7c8d4f0e5a8b33983d1efefc68af6391bbc8b9f26e966" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.260321 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c438ca-0f93-434d-81ea-29ae82b217bf-combined-ca-bundle\") pod \"c9c438ca-0f93-434d-81ea-29ae82b217bf\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.260379 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9c438ca-0f93-434d-81ea-29ae82b217bf-scripts\") pod \"c9c438ca-0f93-434d-81ea-29ae82b217bf\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.260435 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"c9c438ca-0f93-434d-81ea-29ae82b217bf\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.260454 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c438ca-0f93-434d-81ea-29ae82b217bf-ovsdbserver-nb-tls-certs\") pod \"c9c438ca-0f93-434d-81ea-29ae82b217bf\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.260504 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9c438ca-0f93-434d-81ea-29ae82b217bf-config\") pod \"c9c438ca-0f93-434d-81ea-29ae82b217bf\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") 
" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.260537 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7zzq\" (UniqueName: \"kubernetes.io/projected/c9c438ca-0f93-434d-81ea-29ae82b217bf-kube-api-access-n7zzq\") pod \"c9c438ca-0f93-434d-81ea-29ae82b217bf\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.260638 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c438ca-0f93-434d-81ea-29ae82b217bf-metrics-certs-tls-certs\") pod \"c9c438ca-0f93-434d-81ea-29ae82b217bf\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.260658 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9c438ca-0f93-434d-81ea-29ae82b217bf-ovsdb-rundir\") pod \"c9c438ca-0f93-434d-81ea-29ae82b217bf\" (UID: \"c9c438ca-0f93-434d-81ea-29ae82b217bf\") " Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.262711 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9c438ca-0f93-434d-81ea-29ae82b217bf-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "c9c438ca-0f93-434d-81ea-29ae82b217bf" (UID: "c9c438ca-0f93-434d-81ea-29ae82b217bf"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.263306 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9c438ca-0f93-434d-81ea-29ae82b217bf-config" (OuterVolumeSpecName: "config") pod "c9c438ca-0f93-434d-81ea-29ae82b217bf" (UID: "c9c438ca-0f93-434d-81ea-29ae82b217bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.266380 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9c438ca-0f93-434d-81ea-29ae82b217bf-scripts" (OuterVolumeSpecName: "scripts") pod "c9c438ca-0f93-434d-81ea-29ae82b217bf" (UID: "c9c438ca-0f93-434d-81ea-29ae82b217bf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.266435 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.266899 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="account-server" containerID="cri-o://ba577ff7853e877e687486121c6f0ab731e335150c782fdb6337e45da1ea7e56" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.268963 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="container-updater" containerID="cri-o://ef4d5eb5bf9e2085aa31deab41d35f315b471c1a281ec7d0fdb5669055ceae7e" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.269052 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="swift-recon-cron" containerID="cri-o://d4d2c9d3e605844fc00e4083833139b1121a575ad83be76839782a80b770f46a" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.269089 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="rsync" containerID="cri-o://d6c241802e71e9521da5b44bb300b3ed93a83b5a2a3b5384891a37d0477bcf5f" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.269126 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="object-expirer" containerID="cri-o://b27dea1fedce06fdcc7b8b10bfa4e01b3977a2c1835d79507b63bffd8cd7cf4f" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.269163 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="object-updater" containerID="cri-o://0f6fda84aaa98d450bf8db3dd84c394bcbdd91eb2c614ce51ee1f7e2fdf05d9e" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.269209 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="object-auditor" containerID="cri-o://7e3179b2cf36ea30c1f398322b657083876aff67dca73310812bf6eda27e562d" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.269243 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="object-replicator" containerID="cri-o://6b51383c400616239b3920aae870a35808849c73b781889b2d7c3fca1086fcc9" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.269275 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="object-server" containerID="cri-o://6d2997e2bf31afa38122b707eaffd973a10f37e15af3ef380d90f5a0e46e40a2" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.269306 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" 
podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="account-reaper" containerID="cri-o://1d432671871d10b2f9d36122beb37f70113843388eedcb543148c0842f970029" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.269352 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="container-auditor" containerID="cri-o://da2ba4db5685edc3025f879f7e189cf7165c184ef92f7d26d3118102cbc00186" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.269395 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="container-replicator" containerID="cri-o://56e50d004614f42f95a39a005d2e581ae7498a4ab2ace52e0c8e44e4cb64b156" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.269425 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="container-server" containerID="cri-o://4e7f746ee8e85533e7ed177d7195703edc2217f4d9450127a0eefddf988dd729" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.269460 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="account-replicator" containerID="cri-o://7ac0c52eaf55d9c6a4f11a7c5914428a511032a6d41ca1f5562b5b774ab41f34" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.269492 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="account-auditor" containerID="cri-o://8c31ccc0050d4e99074a90c40277647465e43314e1fdbb8b1f6a9b4753e956a8" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.278095 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "c9c438ca-0f93-434d-81ea-29ae82b217bf" (UID: "c9c438ca-0f93-434d-81ea-29ae82b217bf"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.284092 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-zr6wn"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.295733 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9c438ca-0f93-434d-81ea-29ae82b217bf-kube-api-access-n7zzq" (OuterVolumeSpecName: "kube-api-access-n7zzq") pod "c9c438ca-0f93-434d-81ea-29ae82b217bf" (UID: "c9c438ca-0f93-434d-81ea-29ae82b217bf"). InnerVolumeSpecName "kube-api-access-n7zzq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.302864 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-zr6wn"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.309248 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.309485 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b2069a31-382b-4fc4-acee-cf202be1de1e" containerName="glance-log" containerID="cri-o://70d10fbe1cb3eca06bf152b5ac4e871031e9408c9157ad25660a3912d1bfdcf3" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.310103 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b2069a31-382b-4fc4-acee-cf202be1de1e" containerName="glance-httpd" containerID="cri-o://fa23d72a8ed2e8dc42ba23984ce0256b39eb3f3688efcf051d30829a56d4b1b1" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.317778 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-78d4f89dc4-2qvzl"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.318663 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-78d4f89dc4-2qvzl" podUID="8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd" containerName="placement-log" containerID="cri-o://a0ef5b5653ff065d56f37e3509be4061e6cfc1eda3f19880fd2fed960a808923" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.319196 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-78d4f89dc4-2qvzl" podUID="8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd" containerName="placement-api" containerID="cri-o://8281402abc59d2d6389eba9427fb6df68e1ff2f3cf37736cf084b96b31b30e0f" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.337660 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.367417 4972 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.367449 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9c438ca-0f93-434d-81ea-29ae82b217bf-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.367458 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7zzq\" (UniqueName: \"kubernetes.io/projected/c9c438ca-0f93-434d-81ea-29ae82b217bf-kube-api-access-n7zzq\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.367467 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9c438ca-0f93-434d-81ea-29ae82b217bf-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.367476 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9c438ca-0f93-434d-81ea-29ae82b217bf-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:41 crc kubenswrapper[4972]: E1121 10:07:41.370110 4972 configmap.go:193] Couldn't get 
configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 21 10:07:41 crc kubenswrapper[4972]: E1121 10:07:41.370174 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-config-data podName:392b5094-f8ef-47b8-8dc5-9e1d2dbef612 nodeName:}" failed. No retries permitted until 2025-11-21 10:07:43.370157872 +0000 UTC m=+1608.479300370 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-config-data") pod "rabbitmq-server-0" (UID: "392b5094-f8ef-47b8-8dc5-9e1d2dbef612") : configmap "rabbitmq-config-data" not found Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.458956 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.463749 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="befdbf4d-7d20-40ca-9985-8309a0295dad" containerName="cinder-scheduler" containerID="cri-o://a15be01405eca32ee7f4728571d79fcbd0503e5e431bd39069cf9acff52fbb75" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.464067 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="befdbf4d-7d20-40ca-9985-8309a0295dad" containerName="probe" containerID="cri-o://317b0a027d1cd6e33d37ffe81bf487ffe11faf05f8530114c4462518fd06d92c" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: E1121 10:07:41.483682 4972 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Nov 21 10:07:41 crc kubenswrapper[4972]: E1121 10:07:41.483746 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-scripts podName:befdbf4d-7d20-40ca-9985-8309a0295dad nodeName:}" failed. No retries permitted until 2025-11-21 10:07:43.483729026 +0000 UTC m=+1608.592871614 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-scripts") pod "cinder-scheduler-0" (UID: "befdbf4d-7d20-40ca-9985-8309a0295dad") : secret "cinder-scripts" not found Nov 21 10:07:41 crc kubenswrapper[4972]: E1121 10:07:41.483799 4972 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Nov 21 10:07:41 crc kubenswrapper[4972]: E1121 10:07:41.483858 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-config-data podName:befdbf4d-7d20-40ca-9985-8309a0295dad nodeName:}" failed. No retries permitted until 2025-11-21 10:07:43.483817188 +0000 UTC m=+1608.592959696 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-config-data") pod "cinder-scheduler-0" (UID: "befdbf4d-7d20-40ca-9985-8309a0295dad") : secret "cinder-config-data" not found Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.487148 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.487604 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="dc57ffef-2527-4b16-b281-9139b6a0f1a1" containerName="cinder-api-log" containerID="cri-o://ff4509a52935ca39544f656f7b2bbdfab72d26e1ceca8275ec6a319273e973ad" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.490239 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="dc57ffef-2527-4b16-b281-9139b6a0f1a1" containerName="cinder-api" containerID="cri-o://49d40728e956c86ac5f50feea6dabb003f5b7e12b18b22782321e4bdfa6a4d07" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.509849 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9c438ca-0f93-434d-81ea-29ae82b217bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9c438ca-0f93-434d-81ea-29ae82b217bf" (UID: "c9c438ca-0f93-434d-81ea-29ae82b217bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.520104 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.585711 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c438ca-0f93-434d-81ea-29ae82b217bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.601053 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.622069 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.634443 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3c3ae47e-fcf5-4397-a2a4-8e847e542d75" containerName="nova-scheduler-scheduler" containerID="cri-o://528382eac18bd0308541931e46106ebb14493b19c4b625ee808c7a117ef52180" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.640471 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="2bc44abc-7710-432b-b503-fd54e3afeede" containerName="rabbitmq" containerID="cri-o://c8fbc9ceb2b6148e29eeae60a7cccd8704bb5b0088efc4a03700f71500ec7ef2" gracePeriod=604800 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.684602 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9c438ca-0f93-434d-81ea-29ae82b217bf-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "c9c438ca-0f93-434d-81ea-29ae82b217bf" (UID: "c9c438ca-0f93-434d-81ea-29ae82b217bf"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.687210 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9c438ca-0f93-434d-81ea-29ae82b217bf-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "c9c438ca-0f93-434d-81ea-29ae82b217bf" (UID: "c9c438ca-0f93-434d-81ea-29ae82b217bf"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.712236 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.712619 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="302b9e1c-affd-4f2f-bacd-98f40dedeb91" containerName="nova-api-log" containerID="cri-o://9663e5eeed349feea42f465ac185ce4a281832b49f5ad7e6676845ad1940d586" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.712965 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="302b9e1c-affd-4f2f-bacd-98f40dedeb91" containerName="nova-api-api" containerID="cri-o://a50f5adc14def76f321fa0ba2955141d2c00ea811995acd79453b97be54e414e" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.725256 4972 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c438ca-0f93-434d-81ea-29ae82b217bf-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.725417 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c438ca-0f93-434d-81ea-29ae82b217bf-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.737742 4972 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.741194 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="392b5094-f8ef-47b8-8dc5-9e1d2dbef612" containerName="rabbitmq" containerID="cri-o://40fd57bb0048a573eb9c5e1aa41727272375095e934fe8e65459e974a94e41af" gracePeriod=604800 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.758449 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-4vsj6"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.779577 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="025ad09c-467a-451c-a24d-4bf686469677" path="/var/lib/kubelet/pods/025ad09c-467a-451c-a24d-4bf686469677/volumes" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.780749 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d74b758-4b34-4381-bb6d-ba95a0ce1c62" path="/var/lib/kubelet/pods/1d74b758-4b34-4381-bb6d-ba95a0ce1c62/volumes" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.781282 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79051f6b-9693-43af-af16-8298e8205c25" path="/var/lib/kubelet/pods/79051f6b-9693-43af-af16-8298e8205c25/volumes" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.781895 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="795431b0-73d4-4c09-95ec-59c039a001d4" path="/var/lib/kubelet/pods/795431b0-73d4-4c09-95ec-59c039a001d4/volumes" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.783308 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84200540-581b-4ed8-b86e-0e744be73aba" path="/var/lib/kubelet/pods/84200540-581b-4ed8-b86e-0e744be73aba/volumes" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.784015 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9359e0ad-9677-4dfd-8cc2-bb9e40144ab3" path="/var/lib/kubelet/pods/9359e0ad-9677-4dfd-8cc2-bb9e40144ab3/volumes" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.785016 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c8d121f-c5f7-42c3-a8ce-6cbb48064e25" path="/var/lib/kubelet/pods/9c8d121f-c5f7-42c3-a8ce-6cbb48064e25/volumes" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.786027 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9513939-1a73-46a3-a946-db9b1008314f" path="/var/lib/kubelet/pods/f9513939-1a73-46a3-a946-db9b1008314f/volumes" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.793912 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-4vsj6"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.830256 4972 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.859190 4972 generic.go:334] "Generic (PLEG): container finished" podID="78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8" containerID="e03504354d9520f07bfa1ddb744d599ccc77aeb3feeb232af0a88b1ae4acdb9b" exitCode=143 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.859278 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8","Type":"ContainerDied","Data":"e03504354d9520f07bfa1ddb744d599ccc77aeb3feeb232af0a88b1ae4acdb9b"} Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.882696 4972 generic.go:334] "Generic (PLEG): container finished" podID="dc57ffef-2527-4b16-b281-9139b6a0f1a1" containerID="ff4509a52935ca39544f656f7b2bbdfab72d26e1ceca8275ec6a319273e973ad" exitCode=143 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.882785 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"dc57ffef-2527-4b16-b281-9139b6a0f1a1","Type":"ContainerDied","Data":"ff4509a52935ca39544f656f7b2bbdfab72d26e1ceca8275ec6a319273e973ad"} Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.895482 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glanced431-account-delete-5xwls" event={"ID":"f1224d0f-d488-49e6-b6dc-12a188b43a43","Type":"ContainerStarted","Data":"96662fa48a81d7c69f1cd5cab7b4c0fb630f2d8f1d9b1a05a36d873f1aaf9857"} Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.908891 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.909165 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="57f61d22-4b79-4f80-b7dc-0f5bea4b506d" containerName="nova-metadata-log" containerID="cri-o://278382caff383ae2485ecd6e804ee41e63dbe81738ffa66e0a8508b7e3a9f20e" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.909692 4972 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="57f61d22-4b79-4f80-b7dc-0f5bea4b506d" containerName="nova-metadata-metadata" containerID="cri-o://9b2a5c600705559dcf3e1539bb652e7d2fca4320b9c98a74794471b4827bdfb0" gracePeriod=30 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.938554 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-psvpd_2bb7ffc3-501c-420f-834c-0509b4a509eb/openstack-network-exporter/0.log" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.938599 4972 generic.go:334] "Generic (PLEG): container finished" podID="2bb7ffc3-501c-420f-834c-0509b4a509eb" containerID="4789958ba38111e46a66c03e780a501dd267a5f7418fac71da4781d35d79c30d" exitCode=2 Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.938668 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-psvpd" event={"ID":"2bb7ffc3-501c-420f-834c-0509b4a509eb","Type":"ContainerDied","Data":"4789958ba38111e46a66c03e780a501dd267a5f7418fac71da4781d35d79c30d"} Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.953415 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-ac8a-account-create-nnzz2"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.972399 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c9c438ca-0f93-434d-81ea-29ae82b217bf/ovsdbserver-nb/0.log" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.972546 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.972941 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ac8a-account-create-nnzz2"] Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.972993 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c9c438ca-0f93-434d-81ea-29ae82b217bf","Type":"ContainerDied","Data":"d5a4b940543951812eed51459974e2b8ecc44e5e525e6a28b1a85924cc617f0f"} Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.973026 4972 scope.go:117] "RemoveContainer" containerID="02017bfaf39dc941741a5f40c1bacc3f4996ecfcae24cb31b354109768689142" Nov 21 10:07:41 crc kubenswrapper[4972]: I1121 10:07:41.991586 4972 generic.go:334] "Generic (PLEG): container finished" podID="88c81504-7f14-498f-bd8d-4fa74aebf2d2" containerID="30837ba80ae724788ccc279d47025486e335f00269e93508eaa7eabb78466914" exitCode=137 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:41.998127 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-748ccf64d9-7vqzf"] Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:41.998137 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="8ed54a06-08b9-41a2-92d9-a745631e053c" containerName="galera" containerID="cri-o://a55fa1434e8f1c900b3bebdfafac5e43b0fb9083af7325dfb76ac2940d2d38b2" gracePeriod=30 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:41.998313 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-748ccf64d9-7vqzf" podUID="f69d7d80-dc29-4483-917c-c25921b56e9c" containerName="proxy-httpd" containerID="cri-o://8926429e6adc873b354d1ee81d691befb13ab0ab948a58cf412a0056811cc98c" gracePeriod=30 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:41.998348 4972 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/swift-proxy-748ccf64d9-7vqzf" podUID="f69d7d80-dc29-4483-917c-c25921b56e9c" containerName="proxy-server" containerID="cri-o://b2643e338f0bda35e4096054b2e9e135f60dcfd58f21d3cfab34ec25fee2e932" gracePeriod=30 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.004254 4972 generic.go:334] "Generic (PLEG): container finished" podID="b2069a31-382b-4fc4-acee-cf202be1de1e" containerID="70d10fbe1cb3eca06bf152b5ac4e871031e9408c9157ad25660a3912d1bfdcf3" exitCode=143 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.004310 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b2069a31-382b-4fc4-acee-cf202be1de1e","Type":"ContainerDied","Data":"70d10fbe1cb3eca06bf152b5ac4e871031e9408c9157ad25660a3912d1bfdcf3"} Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.005284 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.016003 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7fcd667fc5-5ctgv"] Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.016395 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7fcd667fc5-5ctgv" podUID="1934e8d3-ef66-4d0e-8d12-bd958545270a" containerName="barbican-worker-log" containerID="cri-o://b4cd6783c1c066e41ca01043747c17250cebc9cc0aed250c754bd49748a690ad" gracePeriod=30 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.016944 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7fcd667fc5-5ctgv" podUID="1934e8d3-ef66-4d0e-8d12-bd958545270a" containerName="barbican-worker" containerID="cri-o://1490090909ceb9184fad5aa95d87536f218026b674ea5f4c01d93e9061fced2f" gracePeriod=30 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.032301 4972 generic.go:334] "Generic (PLEG): container finished" podID="4280cc0e-ca6a-47d7-be4d-a05beb85de3c" containerID="69e04f00aa09648eb63cd97e3e98080e91e8163cf8df71f506e5b1e624817eb1" exitCode=0 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.032368 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6868d89965-c8n5j" event={"ID":"4280cc0e-ca6a-47d7-be4d-a05beb85de3c","Type":"ContainerDied","Data":"69e04f00aa09648eb63cd97e3e98080e91e8163cf8df71f506e5b1e624817eb1"} Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.032490 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-4z7b5" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovs-vswitchd" containerID="cri-o://00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad" gracePeriod=29 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.039336 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.039412 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-78666b77b6-ll6mt"] Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.039588 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" podUID="fbaa8ec7-5499-43d1-ac80-dd8708d28643" containerName="barbican-keystone-listener-log" containerID="cri-o://cfd792eb202fbf7b53ee8748aadb575b4d7545be47e73ef984e2cbe95e0adcce" gracePeriod=30 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.039655 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" podUID="fbaa8ec7-5499-43d1-ac80-dd8708d28643" containerName="barbican-keystone-listener" containerID="cri-o://8e1c5eaa82bd2eee5d1cc7e05fbf76fc1373742047fa91d2696d0552ca0cc505" gracePeriod=30 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.049185 4972 generic.go:334] "Generic (PLEG): container finished" podID="56aac81e-b855-4419-b8a5-8f1fc099b5e6" containerID="4d77ecd5438c1e9b16f7c8d4f0e5a8b33983d1efefc68af6391bbc8b9f26e966" exitCode=0 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.049263 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79f8cf4757-8cflk" event={"ID":"56aac81e-b855-4419-b8a5-8f1fc099b5e6","Type":"ContainerDied","Data":"4d77ecd5438c1e9b16f7c8d4f0e5a8b33983d1efefc68af6391bbc8b9f26e966"} Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.066865 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_44805331-e34b-4455-a744-4c8fe27a1b9e/ovsdbserver-sb/0.log" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.067167 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.067317 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-669568d65b-4t6gp"] Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.067788 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-669568d65b-4t6gp" podUID="272d9c39-ab5b-4fc1-8dbe-209fbe33e293" containerName="barbican-api-log" containerID="cri-o://3fe0c9bf4632a5a91bbedb92ac2a74a5be61932a64fbf9dec4c9fe6b9c892be9" gracePeriod=30 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.067897 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-669568d65b-4t6gp" podUID="272d9c39-ab5b-4fc1-8dbe-209fbe33e293" containerName="barbican-api" containerID="cri-o://c170fcfc81ca59f5bc98bc8edc442c5c3a824cf4040a9ddb3b5479628d9471b5" gracePeriod=30 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.071973 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.072127 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="fc61b266-e156-4999-8ec7-8aa1f1988e42" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a762824bd37bcd1f70426519763b522438780d31ef55d3f35b56ca5424e1e1ee" gracePeriod=30 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.079725 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-njh52"] Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.090593 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-njh52"] Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.110867 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.111095 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8" containerName="nova-cell1-conductor-conductor" containerID="cri-o://2c8b3c0c3518327c81d51434735d0ba0511f266e78981d13077248c64dbb2a4c" gracePeriod=30 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.117374 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8dt78"] Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.126820 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.127036 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="5eb9b3f4-6710-4818-b94c-494958fe31ad" containerName="nova-cell0-conductor-conductor" containerID="cri-o://c8508fa7978ab01e9ed41259f44d3ae46c68a33ace18516f62967a60ede2d29a" gracePeriod=30 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.143409 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8dt78"] Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.150366 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/88c81504-7f14-498f-bd8d-4fa74aebf2d2-openstack-config\") pod \"88c81504-7f14-498f-bd8d-4fa74aebf2d2\" (UID: 
\"88c81504-7f14-498f-bd8d-4fa74aebf2d2\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.150401 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-ovsdbserver-sb\") pod \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.150444 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-config\") pod \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.150510 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44805331-e34b-4455-a744-4c8fe27a1b9e-scripts\") pod \"44805331-e34b-4455-a744-4c8fe27a1b9e\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.150528 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-dns-swift-storage-0\") pod \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.150555 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44805331-e34b-4455-a744-4c8fe27a1b9e-combined-ca-bundle\") pod \"44805331-e34b-4455-a744-4c8fe27a1b9e\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.150599 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-dns-svc\") pod \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.151244 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44805331-e34b-4455-a744-4c8fe27a1b9e-ovsdbserver-sb-tls-certs\") pod \"44805331-e34b-4455-a744-4c8fe27a1b9e\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.151317 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6krh\" (UniqueName: \"kubernetes.io/projected/88c81504-7f14-498f-bd8d-4fa74aebf2d2-kube-api-access-b6krh\") pod \"88c81504-7f14-498f-bd8d-4fa74aebf2d2\" (UID: \"88c81504-7f14-498f-bd8d-4fa74aebf2d2\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.151362 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/44805331-e34b-4455-a744-4c8fe27a1b9e-ovsdb-rundir\") pod \"44805331-e34b-4455-a744-4c8fe27a1b9e\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.152190 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg45j\" (UniqueName: \"kubernetes.io/projected/44805331-e34b-4455-a744-4c8fe27a1b9e-kube-api-access-tg45j\") pod \"44805331-e34b-4455-a744-4c8fe27a1b9e\" (UID: 
\"44805331-e34b-4455-a744-4c8fe27a1b9e\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.152220 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44805331-e34b-4455-a744-4c8fe27a1b9e-config\") pod \"44805331-e34b-4455-a744-4c8fe27a1b9e\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.152249 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/88c81504-7f14-498f-bd8d-4fa74aebf2d2-openstack-config-secret\") pod \"88c81504-7f14-498f-bd8d-4fa74aebf2d2\" (UID: \"88c81504-7f14-498f-bd8d-4fa74aebf2d2\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.152272 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/44805331-e34b-4455-a744-4c8fe27a1b9e-metrics-certs-tls-certs\") pod \"44805331-e34b-4455-a744-4c8fe27a1b9e\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.152297 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"44805331-e34b-4455-a744-4c8fe27a1b9e\" (UID: \"44805331-e34b-4455-a744-4c8fe27a1b9e\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.152366 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c81504-7f14-498f-bd8d-4fa74aebf2d2-combined-ca-bundle\") pod \"88c81504-7f14-498f-bd8d-4fa74aebf2d2\" (UID: \"88c81504-7f14-498f-bd8d-4fa74aebf2d2\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.152392 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-ovsdbserver-nb\") pod \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.152412 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz8nl\" (UniqueName: \"kubernetes.io/projected/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-kube-api-access-pz8nl\") pod \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\" (UID: \"4280cc0e-ca6a-47d7-be4d-a05beb85de3c\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.153751 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.154152 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44805331-e34b-4455-a744-4c8fe27a1b9e-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "44805331-e34b-4455-a744-4c8fe27a1b9e" (UID: "44805331-e34b-4455-a744-4c8fe27a1b9e"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164561 4972 generic.go:334] "Generic (PLEG): container finished" podID="31e140ab-a53a-4af2-864f-4c399d44f217" containerID="d6c241802e71e9521da5b44bb300b3ed93a83b5a2a3b5384891a37d0477bcf5f" exitCode=0 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164598 4972 generic.go:334] "Generic (PLEG): container finished" podID="31e140ab-a53a-4af2-864f-4c399d44f217" containerID="b27dea1fedce06fdcc7b8b10bfa4e01b3977a2c1835d79507b63bffd8cd7cf4f" exitCode=0 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164606 4972 generic.go:334] "Generic (PLEG): container finished" podID="31e140ab-a53a-4af2-864f-4c399d44f217" containerID="0f6fda84aaa98d450bf8db3dd84c394bcbdd91eb2c614ce51ee1f7e2fdf05d9e" exitCode=0 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164612 4972 generic.go:334] "Generic (PLEG): container finished" podID="31e140ab-a53a-4af2-864f-4c399d44f217" containerID="7e3179b2cf36ea30c1f398322b657083876aff67dca73310812bf6eda27e562d" exitCode=0 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164618 4972 generic.go:334] "Generic (PLEG): container finished" podID="31e140ab-a53a-4af2-864f-4c399d44f217" containerID="6b51383c400616239b3920aae870a35808849c73b781889b2d7c3fca1086fcc9" exitCode=0 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164625 4972 generic.go:334] "Generic (PLEG): container finished" podID="31e140ab-a53a-4af2-864f-4c399d44f217" containerID="ef4d5eb5bf9e2085aa31deab41d35f315b471c1a281ec7d0fdb5669055ceae7e" exitCode=0 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164631 4972 generic.go:334] "Generic (PLEG): container finished" podID="31e140ab-a53a-4af2-864f-4c399d44f217" containerID="da2ba4db5685edc3025f879f7e189cf7165c184ef92f7d26d3118102cbc00186" exitCode=0 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164640 4972 generic.go:334] "Generic (PLEG): container finished" podID="31e140ab-a53a-4af2-864f-4c399d44f217" containerID="56e50d004614f42f95a39a005d2e581ae7498a4ab2ace52e0c8e44e4cb64b156" exitCode=0 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164646 4972 generic.go:334] "Generic (PLEG): container finished" podID="31e140ab-a53a-4af2-864f-4c399d44f217" containerID="1d432671871d10b2f9d36122beb37f70113843388eedcb543148c0842f970029" exitCode=0 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164653 4972 generic.go:334] "Generic (PLEG): container finished" podID="31e140ab-a53a-4af2-864f-4c399d44f217" containerID="8c31ccc0050d4e99074a90c40277647465e43314e1fdbb8b1f6a9b4753e956a8" exitCode=0 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164659 4972 generic.go:334] "Generic (PLEG): container finished" podID="31e140ab-a53a-4af2-864f-4c399d44f217" containerID="7ac0c52eaf55d9c6a4f11a7c5914428a511032a6d41ca1f5562b5b774ab41f34" exitCode=0 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164722 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerDied","Data":"d6c241802e71e9521da5b44bb300b3ed93a83b5a2a3b5384891a37d0477bcf5f"} Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164748 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerDied","Data":"b27dea1fedce06fdcc7b8b10bfa4e01b3977a2c1835d79507b63bffd8cd7cf4f"} Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164758 4972 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerDied","Data":"0f6fda84aaa98d450bf8db3dd84c394bcbdd91eb2c614ce51ee1f7e2fdf05d9e"} Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164768 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerDied","Data":"7e3179b2cf36ea30c1f398322b657083876aff67dca73310812bf6eda27e562d"} Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164778 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerDied","Data":"6b51383c400616239b3920aae870a35808849c73b781889b2d7c3fca1086fcc9"} Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164789 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerDied","Data":"ef4d5eb5bf9e2085aa31deab41d35f315b471c1a281ec7d0fdb5669055ceae7e"} Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164798 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerDied","Data":"da2ba4db5685edc3025f879f7e189cf7165c184ef92f7d26d3118102cbc00186"} Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164808 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerDied","Data":"56e50d004614f42f95a39a005d2e581ae7498a4ab2ace52e0c8e44e4cb64b156"} Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164817 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerDied","Data":"1d432671871d10b2f9d36122beb37f70113843388eedcb543148c0842f970029"} Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164856 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerDied","Data":"8c31ccc0050d4e99074a90c40277647465e43314e1fdbb8b1f6a9b4753e956a8"} Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.164868 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerDied","Data":"7ac0c52eaf55d9c6a4f11a7c5914428a511032a6d41ca1f5562b5b774ab41f34"} Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.167823 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.170236 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44805331-e34b-4455-a744-4c8fe27a1b9e-scripts" (OuterVolumeSpecName: "scripts") pod "44805331-e34b-4455-a744-4c8fe27a1b9e" (UID: "44805331-e34b-4455-a744-4c8fe27a1b9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.183127 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "44805331-e34b-4455-a744-4c8fe27a1b9e" (UID: "44805331-e34b-4455-a744-4c8fe27a1b9e"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.183124 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88c81504-7f14-498f-bd8d-4fa74aebf2d2-kube-api-access-b6krh" (OuterVolumeSpecName: "kube-api-access-b6krh") pod "88c81504-7f14-498f-bd8d-4fa74aebf2d2" (UID: "88c81504-7f14-498f-bd8d-4fa74aebf2d2"). InnerVolumeSpecName "kube-api-access-b6krh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.192641 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44805331-e34b-4455-a744-4c8fe27a1b9e-config" (OuterVolumeSpecName: "config") pod "44805331-e34b-4455-a744-4c8fe27a1b9e" (UID: "44805331-e34b-4455-a744-4c8fe27a1b9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.199042 4972 scope.go:117] "RemoveContainer" containerID="b39e49481b4242d63f67036e50dc39fabe6cc04941ad1ad33655c4f1ec8f7121" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.199851 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_44805331-e34b-4455-a744-4c8fe27a1b9e/ovsdbserver-sb/0.log" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.199911 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"44805331-e34b-4455-a744-4c8fe27a1b9e","Type":"ContainerDied","Data":"51437a607ea84e00f81befc0dea14277711d78f788df5d6084c2799f8be0ded0"} Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.200021 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.210817 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44805331-e34b-4455-a744-4c8fe27a1b9e-kube-api-access-tg45j" (OuterVolumeSpecName: "kube-api-access-tg45j") pod "44805331-e34b-4455-a744-4c8fe27a1b9e" (UID: "44805331-e34b-4455-a744-4c8fe27a1b9e"). InnerVolumeSpecName "kube-api-access-tg45j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.211171 4972 generic.go:334] "Generic (PLEG): container finished" podID="8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd" containerID="a0ef5b5653ff065d56f37e3509be4061e6cfc1eda3f19880fd2fed960a808923" exitCode=143 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.211212 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78d4f89dc4-2qvzl" event={"ID":"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd","Type":"ContainerDied","Data":"a0ef5b5653ff065d56f37e3509be4061e6cfc1eda3f19880fd2fed960a808923"} Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.255494 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6krh\" (UniqueName: \"kubernetes.io/projected/88c81504-7f14-498f-bd8d-4fa74aebf2d2-kube-api-access-b6krh\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.255533 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/44805331-e34b-4455-a744-4c8fe27a1b9e-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.255548 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg45j\" (UniqueName: \"kubernetes.io/projected/44805331-e34b-4455-a744-4c8fe27a1b9e-kube-api-access-tg45j\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.255560 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44805331-e34b-4455-a744-4c8fe27a1b9e-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.255585 4972 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.255990 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44805331-e34b-4455-a744-4c8fe27a1b9e-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.264912 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-kube-api-access-pz8nl" (OuterVolumeSpecName: "kube-api-access-pz8nl") pod "4280cc0e-ca6a-47d7-be4d-a05beb85de3c" (UID: "4280cc0e-ca6a-47d7-be4d-a05beb85de3c"). InnerVolumeSpecName "kube-api-access-pz8nl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.276026 4972 scope.go:117] "RemoveContainer" containerID="69e04f00aa09648eb63cd97e3e98080e91e8163cf8df71f506e5b1e624817eb1" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.296375 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-psvpd_2bb7ffc3-501c-420f-834c-0509b4a509eb/openstack-network-exporter/0.log" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.296447 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.298990 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88c81504-7f14-498f-bd8d-4fa74aebf2d2-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "88c81504-7f14-498f-bd8d-4fa74aebf2d2" (UID: "88c81504-7f14-498f-bd8d-4fa74aebf2d2"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.330012 4972 scope.go:117] "RemoveContainer" containerID="f9b9c1a4d9ec5055177eaed9a2495ea1e1c2facf3f6db6eee4efc4bb144f6fda" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.356845 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2bb7ffc3-501c-420f-834c-0509b4a509eb-ovs-rundir\") pod \"2bb7ffc3-501c-420f-834c-0509b4a509eb\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.357036 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb7ffc3-501c-420f-834c-0509b4a509eb-combined-ca-bundle\") pod \"2bb7ffc3-501c-420f-834c-0509b4a509eb\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.357121 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2bb7ffc3-501c-420f-834c-0509b4a509eb-ovn-rundir\") pod \"2bb7ffc3-501c-420f-834c-0509b4a509eb\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.357156 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bb7ffc3-501c-420f-834c-0509b4a509eb-metrics-certs-tls-certs\") pod \"2bb7ffc3-501c-420f-834c-0509b4a509eb\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.357177 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj4pg\" (UniqueName: \"kubernetes.io/projected/2bb7ffc3-501c-420f-834c-0509b4a509eb-kube-api-access-bj4pg\") pod \"2bb7ffc3-501c-420f-834c-0509b4a509eb\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.357204 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bb7ffc3-501c-420f-834c-0509b4a509eb-config\") pod \"2bb7ffc3-501c-420f-834c-0509b4a509eb\" (UID: \"2bb7ffc3-501c-420f-834c-0509b4a509eb\") " Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.357821 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz8nl\" (UniqueName: \"kubernetes.io/projected/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-kube-api-access-pz8nl\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.357853 4972 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/88c81504-7f14-498f-bd8d-4fa74aebf2d2-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.358289 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/2bb7ffc3-501c-420f-834c-0509b4a509eb-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "2bb7ffc3-501c-420f-834c-0509b4a509eb" (UID: "2bb7ffc3-501c-420f-834c-0509b4a509eb"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.358437 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bb7ffc3-501c-420f-834c-0509b4a509eb-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "2bb7ffc3-501c-420f-834c-0509b4a509eb" (UID: "2bb7ffc3-501c-420f-834c-0509b4a509eb"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: E1121 10:07:42.359918 4972 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Nov 21 10:07:42 crc kubenswrapper[4972]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 21 10:07:42 crc kubenswrapper[4972]: + source /usr/local/bin/container-scripts/functions Nov 21 10:07:42 crc kubenswrapper[4972]: ++ OVNBridge=br-int Nov 21 10:07:42 crc kubenswrapper[4972]: ++ OVNRemote=tcp:localhost:6642 Nov 21 10:07:42 crc kubenswrapper[4972]: ++ OVNEncapType=geneve Nov 21 10:07:42 crc kubenswrapper[4972]: ++ OVNAvailabilityZones= Nov 21 10:07:42 crc kubenswrapper[4972]: ++ EnableChassisAsGateway=true Nov 21 10:07:42 crc kubenswrapper[4972]: ++ PhysicalNetworks= Nov 21 10:07:42 crc kubenswrapper[4972]: ++ OVNHostName= Nov 21 10:07:42 crc kubenswrapper[4972]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 21 10:07:42 crc kubenswrapper[4972]: ++ ovs_dir=/var/lib/openvswitch Nov 21 10:07:42 crc kubenswrapper[4972]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 21 10:07:42 crc kubenswrapper[4972]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 21 10:07:42 crc kubenswrapper[4972]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 21 10:07:42 crc kubenswrapper[4972]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 21 10:07:42 crc kubenswrapper[4972]: + sleep 0.5 Nov 21 10:07:42 crc kubenswrapper[4972]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 21 10:07:42 crc kubenswrapper[4972]: + sleep 0.5 Nov 21 10:07:42 crc kubenswrapper[4972]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 21 10:07:42 crc kubenswrapper[4972]: + cleanup_ovsdb_server_semaphore Nov 21 10:07:42 crc kubenswrapper[4972]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 21 10:07:42 crc kubenswrapper[4972]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 21 10:07:42 crc kubenswrapper[4972]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-4z7b5" message=< Nov 21 10:07:42 crc kubenswrapper[4972]: Exiting ovsdb-server (5) [ OK ] Nov 21 10:07:42 crc kubenswrapper[4972]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 21 10:07:42 crc kubenswrapper[4972]: + source /usr/local/bin/container-scripts/functions Nov 21 10:07:42 crc kubenswrapper[4972]: ++ OVNBridge=br-int Nov 21 10:07:42 crc kubenswrapper[4972]: ++ OVNRemote=tcp:localhost:6642 Nov 21 10:07:42 crc kubenswrapper[4972]: ++ OVNEncapType=geneve Nov 21 10:07:42 crc kubenswrapper[4972]: ++ OVNAvailabilityZones= Nov 21 10:07:42 crc kubenswrapper[4972]: ++ EnableChassisAsGateway=true Nov 21 10:07:42 crc kubenswrapper[4972]: ++ PhysicalNetworks= Nov 21 10:07:42 crc kubenswrapper[4972]: ++ OVNHostName= Nov 21 10:07:42 crc kubenswrapper[4972]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 21 10:07:42 crc kubenswrapper[4972]: ++ ovs_dir=/var/lib/openvswitch Nov 21 10:07:42 crc kubenswrapper[4972]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 21 10:07:42 crc kubenswrapper[4972]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 21 10:07:42 crc kubenswrapper[4972]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 21 10:07:42 crc kubenswrapper[4972]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 21 10:07:42 crc kubenswrapper[4972]: + sleep 0.5 Nov 21 10:07:42 crc kubenswrapper[4972]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 21 10:07:42 crc kubenswrapper[4972]: + sleep 0.5 Nov 21 10:07:42 crc kubenswrapper[4972]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 21 10:07:42 crc kubenswrapper[4972]: + cleanup_ovsdb_server_semaphore Nov 21 10:07:42 crc kubenswrapper[4972]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 21 10:07:42 crc kubenswrapper[4972]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 21 10:07:42 crc kubenswrapper[4972]: > Nov 21 10:07:42 crc kubenswrapper[4972]: E1121 10:07:42.359968 4972 kuberuntime_container.go:691] "PreStop hook failed" err=< Nov 21 10:07:42 crc kubenswrapper[4972]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 21 10:07:42 crc kubenswrapper[4972]: + source /usr/local/bin/container-scripts/functions Nov 21 10:07:42 crc kubenswrapper[4972]: ++ OVNBridge=br-int Nov 21 10:07:42 crc kubenswrapper[4972]: ++ OVNRemote=tcp:localhost:6642 Nov 21 10:07:42 crc kubenswrapper[4972]: ++ OVNEncapType=geneve Nov 21 10:07:42 crc kubenswrapper[4972]: ++ OVNAvailabilityZones= Nov 21 10:07:42 crc kubenswrapper[4972]: ++ EnableChassisAsGateway=true Nov 21 10:07:42 crc kubenswrapper[4972]: ++ PhysicalNetworks= Nov 21 10:07:42 crc kubenswrapper[4972]: ++ OVNHostName= Nov 21 10:07:42 crc kubenswrapper[4972]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 21 10:07:42 crc kubenswrapper[4972]: ++ ovs_dir=/var/lib/openvswitch Nov 21 10:07:42 crc kubenswrapper[4972]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 21 10:07:42 crc kubenswrapper[4972]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 21 10:07:42 crc kubenswrapper[4972]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 21 10:07:42 crc kubenswrapper[4972]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 21 10:07:42 crc kubenswrapper[4972]: + sleep 0.5 Nov 21 10:07:42 crc kubenswrapper[4972]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 21 10:07:42 crc kubenswrapper[4972]: + sleep 0.5 Nov 21 10:07:42 crc kubenswrapper[4972]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 21 10:07:42 crc kubenswrapper[4972]: + cleanup_ovsdb_server_semaphore Nov 21 10:07:42 crc kubenswrapper[4972]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 21 10:07:42 crc kubenswrapper[4972]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 21 10:07:42 crc kubenswrapper[4972]: > pod="openstack/ovn-controller-ovs-4z7b5" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovsdb-server" containerID="cri-o://463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.360009 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-4z7b5" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovsdb-server" containerID="cri-o://463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" gracePeriod=29 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.360006 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bb7ffc3-501c-420f-834c-0509b4a509eb-config" (OuterVolumeSpecName: "config") pod "2bb7ffc3-501c-420f-834c-0509b4a509eb" (UID: "2bb7ffc3-501c-420f-834c-0509b4a509eb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.367767 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bb7ffc3-501c-420f-834c-0509b4a509eb-kube-api-access-bj4pg" (OuterVolumeSpecName: "kube-api-access-bj4pg") pod "2bb7ffc3-501c-420f-834c-0509b4a509eb" (UID: "2bb7ffc3-501c-420f-834c-0509b4a509eb"). InnerVolumeSpecName "kube-api-access-bj4pg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.369683 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44805331-e34b-4455-a744-4c8fe27a1b9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44805331-e34b-4455-a744-4c8fe27a1b9e" (UID: "44805331-e34b-4455-a744-4c8fe27a1b9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.406520 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4280cc0e-ca6a-47d7-be4d-a05beb85de3c" (UID: "4280cc0e-ca6a-47d7-be4d-a05beb85de3c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.425904 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bb7ffc3-501c-420f-834c-0509b4a509eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2bb7ffc3-501c-420f-834c-0509b4a509eb" (UID: "2bb7ffc3-501c-420f-834c-0509b4a509eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.426572 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4280cc0e-ca6a-47d7-be4d-a05beb85de3c" (UID: "4280cc0e-ca6a-47d7-be4d-a05beb85de3c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: E1121 10:07:42.443461 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" containerID="463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 21 10:07:42 crc kubenswrapper[4972]: E1121 10:07:42.446243 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" containerID="463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 21 10:07:42 crc kubenswrapper[4972]: E1121 10:07:42.446687 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" containerID="463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 21 10:07:42 crc kubenswrapper[4972]: E1121 10:07:42.446719 4972 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-4z7b5" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovsdb-server" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.452103 4972 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 21 10:07:42 crc kubenswrapper[4972]: E1121 10:07:42.453333 4972 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbaa8ec7_5499_43d1_ac80_dd8708d28643.slice/crio-conmon-cfd792eb202fbf7b53ee8748aadb575b4d7545be47e73ef984e2cbe95e0adcce.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1934e8d3_ef66_4d0e_8d12_bd958545270a.slice/crio-b4cd6783c1c066e41ca01043747c17250cebc9cc0aed250c754bd49748a690ad.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf69d7d80_dc29_4483_917c_c25921b56e9c.slice/crio-8926429e6adc873b354d1ee81d691befb13ab0ab948a58cf412a0056811cc98c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbaa8ec7_5499_43d1_ac80_dd8708d28643.slice/crio-cfd792eb202fbf7b53ee8748aadb575b4d7545be47e73ef984e2cbe95e0adcce.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1934e8d3_ef66_4d0e_8d12_bd958545270a.slice/crio-conmon-b4cd6783c1c066e41ca01043747c17250cebc9cc0aed250c754bd49748a690ad.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod272d9c39_ab5b_4fc1_8dbe_209fbe33e293.slice/crio-3fe0c9bf4632a5a91bbedb92ac2a74a5be61932a64fbf9dec4c9fe6b9c892be9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf69d7d80_dc29_4483_917c_c25921b56e9c.slice/crio-conmon-8926429e6adc873b354d1ee81d691befb13ab0ab948a58cf412a0056811cc98c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbefdbf4d_7d20_40ca_9985_8309a0295dad.slice/crio-a15be01405eca32ee7f4728571d79fcbd0503e5e431bd39069cf9acff52fbb75.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod272d9c39_ab5b_4fc1_8dbe_209fbe33e293.slice/crio-conmon-3fe0c9bf4632a5a91bbedb92ac2a74a5be61932a64fbf9dec4c9fe6b9c892be9.scope\": RecentStats: unable to find data in memory cache]" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.453682 4972 scope.go:117] "RemoveContainer" containerID="0eb74e778f9330e95160a9380c73ad009f10cca6eb82633cae811a9a159e0d84" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.459997 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.460018 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb7ffc3-501c-420f-834c-0509b4a509eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.460027 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44805331-e34b-4455-a744-4c8fe27a1b9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.460036 4972 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2bb7ffc3-501c-420f-834c-0509b4a509eb-ovn-rundir\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.460050 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj4pg\" (UniqueName: \"kubernetes.io/projected/2bb7ffc3-501c-420f-834c-0509b4a509eb-kube-api-access-bj4pg\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.460059 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bb7ffc3-501c-420f-834c-0509b4a509eb-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.460067 4972 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.460075 4972 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2bb7ffc3-501c-420f-834c-0509b4a509eb-ovs-rundir\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.460083 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 
crc kubenswrapper[4972]: E1121 10:07:42.464763 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 21 10:07:42 crc kubenswrapper[4972]: E1121 10:07:42.464861 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="528382eac18bd0308541931e46106ebb14493b19c4b625ee808c7a117ef52180" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.468094 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88c81504-7f14-498f-bd8d-4fa74aebf2d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88c81504-7f14-498f-bd8d-4fa74aebf2d2" (UID: "88c81504-7f14-498f-bd8d-4fa74aebf2d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: E1121 10:07:42.471148 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 21 10:07:42 crc kubenswrapper[4972]: E1121 10:07:42.471522 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="528382eac18bd0308541931e46106ebb14493b19c4b625ee808c7a117ef52180" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 10:07:42 crc kubenswrapper[4972]: E1121 10:07:42.476990 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="528382eac18bd0308541931e46106ebb14493b19c4b625ee808c7a117ef52180" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 10:07:42 crc kubenswrapper[4972]: E1121 10:07:42.477033 4972 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3c3ae47e-fcf5-4397-a2a4-8e847e542d75" containerName="nova-scheduler-scheduler" Nov 21 10:07:42 crc kubenswrapper[4972]: E1121 10:07:42.480434 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 21 10:07:42 crc kubenswrapper[4972]: E1121 10:07:42.480485 4972 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-4z7b5" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" 
containerName="ovs-vswitchd" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.488141 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell05330-account-delete-cxgw8"] Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.500273 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-config" (OuterVolumeSpecName: "config") pod "4280cc0e-ca6a-47d7-be4d-a05beb85de3c" (UID: "4280cc0e-ca6a-47d7-be4d-a05beb85de3c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.508494 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44805331-e34b-4455-a744-4c8fe27a1b9e-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "44805331-e34b-4455-a744-4c8fe27a1b9e" (UID: "44805331-e34b-4455-a744-4c8fe27a1b9e"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.523349 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican958e-account-delete-tfwsx"] Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.526552 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4280cc0e-ca6a-47d7-be4d-a05beb85de3c" (UID: "4280cc0e-ca6a-47d7-be4d-a05beb85de3c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.543323 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88c81504-7f14-498f-bd8d-4fa74aebf2d2-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "88c81504-7f14-498f-bd8d-4fa74aebf2d2" (UID: "88c81504-7f14-498f-bd8d-4fa74aebf2d2"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.548018 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement66d8-account-delete-947ct"] Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.554855 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44805331-e34b-4455-a744-4c8fe27a1b9e-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "44805331-e34b-4455-a744-4c8fe27a1b9e" (UID: "44805331-e34b-4455-a744-4c8fe27a1b9e"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: E1121 10:07:42.555729 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c8b3c0c3518327c81d51434735d0ba0511f266e78981d13077248c64dbb2a4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 21 10:07:42 crc kubenswrapper[4972]: E1121 10:07:42.558509 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c8b3c0c3518327c81d51434735d0ba0511f266e78981d13077248c64dbb2a4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.559786 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bb7ffc3-501c-420f-834c-0509b4a509eb-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "2bb7ffc3-501c-420f-834c-0509b4a509eb" (UID: "2bb7ffc3-501c-420f-834c-0509b4a509eb"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: E1121 10:07:42.561876 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c8b3c0c3518327c81d51434735d0ba0511f266e78981d13077248c64dbb2a4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 21 10:07:42 crc kubenswrapper[4972]: E1121 10:07:42.561942 4972 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8" containerName="nova-cell1-conductor-conductor" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.562284 4972 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/44805331-e34b-4455-a744-4c8fe27a1b9e-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.562305 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c81504-7f14-498f-bd8d-4fa74aebf2d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.562315 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.562324 4972 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.562333 4972 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bb7ffc3-501c-420f-834c-0509b4a509eb-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.562341 4972 reconciler_common.go:293] "Volume 
detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44805331-e34b-4455-a744-4c8fe27a1b9e-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.562349 4972 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/88c81504-7f14-498f-bd8d-4fa74aebf2d2-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.562927 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutrondbea-account-delete-9c96n"] Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.571198 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder04d8-account-delete-kvgwd"] Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.613175 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4280cc0e-ca6a-47d7-be4d-a05beb85de3c" (UID: "4280cc0e-ca6a-47d7-be4d-a05beb85de3c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.640979 4972 scope.go:117] "RemoveContainer" containerID="7a52d5a35cd478c028a14322544e0cedd59d5fc637ca22a8848442a143badc31" Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.668657 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4280cc0e-ca6a-47d7-be4d-a05beb85de3c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:42 crc kubenswrapper[4972]: W1121 10:07:42.670946 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97ccfb34_fe6c_4529_812a_af30eb178e8b.slice/crio-a4b91b9a8dff31e90dbf88057e35dc59bd92131bc79c918e7fdff4073b1d4ad1 WatchSource:0}: Error finding container a4b91b9a8dff31e90dbf88057e35dc59bd92131bc79c918e7fdff4073b1d4ad1: Status 404 returned error can't find the container with id a4b91b9a8dff31e90dbf88057e35dc59bd92131bc79c918e7fdff4073b1d4ad1 Nov 21 10:07:42 crc kubenswrapper[4972]: I1121 10:07:42.712267 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi501f-account-delete-4746z"] Nov 21 10:07:42 crc kubenswrapper[4972]: W1121 10:07:42.743035 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd442c75_9e94_4f54_81b6_68c19f4de9d8.slice/crio-671752f103ef4d1c64fe88269b466e37db7bc52d845fcf07142422a63d6c348d WatchSource:0}: Error finding container 671752f103ef4d1c64fe88269b466e37db7bc52d845fcf07142422a63d6c348d: Status 404 returned error can't find the container with id 671752f103ef4d1c64fe88269b466e37db7bc52d845fcf07142422a63d6c348d Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.013570 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.025661 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.042476 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.046701 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.177710 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/befdbf4d-7d20-40ca-9985-8309a0295dad-etc-machine-id\") pod \"befdbf4d-7d20-40ca-9985-8309a0295dad\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.177797 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f69d7d80-dc29-4483-917c-c25921b56e9c-run-httpd\") pod \"f69d7d80-dc29-4483-917c-c25921b56e9c\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.177801 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/befdbf4d-7d20-40ca-9985-8309a0295dad-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "befdbf4d-7d20-40ca-9985-8309a0295dad" (UID: "befdbf4d-7d20-40ca-9985-8309a0295dad"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.177862 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-internal-tls-certs\") pod \"f69d7d80-dc29-4483-917c-c25921b56e9c\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.177877 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f69d7d80-dc29-4483-917c-c25921b56e9c-log-httpd\") pod \"f69d7d80-dc29-4483-917c-c25921b56e9c\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.177914 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f69d7d80-dc29-4483-917c-c25921b56e9c-etc-swift\") pod \"f69d7d80-dc29-4483-917c-c25921b56e9c\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.177938 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-combined-ca-bundle\") pod \"f69d7d80-dc29-4483-917c-c25921b56e9c\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.178008 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-config-data\") pod \"f69d7d80-dc29-4483-917c-c25921b56e9c\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.178059 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-config-data-custom\") pod \"befdbf4d-7d20-40ca-9985-8309a0295dad\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.178110 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-combined-ca-bundle\") pod \"befdbf4d-7d20-40ca-9985-8309a0295dad\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.178140 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-public-tls-certs\") pod \"f69d7d80-dc29-4483-917c-c25921b56e9c\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.178190 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-config-data\") pod \"befdbf4d-7d20-40ca-9985-8309a0295dad\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.178229 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-scripts\") pod \"befdbf4d-7d20-40ca-9985-8309a0295dad\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.178248 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnt7v\" (UniqueName: \"kubernetes.io/projected/befdbf4d-7d20-40ca-9985-8309a0295dad-kube-api-access-fnt7v\") pod \"befdbf4d-7d20-40ca-9985-8309a0295dad\" (UID: \"befdbf4d-7d20-40ca-9985-8309a0295dad\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.178318 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdlww\" (UniqueName: \"kubernetes.io/projected/f69d7d80-dc29-4483-917c-c25921b56e9c-kube-api-access-bdlww\") pod \"f69d7d80-dc29-4483-917c-c25921b56e9c\" (UID: \"f69d7d80-dc29-4483-917c-c25921b56e9c\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.178566 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f69d7d80-dc29-4483-917c-c25921b56e9c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f69d7d80-dc29-4483-917c-c25921b56e9c" (UID: "f69d7d80-dc29-4483-917c-c25921b56e9c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.178651 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f69d7d80-dc29-4483-917c-c25921b56e9c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f69d7d80-dc29-4483-917c-c25921b56e9c" (UID: "f69d7d80-dc29-4483-917c-c25921b56e9c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.178799 4972 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/befdbf4d-7d20-40ca-9985-8309a0295dad-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.178813 4972 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f69d7d80-dc29-4483-917c-c25921b56e9c-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.178849 4972 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f69d7d80-dc29-4483-917c-c25921b56e9c-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: E1121 10:07:43.180306 4972 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 21 10:07:43 crc kubenswrapper[4972]: E1121 10:07:43.180378 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-config-data podName:2bc44abc-7710-432b-b503-fd54e3afeede nodeName:}" failed. No retries permitted until 2025-11-21 10:07:47.180358533 +0000 UTC m=+1612.289501091 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-config-data") pod "rabbitmq-cell1-server-0" (UID: "2bc44abc-7710-432b-b503-fd54e3afeede") : configmap "rabbitmq-cell1-config-data" not found Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.190656 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f69d7d80-dc29-4483-917c-c25921b56e9c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f69d7d80-dc29-4483-917c-c25921b56e9c" (UID: "f69d7d80-dc29-4483-917c-c25921b56e9c"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.208401 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/befdbf4d-7d20-40ca-9985-8309a0295dad-kube-api-access-fnt7v" (OuterVolumeSpecName: "kube-api-access-fnt7v") pod "befdbf4d-7d20-40ca-9985-8309a0295dad" (UID: "befdbf4d-7d20-40ca-9985-8309a0295dad"). InnerVolumeSpecName "kube-api-access-fnt7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.229023 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "befdbf4d-7d20-40ca-9985-8309a0295dad" (UID: "befdbf4d-7d20-40ca-9985-8309a0295dad"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.230965 4972 generic.go:334] "Generic (PLEG): container finished" podID="fc61b266-e156-4999-8ec7-8aa1f1988e42" containerID="a762824bd37bcd1f70426519763b522438780d31ef55d3f35b56ca5424e1e1ee" exitCode=0 Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.231041 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fc61b266-e156-4999-8ec7-8aa1f1988e42","Type":"ContainerDied","Data":"a762824bd37bcd1f70426519763b522438780d31ef55d3f35b56ca5424e1e1ee"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.238657 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-scripts" (OuterVolumeSpecName: "scripts") pod "befdbf4d-7d20-40ca-9985-8309a0295dad" (UID: "befdbf4d-7d20-40ca-9985-8309a0295dad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.240806 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f69d7d80-dc29-4483-917c-c25921b56e9c-kube-api-access-bdlww" (OuterVolumeSpecName: "kube-api-access-bdlww") pod "f69d7d80-dc29-4483-917c-c25921b56e9c" (UID: "f69d7d80-dc29-4483-917c-c25921b56e9c"). InnerVolumeSpecName "kube-api-access-bdlww". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.246469 4972 generic.go:334] "Generic (PLEG): container finished" podID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerID="463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" exitCode=0 Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.246543 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4z7b5" event={"ID":"5ea385c8-0af5-4759-acf1-ee6dee48e488","Type":"ContainerDied","Data":"463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.258269 4972 generic.go:334] "Generic (PLEG): container finished" podID="272d9c39-ab5b-4fc1-8dbe-209fbe33e293" containerID="3fe0c9bf4632a5a91bbedb92ac2a74a5be61932a64fbf9dec4c9fe6b9c892be9" exitCode=143 Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.258334 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-669568d65b-4t6gp" event={"ID":"272d9c39-ab5b-4fc1-8dbe-209fbe33e293","Type":"ContainerDied","Data":"3fe0c9bf4632a5a91bbedb92ac2a74a5be61932a64fbf9dec4c9fe6b9c892be9"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.268103 4972 generic.go:334] "Generic (PLEG): container finished" podID="f1224d0f-d488-49e6-b6dc-12a188b43a43" containerID="5c4178bb82f3320e37e3c09aa58e76bbdcf7d74bf35c0a1b2ed17a19ce71599a" exitCode=0 Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.268330 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glanced431-account-delete-5xwls" event={"ID":"f1224d0f-d488-49e6-b6dc-12a188b43a43","Type":"ContainerDied","Data":"5c4178bb82f3320e37e3c09aa58e76bbdcf7d74bf35c0a1b2ed17a19ce71599a"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.280287 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.280504 4972 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-fnt7v\" (UniqueName: \"kubernetes.io/projected/befdbf4d-7d20-40ca-9985-8309a0295dad-kube-api-access-fnt7v\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.280578 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdlww\" (UniqueName: \"kubernetes.io/projected/f69d7d80-dc29-4483-917c-c25921b56e9c-kube-api-access-bdlww\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.280632 4972 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f69d7d80-dc29-4483-917c-c25921b56e9c-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.280683 4972 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.284146 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-psvpd_2bb7ffc3-501c-420f-834c-0509b4a509eb/openstack-network-exporter/0.log" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.284318 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-psvpd" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.284961 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-psvpd" event={"ID":"2bb7ffc3-501c-420f-834c-0509b4a509eb","Type":"ContainerDied","Data":"cb48925dc46f82ba1a9ca834aa51628abaf1d740a523d607f82307edbfb32b1e"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.285074 4972 scope.go:117] "RemoveContainer" containerID="4789958ba38111e46a66c03e780a501dd267a5f7418fac71da4781d35d79c30d" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.294580 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.303547 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement66d8-account-delete-947ct" event={"ID":"ffb786ba-2a1a-4124-9ef7-116e12402f5c","Type":"ContainerStarted","Data":"89c470b27e0cc5b41977a92000bfd879ba71fb803bedd463313e38c96aae6946"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.305243 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell05330-account-delete-cxgw8" event={"ID":"4e11aef3-0c96-44ed-8876-7e54115d181f","Type":"ContainerStarted","Data":"4b985586c5b90053fad24e1a7d2f83cc16f8cf81056613b10f2e37e45f6b69bb"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.307586 4972 generic.go:334] "Generic (PLEG): container finished" podID="1934e8d3-ef66-4d0e-8d12-bd958545270a" containerID="b4cd6783c1c066e41ca01043747c17250cebc9cc0aed250c754bd49748a690ad" exitCode=143 Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.307682 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7fcd667fc5-5ctgv" event={"ID":"1934e8d3-ef66-4d0e-8d12-bd958545270a","Type":"ContainerDied","Data":"b4cd6783c1c066e41ca01043747c17250cebc9cc0aed250c754bd49748a690ad"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.314799 4972 generic.go:334] "Generic (PLEG): container finished" podID="befdbf4d-7d20-40ca-9985-8309a0295dad" containerID="317b0a027d1cd6e33d37ffe81bf487ffe11faf05f8530114c4462518fd06d92c" exitCode=0 Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.314823 4972 generic.go:334] "Generic (PLEG): container finished" podID="befdbf4d-7d20-40ca-9985-8309a0295dad" containerID="a15be01405eca32ee7f4728571d79fcbd0503e5e431bd39069cf9acff52fbb75" exitCode=0 Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.314889 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"befdbf4d-7d20-40ca-9985-8309a0295dad","Type":"ContainerDied","Data":"317b0a027d1cd6e33d37ffe81bf487ffe11faf05f8530114c4462518fd06d92c"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.314911 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"befdbf4d-7d20-40ca-9985-8309a0295dad","Type":"ContainerDied","Data":"a15be01405eca32ee7f4728571d79fcbd0503e5e431bd39069cf9acff52fbb75"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.314922 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"befdbf4d-7d20-40ca-9985-8309a0295dad","Type":"ContainerDied","Data":"9b6463b15ecb9b8779ca71cddabf1926d3ac4ddaaf5846829a356fc2d428be5e"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.314977 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.323775 4972 generic.go:334] "Generic (PLEG): container finished" podID="f69d7d80-dc29-4483-917c-c25921b56e9c" containerID="b2643e338f0bda35e4096054b2e9e135f60dcfd58f21d3cfab34ec25fee2e932" exitCode=0 Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.323816 4972 generic.go:334] "Generic (PLEG): container finished" podID="f69d7d80-dc29-4483-917c-c25921b56e9c" containerID="8926429e6adc873b354d1ee81d691befb13ab0ab948a58cf412a0056811cc98c" exitCode=0 Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.323880 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-748ccf64d9-7vqzf" event={"ID":"f69d7d80-dc29-4483-917c-c25921b56e9c","Type":"ContainerDied","Data":"b2643e338f0bda35e4096054b2e9e135f60dcfd58f21d3cfab34ec25fee2e932"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.323908 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-748ccf64d9-7vqzf" event={"ID":"f69d7d80-dc29-4483-917c-c25921b56e9c","Type":"ContainerDied","Data":"8926429e6adc873b354d1ee81d691befb13ab0ab948a58cf412a0056811cc98c"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.323919 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-748ccf64d9-7vqzf" event={"ID":"f69d7d80-dc29-4483-917c-c25921b56e9c","Type":"ContainerDied","Data":"aa0614f91fc32679c8f0eada54c29c340b17b21d5a6f8cace15db38baa847d5a"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.323997 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-748ccf64d9-7vqzf" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.326761 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi501f-account-delete-4746z" event={"ID":"dd442c75-9e94-4f54-81b6-68c19f4de9d8","Type":"ContainerStarted","Data":"671752f103ef4d1c64fe88269b466e37db7bc52d845fcf07142422a63d6c348d"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.340423 4972 generic.go:334] "Generic (PLEG): container finished" podID="fbaa8ec7-5499-43d1-ac80-dd8708d28643" containerID="cfd792eb202fbf7b53ee8748aadb575b4d7545be47e73ef984e2cbe95e0adcce" exitCode=143 Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.340488 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" event={"ID":"fbaa8ec7-5499-43d1-ac80-dd8708d28643","Type":"ContainerDied","Data":"cfd792eb202fbf7b53ee8748aadb575b4d7545be47e73ef984e2cbe95e0adcce"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.353318 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutrondbea-account-delete-9c96n" event={"ID":"97ccfb34-fe6c-4529-812a-af30eb178e8b","Type":"ContainerStarted","Data":"a4b91b9a8dff31e90dbf88057e35dc59bd92131bc79c918e7fdff4073b1d4ad1"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.361460 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-psvpd"] Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.368309 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder04d8-account-delete-kvgwd" event={"ID":"bf21c86e-7747-4dca-a870-352dfa214beb","Type":"ContainerStarted","Data":"dc7b7f17386e24bb82f37a67e5306778129c25da39931199e4781212f6376366"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.372704 4972 scope.go:117] "RemoveContainer" 
containerID="30837ba80ae724788ccc279d47025486e335f00269e93508eaa7eabb78466914" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.380641 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-psvpd"] Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.380803 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8ed54a06-08b9-41a2-92d9-a745631e053c","Type":"ContainerDied","Data":"a55fa1434e8f1c900b3bebdfafac5e43b0fb9083af7325dfb76ac2940d2d38b2"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.380641 4972 generic.go:334] "Generic (PLEG): container finished" podID="8ed54a06-08b9-41a2-92d9-a745631e053c" containerID="a55fa1434e8f1c900b3bebdfafac5e43b0fb9083af7325dfb76ac2940d2d38b2" exitCode=0 Nov 21 10:07:43 crc kubenswrapper[4972]: E1121 10:07:43.381973 4972 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 21 10:07:43 crc kubenswrapper[4972]: E1121 10:07:43.382024 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-config-data podName:392b5094-f8ef-47b8-8dc5-9e1d2dbef612 nodeName:}" failed. No retries permitted until 2025-11-21 10:07:47.382010119 +0000 UTC m=+1612.491152607 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-config-data") pod "rabbitmq-server-0" (UID: "392b5094-f8ef-47b8-8dc5-9e1d2dbef612") : configmap "rabbitmq-config-data" not found Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.382410 4972 generic.go:334] "Generic (PLEG): container finished" podID="302b9e1c-affd-4f2f-bacd-98f40dedeb91" containerID="9663e5eeed349feea42f465ac185ce4a281832b49f5ad7e6676845ad1940d586" exitCode=143 Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.382509 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"302b9e1c-affd-4f2f-bacd-98f40dedeb91","Type":"ContainerDied","Data":"9663e5eeed349feea42f465ac185ce4a281832b49f5ad7e6676845ad1940d586"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.390525 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6868d89965-c8n5j" event={"ID":"4280cc0e-ca6a-47d7-be4d-a05beb85de3c","Type":"ContainerDied","Data":"0f0bda1adafc5a483955762bf39be4ca0275d63a1e72abe84bbcce32775bc4de"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.390671 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6868d89965-c8n5j" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.392466 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican958e-account-delete-tfwsx" event={"ID":"d8334e5f-f6cb-4c49-91d6-5e414ecc53f0","Type":"ContainerStarted","Data":"8bfb615f8412efc40930dc9b12e8aaaf48698b3bb0e725b1a96524dffe302dda"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.415043 4972 scope.go:117] "RemoveContainer" containerID="317b0a027d1cd6e33d37ffe81bf487ffe11faf05f8530114c4462518fd06d92c" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.439431 4972 generic.go:334] "Generic (PLEG): container finished" podID="31e140ab-a53a-4af2-864f-4c399d44f217" containerID="6d2997e2bf31afa38122b707eaffd973a10f37e15af3ef380d90f5a0e46e40a2" exitCode=0 Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.439717 4972 generic.go:334] "Generic (PLEG): container finished" podID="31e140ab-a53a-4af2-864f-4c399d44f217" containerID="4e7f746ee8e85533e7ed177d7195703edc2217f4d9450127a0eefddf988dd729" exitCode=0 Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.439726 4972 generic.go:334] "Generic (PLEG): container finished" podID="31e140ab-a53a-4af2-864f-4c399d44f217" containerID="ba577ff7853e877e687486121c6f0ab731e335150c782fdb6337e45da1ea7e56" exitCode=0 Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.439766 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerDied","Data":"6d2997e2bf31afa38122b707eaffd973a10f37e15af3ef380d90f5a0e46e40a2"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.439790 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerDied","Data":"4e7f746ee8e85533e7ed177d7195703edc2217f4d9450127a0eefddf988dd729"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.439800 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerDied","Data":"ba577ff7853e877e687486121c6f0ab731e335150c782fdb6337e45da1ea7e56"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.440595 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican958e-account-delete-tfwsx" podStartSLOduration=4.440570813 podStartE2EDuration="4.440570813s" podCreationTimestamp="2025-11-21 10:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 10:07:43.426414475 +0000 UTC m=+1608.535556973" watchObservedRunningTime="2025-11-21 10:07:43.440570813 +0000 UTC m=+1608.549713311" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.444038 4972 generic.go:334] "Generic (PLEG): container finished" podID="57f61d22-4b79-4f80-b7dc-0f5bea4b506d" containerID="278382caff383ae2485ecd6e804ee41e63dbe81738ffa66e0a8508b7e3a9f20e" exitCode=143 Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.444081 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57f61d22-4b79-4f80-b7dc-0f5bea4b506d","Type":"ContainerDied","Data":"278382caff383ae2485ecd6e804ee41e63dbe81738ffa66e0a8508b7e3a9f20e"} Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.479550 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6868d89965-c8n5j"] Nov 21 10:07:43 crc 
kubenswrapper[4972]: I1121 10:07:43.491591 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.500401 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.503425 4972 scope.go:117] "RemoveContainer" containerID="a15be01405eca32ee7f4728571d79fcbd0503e5e431bd39069cf9acff52fbb75" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.504056 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6868d89965-c8n5j"] Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.604904 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8ed54a06-08b9-41a2-92d9-a745631e053c-config-data-default\") pod \"8ed54a06-08b9-41a2-92d9-a745631e053c\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.605083 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8ed54a06-08b9-41a2-92d9-a745631e053c-kolla-config\") pod \"8ed54a06-08b9-41a2-92d9-a745631e053c\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.605131 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2sw8\" (UniqueName: \"kubernetes.io/projected/fc61b266-e156-4999-8ec7-8aa1f1988e42-kube-api-access-d2sw8\") pod \"fc61b266-e156-4999-8ec7-8aa1f1988e42\" (UID: \"fc61b266-e156-4999-8ec7-8aa1f1988e42\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.605238 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvt2p\" (UniqueName: \"kubernetes.io/projected/8ed54a06-08b9-41a2-92d9-a745631e053c-kube-api-access-zvt2p\") pod \"8ed54a06-08b9-41a2-92d9-a745631e053c\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.605290 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed54a06-08b9-41a2-92d9-a745631e053c-combined-ca-bundle\") pod \"8ed54a06-08b9-41a2-92d9-a745631e053c\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.605311 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-combined-ca-bundle\") pod \"fc61b266-e156-4999-8ec7-8aa1f1988e42\" (UID: \"fc61b266-e156-4999-8ec7-8aa1f1988e42\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.605383 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8ed54a06-08b9-41a2-92d9-a745631e053c-config-data-generated\") pod \"8ed54a06-08b9-41a2-92d9-a745631e053c\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.605414 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ed54a06-08b9-41a2-92d9-a745631e053c-operator-scripts\") pod \"8ed54a06-08b9-41a2-92d9-a745631e053c\" (UID: 
\"8ed54a06-08b9-41a2-92d9-a745631e053c\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.605467 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-nova-novncproxy-tls-certs\") pod \"fc61b266-e156-4999-8ec7-8aa1f1988e42\" (UID: \"fc61b266-e156-4999-8ec7-8aa1f1988e42\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.605492 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"8ed54a06-08b9-41a2-92d9-a745631e053c\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.605562 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-config-data\") pod \"fc61b266-e156-4999-8ec7-8aa1f1988e42\" (UID: \"fc61b266-e156-4999-8ec7-8aa1f1988e42\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.605631 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed54a06-08b9-41a2-92d9-a745631e053c-galera-tls-certs\") pod \"8ed54a06-08b9-41a2-92d9-a745631e053c\" (UID: \"8ed54a06-08b9-41a2-92d9-a745631e053c\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.605681 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-vencrypt-tls-certs\") pod \"fc61b266-e156-4999-8ec7-8aa1f1988e42\" (UID: \"fc61b266-e156-4999-8ec7-8aa1f1988e42\") " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.606371 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ed54a06-08b9-41a2-92d9-a745631e053c-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "8ed54a06-08b9-41a2-92d9-a745631e053c" (UID: "8ed54a06-08b9-41a2-92d9-a745631e053c"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.607257 4972 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8ed54a06-08b9-41a2-92d9-a745631e053c-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.608240 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ed54a06-08b9-41a2-92d9-a745631e053c-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "8ed54a06-08b9-41a2-92d9-a745631e053c" (UID: "8ed54a06-08b9-41a2-92d9-a745631e053c"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.608721 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ed54a06-08b9-41a2-92d9-a745631e053c-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "8ed54a06-08b9-41a2-92d9-a745631e053c" (UID: "8ed54a06-08b9-41a2-92d9-a745631e053c"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.613373 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ed54a06-08b9-41a2-92d9-a745631e053c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ed54a06-08b9-41a2-92d9-a745631e053c" (UID: "8ed54a06-08b9-41a2-92d9-a745631e053c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.615157 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ed54a06-08b9-41a2-92d9-a745631e053c-kube-api-access-zvt2p" (OuterVolumeSpecName: "kube-api-access-zvt2p") pod "8ed54a06-08b9-41a2-92d9-a745631e053c" (UID: "8ed54a06-08b9-41a2-92d9-a745631e053c"). InnerVolumeSpecName "kube-api-access-zvt2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.619009 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc61b266-e156-4999-8ec7-8aa1f1988e42-kube-api-access-d2sw8" (OuterVolumeSpecName: "kube-api-access-d2sw8") pod "fc61b266-e156-4999-8ec7-8aa1f1988e42" (UID: "fc61b266-e156-4999-8ec7-8aa1f1988e42"). InnerVolumeSpecName "kube-api-access-d2sw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.683665 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "mysql-db") pod "8ed54a06-08b9-41a2-92d9-a745631e053c" (UID: "8ed54a06-08b9-41a2-92d9-a745631e053c"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.686516 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f69d7d80-dc29-4483-917c-c25921b56e9c" (UID: "f69d7d80-dc29-4483-917c-c25921b56e9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.704989 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc61b266-e156-4999-8ec7-8aa1f1988e42" (UID: "fc61b266-e156-4999-8ec7-8aa1f1988e42"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.709240 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvt2p\" (UniqueName: \"kubernetes.io/projected/8ed54a06-08b9-41a2-92d9-a745631e053c-kube-api-access-zvt2p\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.709263 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.709273 4972 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8ed54a06-08b9-41a2-92d9-a745631e053c-config-data-generated\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.709281 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ed54a06-08b9-41a2-92d9-a745631e053c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.709310 4972 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.709321 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.709329 4972 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8ed54a06-08b9-41a2-92d9-a745631e053c-config-data-default\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.709338 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2sw8\" (UniqueName: \"kubernetes.io/projected/fc61b266-e156-4999-8ec7-8aa1f1988e42-kube-api-access-d2sw8\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.744011 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed54a06-08b9-41a2-92d9-a745631e053c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ed54a06-08b9-41a2-92d9-a745631e053c" (UID: "8ed54a06-08b9-41a2-92d9-a745631e053c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.759735 4972 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.760991 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f69d7d80-dc29-4483-917c-c25921b56e9c" (UID: "f69d7d80-dc29-4483-917c-c25921b56e9c"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.782805 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12c0ee36-eaf7-4101-80d5-6dfca43ebde7" path="/var/lib/kubelet/pods/12c0ee36-eaf7-4101-80d5-6dfca43ebde7/volumes" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.783363 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bb7ffc3-501c-420f-834c-0509b4a509eb" path="/var/lib/kubelet/pods/2bb7ffc3-501c-420f-834c-0509b4a509eb/volumes" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.784062 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4280cc0e-ca6a-47d7-be4d-a05beb85de3c" path="/var/lib/kubelet/pods/4280cc0e-ca6a-47d7-be4d-a05beb85de3c/volumes" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.785599 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44805331-e34b-4455-a744-4c8fe27a1b9e" path="/var/lib/kubelet/pods/44805331-e34b-4455-a744-4c8fe27a1b9e/volumes" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.798403 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "fc61b266-e156-4999-8ec7-8aa1f1988e42" (UID: "fc61b266-e156-4999-8ec7-8aa1f1988e42"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.799130 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="448f2d01-d067-4694-9545-6771e700e52b" path="/var/lib/kubelet/pods/448f2d01-d067-4694-9545-6771e700e52b/volumes" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.799867 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86c343dd-b3e5-4822-b5c0-f12c8a7530bb" path="/var/lib/kubelet/pods/86c343dd-b3e5-4822-b5c0-f12c8a7530bb/volumes" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.801863 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-config-data" (OuterVolumeSpecName: "config-data") pod "fc61b266-e156-4999-8ec7-8aa1f1988e42" (UID: "fc61b266-e156-4999-8ec7-8aa1f1988e42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.803687 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-config-data" (OuterVolumeSpecName: "config-data") pod "f69d7d80-dc29-4483-917c-c25921b56e9c" (UID: "f69d7d80-dc29-4483-917c-c25921b56e9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.808475 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "befdbf4d-7d20-40ca-9985-8309a0295dad" (UID: "befdbf4d-7d20-40ca-9985-8309a0295dad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.809283 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88c81504-7f14-498f-bd8d-4fa74aebf2d2" path="/var/lib/kubelet/pods/88c81504-7f14-498f-bd8d-4fa74aebf2d2/volumes" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.809742 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4543ed6-f2db-43da-8e2e-a63720b6cf67" path="/var/lib/kubelet/pods/b4543ed6-f2db-43da-8e2e-a63720b6cf67/volumes" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.811308 4972 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.811336 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.811347 4972 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.811356 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.811366 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed54a06-08b9-41a2-92d9-a745631e053c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.811375 4972 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.811384 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.821116 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9c438ca-0f93-434d-81ea-29ae82b217bf" path="/var/lib/kubelet/pods/c9c438ca-0f93-434d-81ea-29ae82b217bf/volumes" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.821914 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f69d7d80-dc29-4483-917c-c25921b56e9c" (UID: "f69d7d80-dc29-4483-917c-c25921b56e9c"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.834428 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="392b5094-f8ef-47b8-8dc5-9e1d2dbef612" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.843391 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed54a06-08b9-41a2-92d9-a745631e053c-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "8ed54a06-08b9-41a2-92d9-a745631e053c" (UID: "8ed54a06-08b9-41a2-92d9-a745631e053c"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.845579 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "fc61b266-e156-4999-8ec7-8aa1f1988e42" (UID: "fc61b266-e156-4999-8ec7-8aa1f1988e42"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.912607 4972 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc61b266-e156-4999-8ec7-8aa1f1988e42-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.912637 4972 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed54a06-08b9-41a2-92d9-a745631e053c-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.912647 4972 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f69d7d80-dc29-4483-917c-c25921b56e9c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.916517 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-config-data" (OuterVolumeSpecName: "config-data") pod "befdbf4d-7d20-40ca-9985-8309a0295dad" (UID: "befdbf4d-7d20-40ca-9985-8309a0295dad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:43 crc kubenswrapper[4972]: I1121 10:07:43.920061 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="2bc44abc-7710-432b-b503-fd54e3afeede" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.014974 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befdbf4d-7d20-40ca-9985-8309a0295dad-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.132601 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-79f8cf4757-8cflk" podUID="56aac81e-b855-4419-b8a5-8f1fc099b5e6" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9696/\": dial tcp 10.217.0.152:9696: connect: connection refused" Nov 21 10:07:44 crc kubenswrapper[4972]: E1121 10:07:44.160454 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c8508fa7978ab01e9ed41259f44d3ae46c68a33ace18516f62967a60ede2d29a is running failed: container process not found" containerID="c8508fa7978ab01e9ed41259f44d3ae46c68a33ace18516f62967a60ede2d29a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 21 10:07:44 crc kubenswrapper[4972]: E1121 10:07:44.160742 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c8508fa7978ab01e9ed41259f44d3ae46c68a33ace18516f62967a60ede2d29a is running failed: container process not found" containerID="c8508fa7978ab01e9ed41259f44d3ae46c68a33ace18516f62967a60ede2d29a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 21 10:07:44 crc kubenswrapper[4972]: E1121 10:07:44.160963 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c8508fa7978ab01e9ed41259f44d3ae46c68a33ace18516f62967a60ede2d29a is running failed: container process not found" containerID="c8508fa7978ab01e9ed41259f44d3ae46c68a33ace18516f62967a60ede2d29a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 21 10:07:44 crc kubenswrapper[4972]: E1121 10:07:44.160991 4972 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c8508fa7978ab01e9ed41259f44d3ae46c68a33ace18516f62967a60ede2d29a is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="5eb9b3f4-6710-4818-b94c-494958fe31ad" containerName="nova-cell0-conductor-conductor" Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.273282 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.273321 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.273333 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.273505 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="481cc370-a05a-4516-99f2-f94a0056a70e" containerName="memcached" 
containerID="cri-o://b9e626a1ff970124d27149621ed868093f660ff094797fea99807c66272dc9d2" gracePeriod=30 Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.276398 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f85a9950-7e9d-4e16-9b35-d6912bacadf9" containerName="ceilometer-central-agent" containerID="cri-o://102434648feec12743eab24b7f83c96674ffe057bcf52540be9617ee45d178a0" gracePeriod=30 Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.276576 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="71ed1f19-43e6-4245-82c1-f51b5f18d1e6" containerName="kube-state-metrics" containerID="cri-o://68b9d9fc5eb11275b79950cd330ab9d42c03e228546a72b6835bf9e0589b651b" gracePeriod=30 Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.277580 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f85a9950-7e9d-4e16-9b35-d6912bacadf9" containerName="proxy-httpd" containerID="cri-o://5408c1bb0f0589423c4f14602bd190494f5ead1bdc142f1e32e478d7ceca9219" gracePeriod=30 Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.277629 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f85a9950-7e9d-4e16-9b35-d6912bacadf9" containerName="sg-core" containerID="cri-o://286335ef286bcd6ad23d79c21ed873d352f5bcd061facf6a215c8094c81d6976" gracePeriod=30 Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.277663 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f85a9950-7e9d-4e16-9b35-d6912bacadf9" containerName="ceilometer-notification-agent" containerID="cri-o://0de233b9671d8c6438cab670dcbf6fed2c7c33040cc7b1af5fe0bdb4e3dd967b" gracePeriod=30 Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.282010 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-twfsm"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.304227 4972 scope.go:117] "RemoveContainer" containerID="317b0a027d1cd6e33d37ffe81bf487ffe11faf05f8530114c4462518fd06d92c" Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.304361 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-twfsm"] Nov 21 10:07:44 crc kubenswrapper[4972]: E1121 10:07:44.305351 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"317b0a027d1cd6e33d37ffe81bf487ffe11faf05f8530114c4462518fd06d92c\": container with ID starting with 317b0a027d1cd6e33d37ffe81bf487ffe11faf05f8530114c4462518fd06d92c not found: ID does not exist" containerID="317b0a027d1cd6e33d37ffe81bf487ffe11faf05f8530114c4462518fd06d92c" Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.305376 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"317b0a027d1cd6e33d37ffe81bf487ffe11faf05f8530114c4462518fd06d92c"} err="failed to get container status \"317b0a027d1cd6e33d37ffe81bf487ffe11faf05f8530114c4462518fd06d92c\": rpc error: code = NotFound desc = could not find container \"317b0a027d1cd6e33d37ffe81bf487ffe11faf05f8530114c4462518fd06d92c\": container with ID starting with 317b0a027d1cd6e33d37ffe81bf487ffe11faf05f8530114c4462518fd06d92c not found: ID does not exist" Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.305415 4972 scope.go:117] "RemoveContainer" 
containerID="a15be01405eca32ee7f4728571d79fcbd0503e5e431bd39069cf9acff52fbb75" Nov 21 10:07:44 crc kubenswrapper[4972]: E1121 10:07:44.307672 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a15be01405eca32ee7f4728571d79fcbd0503e5e431bd39069cf9acff52fbb75\": container with ID starting with a15be01405eca32ee7f4728571d79fcbd0503e5e431bd39069cf9acff52fbb75 not found: ID does not exist" containerID="a15be01405eca32ee7f4728571d79fcbd0503e5e431bd39069cf9acff52fbb75" Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.307724 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a15be01405eca32ee7f4728571d79fcbd0503e5e431bd39069cf9acff52fbb75"} err="failed to get container status \"a15be01405eca32ee7f4728571d79fcbd0503e5e431bd39069cf9acff52fbb75\": rpc error: code = NotFound desc = could not find container \"a15be01405eca32ee7f4728571d79fcbd0503e5e431bd39069cf9acff52fbb75\": container with ID starting with a15be01405eca32ee7f4728571d79fcbd0503e5e431bd39069cf9acff52fbb75 not found: ID does not exist" Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.307751 4972 scope.go:117] "RemoveContainer" containerID="317b0a027d1cd6e33d37ffe81bf487ffe11faf05f8530114c4462518fd06d92c" Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.320156 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"317b0a027d1cd6e33d37ffe81bf487ffe11faf05f8530114c4462518fd06d92c"} err="failed to get container status \"317b0a027d1cd6e33d37ffe81bf487ffe11faf05f8530114c4462518fd06d92c\": rpc error: code = NotFound desc = could not find container \"317b0a027d1cd6e33d37ffe81bf487ffe11faf05f8530114c4462518fd06d92c\": container with ID starting with 317b0a027d1cd6e33d37ffe81bf487ffe11faf05f8530114c4462518fd06d92c not found: ID does not exist" Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.320201 4972 scope.go:117] "RemoveContainer" containerID="a15be01405eca32ee7f4728571d79fcbd0503e5e431bd39069cf9acff52fbb75" Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.321751 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a15be01405eca32ee7f4728571d79fcbd0503e5e431bd39069cf9acff52fbb75"} err="failed to get container status \"a15be01405eca32ee7f4728571d79fcbd0503e5e431bd39069cf9acff52fbb75\": rpc error: code = NotFound desc = could not find container \"a15be01405eca32ee7f4728571d79fcbd0503e5e431bd39069cf9acff52fbb75\": container with ID starting with a15be01405eca32ee7f4728571d79fcbd0503e5e431bd39069cf9acff52fbb75 not found: ID does not exist" Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.321766 4972 scope.go:117] "RemoveContainer" containerID="b2643e338f0bda35e4096054b2e9e135f60dcfd58f21d3cfab34ec25fee2e932" Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.360117 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nwp8l"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.367952 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nwp8l"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.406596 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.411643 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db44576f7-2qgwb"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 
10:07:44.411870 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-db44576f7-2qgwb" podUID="fe028de3-cf0f-4ab0-ab52-0898bd408c89" containerName="keystone-api" containerID="cri-o://fd986699756945449cded494e1de01714d7ff00650c9222a9828300ebf637188" gracePeriod=30 Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.418420 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-qth84"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.427936 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-qth84"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.436871 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-894c-account-create-jgvz7"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.447029 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-894c-account-create-jgvz7"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.487395 4972 generic.go:334] "Generic (PLEG): container finished" podID="bf21c86e-7747-4dca-a870-352dfa214beb" containerID="07786baae1ddf77982c2d5f450f534ebeb2e7cc9884d66f899ba3263e65f0ad8" exitCode=0 Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.487491 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder04d8-account-delete-kvgwd" event={"ID":"bf21c86e-7747-4dca-a870-352dfa214beb","Type":"ContainerDied","Data":"07786baae1ddf77982c2d5f450f534ebeb2e7cc9884d66f899ba3263e65f0ad8"} Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.502916 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fc61b266-e156-4999-8ec7-8aa1f1988e42","Type":"ContainerDied","Data":"7b5170e3ec4c2fa4c67d3a01f27042beb630cfe9d9a67e94747993750fc33333"} Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.503014 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.543766 4972 generic.go:334] "Generic (PLEG): container finished" podID="dd442c75-9e94-4f54-81b6-68c19f4de9d8" containerID="89ccc751e52d22869faea0021e71a2a785e988e4b549c89fec6ec8009f3c77b5" exitCode=0 Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.543875 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi501f-account-delete-4746z" event={"ID":"dd442c75-9e94-4f54-81b6-68c19f4de9d8","Type":"ContainerDied","Data":"89ccc751e52d22869faea0021e71a2a785e988e4b549c89fec6ec8009f3c77b5"} Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.546741 4972 generic.go:334] "Generic (PLEG): container finished" podID="d8334e5f-f6cb-4c49-91d6-5e414ecc53f0" containerID="afd0454d2d22fbe7e1b75217dcf7c4a0117d4b1c86410c4d152284906a742754" exitCode=0 Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.546804 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican958e-account-delete-tfwsx" event={"ID":"d8334e5f-f6cb-4c49-91d6-5e414ecc53f0","Type":"ContainerDied","Data":"afd0454d2d22fbe7e1b75217dcf7c4a0117d4b1c86410c4d152284906a742754"} Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.552124 4972 generic.go:334] "Generic (PLEG): container finished" podID="97ccfb34-fe6c-4529-812a-af30eb178e8b" containerID="018ecfda8736af92fbfa98308446bcbcff66c1d5bd21c2f05351dc0453b58305" exitCode=0 Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.552194 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutrondbea-account-delete-9c96n" event={"ID":"97ccfb34-fe6c-4529-812a-af30eb178e8b","Type":"ContainerDied","Data":"018ecfda8736af92fbfa98308446bcbcff66c1d5bd21c2f05351dc0453b58305"} Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.641059 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-gw4hh"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.681486 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-gw4hh"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.681891 4972 generic.go:334] "Generic (PLEG): container finished" podID="f85a9950-7e9d-4e16-9b35-d6912bacadf9" containerID="286335ef286bcd6ad23d79c21ed873d352f5bcd061facf6a215c8094c81d6976" exitCode=2 Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.681944 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85a9950-7e9d-4e16-9b35-d6912bacadf9","Type":"ContainerDied","Data":"286335ef286bcd6ad23d79c21ed873d352f5bcd061facf6a215c8094c81d6976"} Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.686618 4972 generic.go:334] "Generic (PLEG): container finished" podID="4e11aef3-0c96-44ed-8876-7e54115d181f" containerID="67e8a5e3395fadd7aeb273e85f7b2a4f78e8f4c78e7fb3ffbe5561d7becea437" exitCode=0 Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.686698 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell05330-account-delete-cxgw8" event={"ID":"4e11aef3-0c96-44ed-8876-7e54115d181f","Type":"ContainerDied","Data":"67e8a5e3395fadd7aeb273e85f7b2a4f78e8f4c78e7fb3ffbe5561d7becea437"} Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.705116 4972 generic.go:334] "Generic (PLEG): container finished" podID="5eb9b3f4-6710-4818-b94c-494958fe31ad" containerID="c8508fa7978ab01e9ed41259f44d3ae46c68a33ace18516f62967a60ede2d29a" exitCode=0 Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.705222 4972 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5eb9b3f4-6710-4818-b94c-494958fe31ad","Type":"ContainerDied","Data":"c8508fa7978ab01e9ed41259f44d3ae46c68a33ace18516f62967a60ede2d29a"} Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.705248 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5eb9b3f4-6710-4818-b94c-494958fe31ad","Type":"ContainerDied","Data":"06d36ccb42c26305d47f1a332d6c36c70d9d5d3384c9db9b4e9d5122cd135302"} Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.705259 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06d36ccb42c26305d47f1a332d6c36c70d9d5d3384c9db9b4e9d5122cd135302" Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.733682 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d431-account-create-q5hbw"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.765629 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-d431-account-create-q5hbw"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.766080 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glanced431-account-delete-5xwls"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.766177 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-92cbp"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.776363 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-92cbp"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.776659 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8ed54a06-08b9-41a2-92d9-a745631e053c","Type":"ContainerDied","Data":"96a0bf3e8d149c7ac801bfa3b8f34dcfb968f9f80764a5fead1c97548120d9d6"} Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.777626 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.780051 4972 generic.go:334] "Generic (PLEG): container finished" podID="71ed1f19-43e6-4245-82c1-f51b5f18d1e6" containerID="68b9d9fc5eb11275b79950cd330ab9d42c03e228546a72b6835bf9e0589b651b" exitCode=2 Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.780115 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"71ed1f19-43e6-4245-82c1-f51b5f18d1e6","Type":"ContainerDied","Data":"68b9d9fc5eb11275b79950cd330ab9d42c03e228546a72b6835bf9e0589b651b"} Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.812178 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dbea-account-create-5qfpb"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.831522 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutrondbea-account-delete-9c96n"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.835077 4972 generic.go:334] "Generic (PLEG): container finished" podID="ffb786ba-2a1a-4124-9ef7-116e12402f5c" containerID="113314d71f8dd77620c3845233c4a215fb34baafe37ddbba0898cb7f503dba83" exitCode=0 Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.837296 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement66d8-account-delete-947ct" event={"ID":"ffb786ba-2a1a-4124-9ef7-116e12402f5c","Type":"ContainerDied","Data":"113314d71f8dd77620c3845233c4a215fb34baafe37ddbba0898cb7f503dba83"} Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.845775 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-dbea-account-create-5qfpb"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.852124 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-ndz5q"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.862514 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-ndz5q"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.868717 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement66d8-account-delete-947ct"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.871503 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="8027f46e-1fe2-46ad-9226-11b2cc3f8da6" containerName="galera" containerID="cri-o://2184a31d34063d8ee8c51f71676340442da843ea99dcf47ca9042791a8af2bae" gracePeriod=30 Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.877387 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-66d8-account-create-hnrrl"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.889508 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-66d8-account-create-hnrrl"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.966879 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-52nn8"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.979483 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-52nn8"] Nov 21 10:07:44 crc kubenswrapper[4972]: I1121 10:07:44.989040 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder04d8-account-delete-kvgwd"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.021482 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-04d8-account-create-jcdpc"] Nov 21 10:07:45 crc 
kubenswrapper[4972]: I1121 10:07:45.026615 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-04d8-account-create-jcdpc"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.026632 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="dc57ffef-2527-4b16-b281-9139b6a0f1a1" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.156:8776/healthcheck\": read tcp 10.217.0.2:58922->10.217.0.156:8776: read: connection reset by peer" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.058123 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-98ww4"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.064364 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-98ww4"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.074406 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-958e-account-create-b6xqx"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.078953 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican958e-account-delete-tfwsx"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.083558 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-958e-account-create-b6xqx"] Nov 21 10:07:45 crc kubenswrapper[4972]: E1121 10:07:45.164474 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2184a31d34063d8ee8c51f71676340442da843ea99dcf47ca9042791a8af2bae" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 21 10:07:45 crc kubenswrapper[4972]: E1121 10:07:45.168611 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2184a31d34063d8ee8c51f71676340442da843ea99dcf47ca9042791a8af2bae" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 21 10:07:45 crc kubenswrapper[4972]: E1121 10:07:45.170059 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2184a31d34063d8ee8c51f71676340442da843ea99dcf47ca9042791a8af2bae" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 21 10:07:45 crc kubenswrapper[4972]: E1121 10:07:45.170088 4972 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="8027f46e-1fe2-46ad-9226-11b2cc3f8da6" containerName="galera" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.233585 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-5jd8b"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.297025 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-5jd8b"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.303808 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.319020 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-669568d65b-4t6gp" podUID="272d9c39-ab5b-4fc1-8dbe-209fbe33e293" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.147:9311/healthcheck\": read tcp 10.217.0.2:60386->10.217.0.147:9311: read: connection reset by peer" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.319362 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-669568d65b-4t6gp" podUID="272d9c39-ab5b-4fc1-8dbe-209fbe33e293" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.147:9311/healthcheck\": read tcp 10.217.0.2:60394->10.217.0.147:9311: read: connection reset by peer" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.333710 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5330-account-create-v8qd8"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.348311 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell05330-account-delete-cxgw8"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.348360 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5330-account-create-v8qd8"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.351647 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-5wd2j"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.357490 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-5wd2j"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.364570 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eb9b3f4-6710-4818-b94c-494958fe31ad-config-data\") pod \"5eb9b3f4-6710-4818-b94c-494958fe31ad\" (UID: \"5eb9b3f4-6710-4818-b94c-494958fe31ad\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.364887 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eb9b3f4-6710-4818-b94c-494958fe31ad-combined-ca-bundle\") pod \"5eb9b3f4-6710-4818-b94c-494958fe31ad\" (UID: \"5eb9b3f4-6710-4818-b94c-494958fe31ad\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.365198 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5gm7\" (UniqueName: \"kubernetes.io/projected/5eb9b3f4-6710-4818-b94c-494958fe31ad-kube-api-access-t5gm7\") pod \"5eb9b3f4-6710-4818-b94c-494958fe31ad\" (UID: \"5eb9b3f4-6710-4818-b94c-494958fe31ad\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.371291 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eb9b3f4-6710-4818-b94c-494958fe31ad-kube-api-access-t5gm7" (OuterVolumeSpecName: "kube-api-access-t5gm7") pod "5eb9b3f4-6710-4818-b94c-494958fe31ad" (UID: "5eb9b3f4-6710-4818-b94c-494958fe31ad"). InnerVolumeSpecName "kube-api-access-t5gm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.401466 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-501f-account-create-g2sgs"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.433151 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.435296 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi501f-account-delete-4746z"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.439928 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.450479 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eb9b3f4-6710-4818-b94c-494958fe31ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5eb9b3f4-6710-4818-b94c-494958fe31ad" (UID: "5eb9b3f4-6710-4818-b94c-494958fe31ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.457373 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eb9b3f4-6710-4818-b94c-494958fe31ad-config-data" (OuterVolumeSpecName: "config-data") pod "5eb9b3f4-6710-4818-b94c-494958fe31ad" (UID: "5eb9b3f4-6710-4818-b94c-494958fe31ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.470993 4972 scope.go:117] "RemoveContainer" containerID="8926429e6adc873b354d1ee81d691befb13ab0ab948a58cf412a0056811cc98c" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.476015 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-501f-account-create-g2sgs"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.477743 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eb9b3f4-6710-4818-b94c-494958fe31ad-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.477761 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eb9b3f4-6710-4818-b94c-494958fe31ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.477772 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5gm7\" (UniqueName: \"kubernetes.io/projected/5eb9b3f4-6710-4818-b94c-494958fe31ad-kube-api-access-t5gm7\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.504487 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-748ccf64d9-7vqzf"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.510986 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-748ccf64d9-7vqzf"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.523933 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.529789 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.532933 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.547672 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.569433 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.579407 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-logs\") pod \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.579518 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckhsl\" (UniqueName: \"kubernetes.io/projected/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-kube-api-access-ckhsl\") pod \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.579624 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-kube-state-metrics-tls-config\") pod \"71ed1f19-43e6-4245-82c1-f51b5f18d1e6\" (UID: \"71ed1f19-43e6-4245-82c1-f51b5f18d1e6\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.579683 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-combined-ca-bundle\") pod \"71ed1f19-43e6-4245-82c1-f51b5f18d1e6\" (UID: \"71ed1f19-43e6-4245-82c1-f51b5f18d1e6\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.579809 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-httpd-run\") pod \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.581850 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2swdg\" (UniqueName: \"kubernetes.io/projected/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-kube-api-access-2swdg\") pod \"71ed1f19-43e6-4245-82c1-f51b5f18d1e6\" (UID: \"71ed1f19-43e6-4245-82c1-f51b5f18d1e6\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.581916 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-combined-ca-bundle\") pod \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.581945 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.581963 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-kube-state-metrics-tls-certs\") pod \"71ed1f19-43e6-4245-82c1-f51b5f18d1e6\" (UID: \"71ed1f19-43e6-4245-82c1-f51b5f18d1e6\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.581992 4972 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-config-data\") pod \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.582022 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-scripts\") pod \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.582133 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-public-tls-certs\") pod \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\" (UID: \"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.589743 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-kube-api-access-2swdg" (OuterVolumeSpecName: "kube-api-access-2swdg") pod "71ed1f19-43e6-4245-82c1-f51b5f18d1e6" (UID: "71ed1f19-43e6-4245-82c1-f51b5f18d1e6"). InnerVolumeSpecName "kube-api-access-2swdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.590198 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-logs" (OuterVolumeSpecName: "logs") pod "78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8" (UID: "78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.591056 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.591181 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8" (UID: "78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.606270 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-kube-api-access-ckhsl" (OuterVolumeSpecName: "kube-api-access-ckhsl") pod "78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8" (UID: "78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8"). InnerVolumeSpecName "kube-api-access-ckhsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.611919 4972 scope.go:117] "RemoveContainer" containerID="b2643e338f0bda35e4096054b2e9e135f60dcfd58f21d3cfab34ec25fee2e932" Nov 21 10:07:45 crc kubenswrapper[4972]: E1121 10:07:45.614185 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2643e338f0bda35e4096054b2e9e135f60dcfd58f21d3cfab34ec25fee2e932\": container with ID starting with b2643e338f0bda35e4096054b2e9e135f60dcfd58f21d3cfab34ec25fee2e932 not found: ID does not exist" containerID="b2643e338f0bda35e4096054b2e9e135f60dcfd58f21d3cfab34ec25fee2e932" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.614214 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2643e338f0bda35e4096054b2e9e135f60dcfd58f21d3cfab34ec25fee2e932"} err="failed to get container status \"b2643e338f0bda35e4096054b2e9e135f60dcfd58f21d3cfab34ec25fee2e932\": rpc error: code = NotFound desc = could not find container \"b2643e338f0bda35e4096054b2e9e135f60dcfd58f21d3cfab34ec25fee2e932\": container with ID starting with b2643e338f0bda35e4096054b2e9e135f60dcfd58f21d3cfab34ec25fee2e932 not found: ID does not exist" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.614234 4972 scope.go:117] "RemoveContainer" containerID="8926429e6adc873b354d1ee81d691befb13ab0ab948a58cf412a0056811cc98c" Nov 21 10:07:45 crc kubenswrapper[4972]: E1121 10:07:45.622803 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8926429e6adc873b354d1ee81d691befb13ab0ab948a58cf412a0056811cc98c\": container with ID starting with 8926429e6adc873b354d1ee81d691befb13ab0ab948a58cf412a0056811cc98c not found: ID does not exist" containerID="8926429e6adc873b354d1ee81d691befb13ab0ab948a58cf412a0056811cc98c" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.622854 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8926429e6adc873b354d1ee81d691befb13ab0ab948a58cf412a0056811cc98c"} err="failed to get container status \"8926429e6adc873b354d1ee81d691befb13ab0ab948a58cf412a0056811cc98c\": rpc error: code = NotFound desc = could not find container \"8926429e6adc873b354d1ee81d691befb13ab0ab948a58cf412a0056811cc98c\": container with ID starting with 8926429e6adc873b354d1ee81d691befb13ab0ab948a58cf412a0056811cc98c not found: ID does not exist" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.622876 4972 scope.go:117] "RemoveContainer" containerID="b2643e338f0bda35e4096054b2e9e135f60dcfd58f21d3cfab34ec25fee2e932" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.622998 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8" (UID: "78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.623640 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-scripts" (OuterVolumeSpecName: "scripts") pod "78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8" (UID: "78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.625527 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2643e338f0bda35e4096054b2e9e135f60dcfd58f21d3cfab34ec25fee2e932"} err="failed to get container status \"b2643e338f0bda35e4096054b2e9e135f60dcfd58f21d3cfab34ec25fee2e932\": rpc error: code = NotFound desc = could not find container \"b2643e338f0bda35e4096054b2e9e135f60dcfd58f21d3cfab34ec25fee2e932\": container with ID starting with b2643e338f0bda35e4096054b2e9e135f60dcfd58f21d3cfab34ec25fee2e932 not found: ID does not exist" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.625553 4972 scope.go:117] "RemoveContainer" containerID="8926429e6adc873b354d1ee81d691befb13ab0ab948a58cf412a0056811cc98c" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.627305 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "71ed1f19-43e6-4245-82c1-f51b5f18d1e6" (UID: "71ed1f19-43e6-4245-82c1-f51b5f18d1e6"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.627325 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8926429e6adc873b354d1ee81d691befb13ab0ab948a58cf412a0056811cc98c"} err="failed to get container status \"8926429e6adc873b354d1ee81d691befb13ab0ab948a58cf412a0056811cc98c\": rpc error: code = NotFound desc = could not find container \"8926429e6adc873b354d1ee81d691befb13ab0ab948a58cf412a0056811cc98c\": container with ID starting with 8926429e6adc873b354d1ee81d691befb13ab0ab948a58cf412a0056811cc98c not found: ID does not exist" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.627358 4972 scope.go:117] "RemoveContainer" containerID="a762824bd37bcd1f70426519763b522438780d31ef55d3f35b56ca5424e1e1ee" Nov 21 10:07:45 crc kubenswrapper[4972]: E1121 10:07:45.652132 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2414b220c5f009ec8c602f60f3e9160067fa81228e1aef74c65b742822eda70e" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.652187 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71ed1f19-43e6-4245-82c1-f51b5f18d1e6" (UID: "71ed1f19-43e6-4245-82c1-f51b5f18d1e6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: E1121 10:07:45.653756 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2414b220c5f009ec8c602f60f3e9160067fa81228e1aef74c65b742822eda70e" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 21 10:07:45 crc kubenswrapper[4972]: E1121 10:07:45.655058 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2414b220c5f009ec8c602f60f3e9160067fa81228e1aef74c65b742822eda70e" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 21 10:07:45 crc kubenswrapper[4972]: E1121 10:07:45.655134 4972 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="cf3edebd-74ab-4b7d-8706-2eda69d91aea" containerName="ovn-northd" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.672735 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-config-data" (OuterVolumeSpecName: "config-data") pod "78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8" (UID: "78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.682461 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8" (UID: "78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.684091 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b2069a31-382b-4fc4-acee-cf202be1de1e-httpd-run\") pod \"b2069a31-382b-4fc4-acee-cf202be1de1e\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.684160 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-internal-tls-certs\") pod \"b2069a31-382b-4fc4-acee-cf202be1de1e\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.684191 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/481cc370-a05a-4516-99f2-f94a0056a70e-config-data\") pod \"481cc370-a05a-4516-99f2-f94a0056a70e\" (UID: \"481cc370-a05a-4516-99f2-f94a0056a70e\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.684212 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-scripts\") pod \"b2069a31-382b-4fc4-acee-cf202be1de1e\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.684245 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltzjg\" (UniqueName: \"kubernetes.io/projected/b2069a31-382b-4fc4-acee-cf202be1de1e-kube-api-access-ltzjg\") pod \"b2069a31-382b-4fc4-acee-cf202be1de1e\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.684324 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/481cc370-a05a-4516-99f2-f94a0056a70e-memcached-tls-certs\") pod \"481cc370-a05a-4516-99f2-f94a0056a70e\" (UID: \"481cc370-a05a-4516-99f2-f94a0056a70e\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.684359 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481cc370-a05a-4516-99f2-f94a0056a70e-combined-ca-bundle\") pod \"481cc370-a05a-4516-99f2-f94a0056a70e\" (UID: \"481cc370-a05a-4516-99f2-f94a0056a70e\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.684378 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/481cc370-a05a-4516-99f2-f94a0056a70e-kolla-config\") pod \"481cc370-a05a-4516-99f2-f94a0056a70e\" (UID: \"481cc370-a05a-4516-99f2-f94a0056a70e\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.684394 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-combined-ca-bundle\") pod \"b2069a31-382b-4fc4-acee-cf202be1de1e\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.684413 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-config-data\") pod \"b2069a31-382b-4fc4-acee-cf202be1de1e\" (UID: 
\"b2069a31-382b-4fc4-acee-cf202be1de1e\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.684438 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"b2069a31-382b-4fc4-acee-cf202be1de1e\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.684493 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2069a31-382b-4fc4-acee-cf202be1de1e-logs\") pod \"b2069a31-382b-4fc4-acee-cf202be1de1e\" (UID: \"b2069a31-382b-4fc4-acee-cf202be1de1e\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.684590 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz4dm\" (UniqueName: \"kubernetes.io/projected/481cc370-a05a-4516-99f2-f94a0056a70e-kube-api-access-kz4dm\") pod \"481cc370-a05a-4516-99f2-f94a0056a70e\" (UID: \"481cc370-a05a-4516-99f2-f94a0056a70e\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.684963 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.684981 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckhsl\" (UniqueName: \"kubernetes.io/projected/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-kube-api-access-ckhsl\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.684990 4972 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.685000 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.685009 4972 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.685017 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2swdg\" (UniqueName: \"kubernetes.io/projected/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-kube-api-access-2swdg\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.685027 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.685044 4972 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.685053 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.685062 4972 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.692599 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2069a31-382b-4fc4-acee-cf202be1de1e-logs" (OuterVolumeSpecName: "logs") pod "b2069a31-382b-4fc4-acee-cf202be1de1e" (UID: "b2069a31-382b-4fc4-acee-cf202be1de1e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.693083 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/481cc370-a05a-4516-99f2-f94a0056a70e-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "481cc370-a05a-4516-99f2-f94a0056a70e" (UID: "481cc370-a05a-4516-99f2-f94a0056a70e"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.696939 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2069a31-382b-4fc4-acee-cf202be1de1e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b2069a31-382b-4fc4-acee-cf202be1de1e" (UID: "b2069a31-382b-4fc4-acee-cf202be1de1e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.698485 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/481cc370-a05a-4516-99f2-f94a0056a70e-config-data" (OuterVolumeSpecName: "config-data") pod "481cc370-a05a-4516-99f2-f94a0056a70e" (UID: "481cc370-a05a-4516-99f2-f94a0056a70e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.698761 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/481cc370-a05a-4516-99f2-f94a0056a70e-kube-api-access-kz4dm" (OuterVolumeSpecName: "kube-api-access-kz4dm") pod "481cc370-a05a-4516-99f2-f94a0056a70e" (UID: "481cc370-a05a-4516-99f2-f94a0056a70e"). InnerVolumeSpecName "kube-api-access-kz4dm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.709564 4972 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.724078 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "b2069a31-382b-4fc4-acee-cf202be1de1e" (UID: "b2069a31-382b-4fc4-acee-cf202be1de1e"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.727034 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8" (UID: "78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.727111 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-scripts" (OuterVolumeSpecName: "scripts") pod "b2069a31-382b-4fc4-acee-cf202be1de1e" (UID: "b2069a31-382b-4fc4-acee-cf202be1de1e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.728137 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2069a31-382b-4fc4-acee-cf202be1de1e-kube-api-access-ltzjg" (OuterVolumeSpecName: "kube-api-access-ltzjg") pod "b2069a31-382b-4fc4-acee-cf202be1de1e" (UID: "b2069a31-382b-4fc4-acee-cf202be1de1e"). InnerVolumeSpecName "kube-api-access-ltzjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.743913 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "71ed1f19-43e6-4245-82c1-f51b5f18d1e6" (UID: "71ed1f19-43e6-4245-82c1-f51b5f18d1e6"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.789242 4972 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/481cc370-a05a-4516-99f2-f94a0056a70e-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.789284 4972 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.789295 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2069a31-382b-4fc4-acee-cf202be1de1e-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.789304 4972 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.789313 4972 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/71ed1f19-43e6-4245-82c1-f51b5f18d1e6-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.789321 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz4dm\" (UniqueName: \"kubernetes.io/projected/481cc370-a05a-4516-99f2-f94a0056a70e-kube-api-access-kz4dm\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.789329 4972 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b2069a31-382b-4fc4-acee-cf202be1de1e-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.789338 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/481cc370-a05a-4516-99f2-f94a0056a70e-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: 
I1121 10:07:45.789345 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.789354 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltzjg\" (UniqueName: \"kubernetes.io/projected/b2069a31-382b-4fc4-acee-cf202be1de1e-kube-api-access-ltzjg\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.789362 4972 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.791521 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-config-data" (OuterVolumeSpecName: "config-data") pod "b2069a31-382b-4fc4-acee-cf202be1de1e" (UID: "b2069a31-382b-4fc4-acee-cf202be1de1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.801700 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2" path="/var/lib/kubelet/pods/3ac49a02-32ff-4c1b-b553-0dfbdcc7c4f2/volumes" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.802328 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c86765f-e125-43ac-83ba-99d506750ed5" path="/var/lib/kubelet/pods/3c86765f-e125-43ac-83ba-99d506750ed5/volumes" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.802804 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43333bc2-9532-4de9-ada0-761a687b1640" path="/var/lib/kubelet/pods/43333bc2-9532-4de9-ada0-761a687b1640/volumes" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.803316 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48b94453-0fa8-42b2-9093-b233661916af" path="/var/lib/kubelet/pods/48b94453-0fa8-42b2-9093-b233661916af/volumes" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.825906 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55bbc0f1-0876-4076-8a24-7e275bda295e" path="/var/lib/kubelet/pods/55bbc0f1-0876-4076-8a24-7e275bda295e/volumes" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.826193 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2069a31-382b-4fc4-acee-cf202be1de1e" (UID: "b2069a31-382b-4fc4-acee-cf202be1de1e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.826750 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5828f96a-2e2a-416f-b07e-584b5571b87d" path="/var/lib/kubelet/pods/5828f96a-2e2a-416f-b07e-584b5571b87d/volumes" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.827399 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62ab9b95-78b8-49ca-ad65-2b63990c55a9" path="/var/lib/kubelet/pods/62ab9b95-78b8-49ca-ad65-2b63990c55a9/volumes" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.828374 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed" path="/var/lib/kubelet/pods/67c9ed9d-00eb-4c59-aa10-1fb24d2b5aed/volumes" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.828920 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="726b6f39-d584-463b-aa77-f5a9e99b778a" path="/var/lib/kubelet/pods/726b6f39-d584-463b-aa77-f5a9e99b778a/volumes" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.829397 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86c689e1-6896-490f-a5ad-ab34ffdd5b4d" path="/var/lib/kubelet/pods/86c689e1-6896-490f-a5ad-ab34ffdd5b4d/volumes" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.834923 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ed54a06-08b9-41a2-92d9-a745631e053c" path="/var/lib/kubelet/pods/8ed54a06-08b9-41a2-92d9-a745631e053c/volumes" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.835451 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad055efa-60cd-4e60-952c-cb732c443d62" path="/var/lib/kubelet/pods/ad055efa-60cd-4e60-952c-cb732c443d62/volumes" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.835939 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae72a044-8eb0-450f-84e0-d98165e44377" path="/var/lib/kubelet/pods/ae72a044-8eb0-450f-84e0-d98165e44377/volumes" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.842367 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/481cc370-a05a-4516-99f2-f94a0056a70e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "481cc370-a05a-4516-99f2-f94a0056a70e" (UID: "481cc370-a05a-4516-99f2-f94a0056a70e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.861415 4972 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.862867 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b222104a-d8e3-440b-bcfe-05976686b3dc" path="/var/lib/kubelet/pods/b222104a-d8e3-440b-bcfe-05976686b3dc/volumes" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.863760 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3ab5244-5c8e-4699-bc66-7e74b8875520" path="/var/lib/kubelet/pods/b3ab5244-5c8e-4699-bc66-7e74b8875520/volumes" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.864286 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d36cc95f-a1d2-4425-8928-586f8fda4eb8" path="/var/lib/kubelet/pods/d36cc95f-a1d2-4425-8928-586f8fda4eb8/volumes" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.864781 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glanced431-account-delete-5xwls" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.864853 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e938b27a-060f-4c56-af67-7c971a877d64" path="/var/lib/kubelet/pods/e938b27a-060f-4c56-af67-7c971a877d64/volumes" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.866306 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ead4d696-8e60-4d06-8db7-09b9f550b11f" path="/var/lib/kubelet/pods/ead4d696-8e60-4d06-8db7-09b9f550b11f/volumes" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.866860 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f69d7d80-dc29-4483-917c-c25921b56e9c" path="/var/lib/kubelet/pods/f69d7d80-dc29-4483-917c-c25921b56e9c/volumes" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.867877 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/481cc370-a05a-4516-99f2-f94a0056a70e-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "481cc370-a05a-4516-99f2-f94a0056a70e" (UID: "481cc370-a05a-4516-99f2-f94a0056a70e"). InnerVolumeSpecName "memcached-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.868643 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9bb92cc-2905-4800-a31b-1b4fd0e35af3" path="/var/lib/kubelet/pods/f9bb92cc-2905-4800-a31b-1b4fd0e35af3/volumes" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.869628 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc61b266-e156-4999-8ec7-8aa1f1988e42" path="/var/lib/kubelet/pods/fc61b266-e156-4999-8ec7-8aa1f1988e42/volumes" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.887405 4972 scope.go:117] "RemoveContainer" containerID="a55fa1434e8f1c900b3bebdfafac5e43b0fb9083af7325dfb76ac2940d2d38b2" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.891340 4972 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/481cc370-a05a-4516-99f2-f94a0056a70e-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.891369 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481cc370-a05a-4516-99f2-f94a0056a70e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.891381 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.891390 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.891402 4972 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.905561 4972 generic.go:334] "Generic (PLEG): container finished" podID="302b9e1c-affd-4f2f-bacd-98f40dedeb91" containerID="a50f5adc14def76f321fa0ba2955141d2c00ea811995acd79453b97be54e414e" exitCode=0 Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.905621 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"302b9e1c-affd-4f2f-bacd-98f40dedeb91","Type":"ContainerDied","Data":"a50f5adc14def76f321fa0ba2955141d2c00ea811995acd79453b97be54e414e"} Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.905646 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"302b9e1c-affd-4f2f-bacd-98f40dedeb91","Type":"ContainerDied","Data":"22e902b9466d1ff08e0258c13adb35c65ff177471150f84b862959616d26741a"} Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.905657 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22e902b9466d1ff08e0258c13adb35c65ff177471150f84b862959616d26741a" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.905787 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.907748 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.913369 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.915310 4972 generic.go:334] "Generic (PLEG): container finished" podID="f85a9950-7e9d-4e16-9b35-d6912bacadf9" containerID="5408c1bb0f0589423c4f14602bd190494f5ead1bdc142f1e32e478d7ceca9219" exitCode=0 Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.915328 4972 generic.go:334] "Generic (PLEG): container finished" podID="f85a9950-7e9d-4e16-9b35-d6912bacadf9" containerID="0de233b9671d8c6438cab670dcbf6fed2c7c33040cc7b1af5fe0bdb4e3dd967b" exitCode=0 Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.915338 4972 generic.go:334] "Generic (PLEG): container finished" podID="f85a9950-7e9d-4e16-9b35-d6912bacadf9" containerID="102434648feec12743eab24b7f83c96674ffe057bcf52540be9617ee45d178a0" exitCode=0 Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.915381 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85a9950-7e9d-4e16-9b35-d6912bacadf9","Type":"ContainerDied","Data":"5408c1bb0f0589423c4f14602bd190494f5ead1bdc142f1e32e478d7ceca9219"} Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.915407 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85a9950-7e9d-4e16-9b35-d6912bacadf9","Type":"ContainerDied","Data":"0de233b9671d8c6438cab670dcbf6fed2c7c33040cc7b1af5fe0bdb4e3dd967b"} Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.915422 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85a9950-7e9d-4e16-9b35-d6912bacadf9","Type":"ContainerDied","Data":"102434648feec12743eab24b7f83c96674ffe057bcf52540be9617ee45d178a0"} Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.921067 4972 generic.go:334] "Generic (PLEG): container finished" podID="57f61d22-4b79-4f80-b7dc-0f5bea4b506d" containerID="9b2a5c600705559dcf3e1539bb652e7d2fca4320b9c98a74794471b4827bdfb0" exitCode=0 Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.921576 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57f61d22-4b79-4f80-b7dc-0f5bea4b506d","Type":"ContainerDied","Data":"9b2a5c600705559dcf3e1539bb652e7d2fca4320b9c98a74794471b4827bdfb0"} Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.929102 4972 generic.go:334] "Generic (PLEG): container finished" podID="78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8" containerID="176f31420e0751d42b4bb4b07ba6f49cbfd94280d6aa936d06410ffc01d008ff" exitCode=0 Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.929151 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.929212 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8","Type":"ContainerDied","Data":"176f31420e0751d42b4bb4b07ba6f49cbfd94280d6aa936d06410ffc01d008ff"} Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.929266 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8","Type":"ContainerDied","Data":"5f319e5d8068cd5edda3eaed9c694b6d5830fa3a201ee443c4327495b63f7502"} Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.949426 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glanced431-account-delete-5xwls" event={"ID":"f1224d0f-d488-49e6-b6dc-12a188b43a43","Type":"ContainerDied","Data":"96662fa48a81d7c69f1cd5cab7b4c0fb630f2d8f1d9b1a05a36d873f1aaf9857"} Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.949462 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96662fa48a81d7c69f1cd5cab7b4c0fb630f2d8f1d9b1a05a36d873f1aaf9857" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.949552 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glanced431-account-delete-5xwls" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.954986 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.975331 4972 generic.go:334] "Generic (PLEG): container finished" podID="b2069a31-382b-4fc4-acee-cf202be1de1e" containerID="fa23d72a8ed2e8dc42ba23984ce0256b39eb3f3688efcf051d30829a56d4b1b1" exitCode=0 Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.975443 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b2069a31-382b-4fc4-acee-cf202be1de1e","Type":"ContainerDied","Data":"fa23d72a8ed2e8dc42ba23984ce0256b39eb3f3688efcf051d30829a56d4b1b1"} Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.975473 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b2069a31-382b-4fc4-acee-cf202be1de1e","Type":"ContainerDied","Data":"6e9245d79aa3c0a020fd2b36c94da9c130d9560936d8f6b5e773469dfbbd13a6"} Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.975555 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.985271 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.986265 4972 scope.go:117] "RemoveContainer" containerID="7d666958e7088543ec0d51c957e3e53a16be809f029d516ce5c2316c2c498ab9" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.987986 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.988010 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"71ed1f19-43e6-4245-82c1-f51b5f18d1e6","Type":"ContainerDied","Data":"a4aa708a3cf47770310bf4aa94a83fd60ab3d549885c2d22b19a0f7986dee2b5"} Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.991236 4972 generic.go:334] "Generic (PLEG): container finished" podID="481cc370-a05a-4516-99f2-f94a0056a70e" containerID="b9e626a1ff970124d27149621ed868093f660ff094797fea99807c66272dc9d2" exitCode=0 Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.991286 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"481cc370-a05a-4516-99f2-f94a0056a70e","Type":"ContainerDied","Data":"b9e626a1ff970124d27149621ed868093f660ff094797fea99807c66272dc9d2"} Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.991343 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"481cc370-a05a-4516-99f2-f94a0056a70e","Type":"ContainerDied","Data":"ec318e98104291b3eeb9a0c19c54c14718665dbf6f3b432f62e9ebbde1eaead3"} Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.991387 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.992583 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1224d0f-d488-49e6-b6dc-12a188b43a43-operator-scripts\") pod \"f1224d0f-d488-49e6-b6dc-12a188b43a43\" (UID: \"f1224d0f-d488-49e6-b6dc-12a188b43a43\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.992667 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-config-data\") pod \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.992717 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcc4w\" (UniqueName: \"kubernetes.io/projected/302b9e1c-affd-4f2f-bacd-98f40dedeb91-kube-api-access-jcc4w\") pod \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.992761 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc57ffef-2527-4b16-b281-9139b6a0f1a1-etc-machine-id\") pod \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.993478 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc57ffef-2527-4b16-b281-9139b6a0f1a1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "dc57ffef-2527-4b16-b281-9139b6a0f1a1" (UID: "dc57ffef-2527-4b16-b281-9139b6a0f1a1"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.993500 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1224d0f-d488-49e6-b6dc-12a188b43a43-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f1224d0f-d488-49e6-b6dc-12a188b43a43" (UID: "f1224d0f-d488-49e6-b6dc-12a188b43a43"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.992998 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-internal-tls-certs\") pod \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.993984 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-public-tls-certs\") pod \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.994099 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-scripts\") pod \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.994150 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-public-tls-certs\") pod \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.994177 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-config-data\") pod \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.994215 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2zk4\" (UniqueName: \"kubernetes.io/projected/dc57ffef-2527-4b16-b281-9139b6a0f1a1-kube-api-access-t2zk4\") pod \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.994261 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-internal-tls-certs\") pod \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.994435 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55f8h\" (UniqueName: \"kubernetes.io/projected/f1224d0f-d488-49e6-b6dc-12a188b43a43-kube-api-access-55f8h\") pod \"f1224d0f-d488-49e6-b6dc-12a188b43a43\" (UID: \"f1224d0f-d488-49e6-b6dc-12a188b43a43\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.994890 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-combined-ca-bundle\") pod \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.994915 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc57ffef-2527-4b16-b281-9139b6a0f1a1-logs\") pod \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.995034 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-logs\") pod \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.995067 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-config-data-custom\") pod \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.995104 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-internal-tls-certs\") pod \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.995128 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-combined-ca-bundle\") pod \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.995187 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/302b9e1c-affd-4f2f-bacd-98f40dedeb91-logs\") pod \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\" (UID: \"302b9e1c-affd-4f2f-bacd-98f40dedeb91\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.995214 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bfzw\" (UniqueName: \"kubernetes.io/projected/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-kube-api-access-4bfzw\") pod \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.995296 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-public-tls-certs\") pod \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.995402 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-scripts\") pod \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\" (UID: \"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.995431 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-combined-ca-bundle\") pod \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.995458 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-config-data\") pod \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\" (UID: \"dc57ffef-2527-4b16-b281-9139b6a0f1a1\") " Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.996146 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1224d0f-d488-49e6-b6dc-12a188b43a43-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.996165 4972 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc57ffef-2527-4b16-b281-9139b6a0f1a1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.997056 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/302b9e1c-affd-4f2f-bacd-98f40dedeb91-kube-api-access-jcc4w" (OuterVolumeSpecName: "kube-api-access-jcc4w") pod "302b9e1c-affd-4f2f-bacd-98f40dedeb91" (UID: "302b9e1c-affd-4f2f-bacd-98f40dedeb91"). InnerVolumeSpecName "kube-api-access-jcc4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:45 crc kubenswrapper[4972]: I1121 10:07:45.997451 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.000617 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/302b9e1c-affd-4f2f-bacd-98f40dedeb91-logs" (OuterVolumeSpecName: "logs") pod "302b9e1c-affd-4f2f-bacd-98f40dedeb91" (UID: "302b9e1c-affd-4f2f-bacd-98f40dedeb91"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.014607 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-kube-api-access-4bfzw" (OuterVolumeSpecName: "kube-api-access-4bfzw") pod "8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd" (UID: "8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd"). InnerVolumeSpecName "kube-api-access-4bfzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.015031 4972 generic.go:334] "Generic (PLEG): container finished" podID="dc57ffef-2527-4b16-b281-9139b6a0f1a1" containerID="49d40728e956c86ac5f50feea6dabb003f5b7e12b18b22782321e4bdfa6a4d07" exitCode=0 Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.015121 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"dc57ffef-2527-4b16-b281-9139b6a0f1a1","Type":"ContainerDied","Data":"49d40728e956c86ac5f50feea6dabb003f5b7e12b18b22782321e4bdfa6a4d07"} Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.015139 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-logs" (OuterVolumeSpecName: "logs") pod "8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd" (UID: "8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.015148 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"dc57ffef-2527-4b16-b281-9139b6a0f1a1","Type":"ContainerDied","Data":"1af864544aa808fbae91493338a5ae16d87eccc8d300a4b2025420de1726dea1"} Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.015238 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.015336 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc57ffef-2527-4b16-b281-9139b6a0f1a1-kube-api-access-t2zk4" (OuterVolumeSpecName: "kube-api-access-t2zk4") pod "dc57ffef-2527-4b16-b281-9139b6a0f1a1" (UID: "dc57ffef-2527-4b16-b281-9139b6a0f1a1"). InnerVolumeSpecName "kube-api-access-t2zk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.018307 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc57ffef-2527-4b16-b281-9139b6a0f1a1-logs" (OuterVolumeSpecName: "logs") pod "dc57ffef-2527-4b16-b281-9139b6a0f1a1" (UID: "dc57ffef-2527-4b16-b281-9139b6a0f1a1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.019322 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.020645 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-scripts" (OuterVolumeSpecName: "scripts") pod "8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd" (UID: "8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.027393 4972 scope.go:117] "RemoveContainer" containerID="176f31420e0751d42b4bb4b07ba6f49cbfd94280d6aa936d06410ffc01d008ff" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.042719 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dc57ffef-2527-4b16-b281-9139b6a0f1a1" (UID: "dc57ffef-2527-4b16-b281-9139b6a0f1a1"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.045153 4972 generic.go:334] "Generic (PLEG): container finished" podID="8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd" containerID="8281402abc59d2d6389eba9427fb6df68e1ff2f3cf37736cf084b96b31b30e0f" exitCode=0 Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.045394 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78d4f89dc4-2qvzl" event={"ID":"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd","Type":"ContainerDied","Data":"8281402abc59d2d6389eba9427fb6df68e1ff2f3cf37736cf084b96b31b30e0f"} Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.045495 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78d4f89dc4-2qvzl" event={"ID":"8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd","Type":"ContainerDied","Data":"293a0ba14db09e2c1ece2b6244ac59a9b488c2d99d7824f84bddec0614abdd65"} Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.045504 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78d4f89dc4-2qvzl" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.046660 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.046923 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1224d0f-d488-49e6-b6dc-12a188b43a43-kube-api-access-55f8h" (OuterVolumeSpecName: "kube-api-access-55f8h") pod "f1224d0f-d488-49e6-b6dc-12a188b43a43" (UID: "f1224d0f-d488-49e6-b6dc-12a188b43a43"). InnerVolumeSpecName "kube-api-access-55f8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.049424 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-scripts" (OuterVolumeSpecName: "scripts") pod "dc57ffef-2527-4b16-b281-9139b6a0f1a1" (UID: "dc57ffef-2527-4b16-b281-9139b6a0f1a1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.062683 4972 scope.go:117] "RemoveContainer" containerID="e03504354d9520f07bfa1ddb744d599ccc77aeb3feeb232af0a88b1ae4acdb9b" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.062928 4972 generic.go:334] "Generic (PLEG): container finished" podID="272d9c39-ab5b-4fc1-8dbe-209fbe33e293" containerID="c170fcfc81ca59f5bc98bc8edc442c5c3a824cf4040a9ddb3b5479628d9471b5" exitCode=0 Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.062952 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-669568d65b-4t6gp" event={"ID":"272d9c39-ab5b-4fc1-8dbe-209fbe33e293","Type":"ContainerDied","Data":"c170fcfc81ca59f5bc98bc8edc442c5c3a824cf4040a9ddb3b5479628d9471b5"} Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.063667 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.091022 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.096894 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-combined-ca-bundle\") pod \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.096954 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhht4\" (UniqueName: \"kubernetes.io/projected/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-kube-api-access-dhht4\") pod \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.097001 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-config-data-custom\") pod \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.097020 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-combined-ca-bundle\") pod \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.097048 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85a9950-7e9d-4e16-9b35-d6912bacadf9-log-httpd\") pod \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.097062 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-logs\") pod \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\" (UID: \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.097105 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-combined-ca-bundle\") pod \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\" (UID: \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.097122 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-public-tls-certs\") pod \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.097145 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-scripts\") pod \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.097167 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-config-data\") pod \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.097223 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx5m8\" (UniqueName: \"kubernetes.io/projected/f85a9950-7e9d-4e16-9b35-d6912bacadf9-kube-api-access-xx5m8\") pod \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.097238 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-config-data\") pod \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\" (UID: \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.097254 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85a9950-7e9d-4e16-9b35-d6912bacadf9-run-httpd\") pod \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.097273 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-ceilometer-tls-certs\") pod \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.097306 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-config-data\") pod \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.097339 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8drk\" (UniqueName: \"kubernetes.io/projected/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-kube-api-access-c8drk\") pod \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\" (UID: \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.097354 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-sg-core-conf-yaml\") pod \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\" (UID: \"f85a9950-7e9d-4e16-9b35-d6912bacadf9\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.102785 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.105563 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f85a9950-7e9d-4e16-9b35-d6912bacadf9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f85a9950-7e9d-4e16-9b35-d6912bacadf9" (UID: "f85a9950-7e9d-4e16-9b35-d6912bacadf9"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.105952 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-nova-metadata-tls-certs\") pod \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\" (UID: \"57f61d22-4b79-4f80-b7dc-0f5bea4b506d\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.106002 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-logs\") pod \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.106054 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-internal-tls-certs\") pod \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\" (UID: \"272d9c39-ab5b-4fc1-8dbe-209fbe33e293\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.107709 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.108525 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-logs" (OuterVolumeSpecName: "logs") pod "57f61d22-4b79-4f80-b7dc-0f5bea4b506d" (UID: "57f61d22-4b79-4f80-b7dc-0f5bea4b506d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.109278 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f85a9950-7e9d-4e16-9b35-d6912bacadf9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f85a9950-7e9d-4e16-9b35-d6912bacadf9" (UID: "f85a9950-7e9d-4e16-9b35-d6912bacadf9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.110223 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-logs" (OuterVolumeSpecName: "logs") pod "272d9c39-ab5b-4fc1-8dbe-209fbe33e293" (UID: "272d9c39-ab5b-4fc1-8dbe-209fbe33e293"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.116010 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f85a9950-7e9d-4e16-9b35-d6912bacadf9-kube-api-access-xx5m8" (OuterVolumeSpecName: "kube-api-access-xx5m8") pod "f85a9950-7e9d-4e16-9b35-d6912bacadf9" (UID: "f85a9950-7e9d-4e16-9b35-d6912bacadf9"). InnerVolumeSpecName "kube-api-access-xx5m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.121189 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-scripts" (OuterVolumeSpecName: "scripts") pod "f85a9950-7e9d-4e16-9b35-d6912bacadf9" (UID: "f85a9950-7e9d-4e16-9b35-d6912bacadf9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.122071 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc57ffef-2527-4b16-b281-9139b6a0f1a1-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.122119 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.122132 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.122146 4972 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.122162 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx5m8\" (UniqueName: \"kubernetes.io/projected/f85a9950-7e9d-4e16-9b35-d6912bacadf9-kube-api-access-xx5m8\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.122175 4972 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85a9950-7e9d-4e16-9b35-d6912bacadf9-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.122186 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/302b9e1c-affd-4f2f-bacd-98f40dedeb91-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.122198 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bfzw\" (UniqueName: \"kubernetes.io/projected/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-kube-api-access-4bfzw\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.122210 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.122220 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.122233 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcc4w\" (UniqueName: \"kubernetes.io/projected/302b9e1c-affd-4f2f-bacd-98f40dedeb91-kube-api-access-jcc4w\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.122244 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.122256 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2zk4\" (UniqueName: \"kubernetes.io/projected/dc57ffef-2527-4b16-b281-9139b6a0f1a1-kube-api-access-t2zk4\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.122267 4972 reconciler_common.go:293] 
"Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85a9950-7e9d-4e16-9b35-d6912bacadf9-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.122277 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.122290 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55f8h\" (UniqueName: \"kubernetes.io/projected/f1224d0f-d488-49e6-b6dc-12a188b43a43-kube-api-access-55f8h\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.123957 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.133576 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-kube-api-access-c8drk" (OuterVolumeSpecName: "kube-api-access-c8drk") pod "57f61d22-4b79-4f80-b7dc-0f5bea4b506d" (UID: "57f61d22-4b79-4f80-b7dc-0f5bea4b506d"). InnerVolumeSpecName "kube-api-access-c8drk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.147031 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "272d9c39-ab5b-4fc1-8dbe-209fbe33e293" (UID: "272d9c39-ab5b-4fc1-8dbe-209fbe33e293"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.147201 4972 scope.go:117] "RemoveContainer" containerID="176f31420e0751d42b4bb4b07ba6f49cbfd94280d6aa936d06410ffc01d008ff" Nov 21 10:07:46 crc kubenswrapper[4972]: E1121 10:07:46.147579 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"176f31420e0751d42b4bb4b07ba6f49cbfd94280d6aa936d06410ffc01d008ff\": container with ID starting with 176f31420e0751d42b4bb4b07ba6f49cbfd94280d6aa936d06410ffc01d008ff not found: ID does not exist" containerID="176f31420e0751d42b4bb4b07ba6f49cbfd94280d6aa936d06410ffc01d008ff" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.147610 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"176f31420e0751d42b4bb4b07ba6f49cbfd94280d6aa936d06410ffc01d008ff"} err="failed to get container status \"176f31420e0751d42b4bb4b07ba6f49cbfd94280d6aa936d06410ffc01d008ff\": rpc error: code = NotFound desc = could not find container \"176f31420e0751d42b4bb4b07ba6f49cbfd94280d6aa936d06410ffc01d008ff\": container with ID starting with 176f31420e0751d42b4bb4b07ba6f49cbfd94280d6aa936d06410ffc01d008ff not found: ID does not exist" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.147629 4972 scope.go:117] "RemoveContainer" containerID="e03504354d9520f07bfa1ddb744d599ccc77aeb3feeb232af0a88b1ae4acdb9b" Nov 21 10:07:46 crc kubenswrapper[4972]: E1121 10:07:46.148073 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e03504354d9520f07bfa1ddb744d599ccc77aeb3feeb232af0a88b1ae4acdb9b\": container with ID starting with e03504354d9520f07bfa1ddb744d599ccc77aeb3feeb232af0a88b1ae4acdb9b not found: ID does 
not exist" containerID="e03504354d9520f07bfa1ddb744d599ccc77aeb3feeb232af0a88b1ae4acdb9b" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.148184 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e03504354d9520f07bfa1ddb744d599ccc77aeb3feeb232af0a88b1ae4acdb9b"} err="failed to get container status \"e03504354d9520f07bfa1ddb744d599ccc77aeb3feeb232af0a88b1ae4acdb9b\": rpc error: code = NotFound desc = could not find container \"e03504354d9520f07bfa1ddb744d599ccc77aeb3feeb232af0a88b1ae4acdb9b\": container with ID starting with e03504354d9520f07bfa1ddb744d599ccc77aeb3feeb232af0a88b1ae4acdb9b not found: ID does not exist" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.148306 4972 scope.go:117] "RemoveContainer" containerID="fa23d72a8ed2e8dc42ba23984ce0256b39eb3f3688efcf051d30829a56d4b1b1" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.154227 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-kube-api-access-dhht4" (OuterVolumeSpecName: "kube-api-access-dhht4") pod "272d9c39-ab5b-4fc1-8dbe-209fbe33e293" (UID: "272d9c39-ab5b-4fc1-8dbe-209fbe33e293"). InnerVolumeSpecName "kube-api-access-dhht4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.157581 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.169171 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.194614 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc57ffef-2527-4b16-b281-9139b6a0f1a1" (UID: "dc57ffef-2527-4b16-b281-9139b6a0f1a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.213672 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dc57ffef-2527-4b16-b281-9139b6a0f1a1" (UID: "dc57ffef-2527-4b16-b281-9139b6a0f1a1"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.224062 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhht4\" (UniqueName: \"kubernetes.io/projected/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-kube-api-access-dhht4\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.234691 4972 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.234730 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.234746 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8drk\" (UniqueName: \"kubernetes.io/projected/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-kube-api-access-c8drk\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.234760 4972 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.238275 4972 scope.go:117] "RemoveContainer" containerID="70d10fbe1cb3eca06bf152b5ac4e871031e9408c9157ad25660a3912d1bfdcf3" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.249797 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "272d9c39-ab5b-4fc1-8dbe-209fbe33e293" (UID: "272d9c39-ab5b-4fc1-8dbe-209fbe33e293"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.257371 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-config-data" (OuterVolumeSpecName: "config-data") pod "57f61d22-4b79-4f80-b7dc-0f5bea4b506d" (UID: "57f61d22-4b79-4f80-b7dc-0f5bea4b506d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.265398 4972 scope.go:117] "RemoveContainer" containerID="fa23d72a8ed2e8dc42ba23984ce0256b39eb3f3688efcf051d30829a56d4b1b1" Nov 21 10:07:46 crc kubenswrapper[4972]: E1121 10:07:46.265787 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa23d72a8ed2e8dc42ba23984ce0256b39eb3f3688efcf051d30829a56d4b1b1\": container with ID starting with fa23d72a8ed2e8dc42ba23984ce0256b39eb3f3688efcf051d30829a56d4b1b1 not found: ID does not exist" containerID="fa23d72a8ed2e8dc42ba23984ce0256b39eb3f3688efcf051d30829a56d4b1b1" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.265816 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa23d72a8ed2e8dc42ba23984ce0256b39eb3f3688efcf051d30829a56d4b1b1"} err="failed to get container status \"fa23d72a8ed2e8dc42ba23984ce0256b39eb3f3688efcf051d30829a56d4b1b1\": rpc error: code = NotFound desc = could not find container \"fa23d72a8ed2e8dc42ba23984ce0256b39eb3f3688efcf051d30829a56d4b1b1\": container with ID starting with fa23d72a8ed2e8dc42ba23984ce0256b39eb3f3688efcf051d30829a56d4b1b1 not found: ID does not exist" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.265847 4972 scope.go:117] "RemoveContainer" containerID="70d10fbe1cb3eca06bf152b5ac4e871031e9408c9157ad25660a3912d1bfdcf3" Nov 21 10:07:46 crc kubenswrapper[4972]: E1121 10:07:46.266249 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70d10fbe1cb3eca06bf152b5ac4e871031e9408c9157ad25660a3912d1bfdcf3\": container with ID starting with 70d10fbe1cb3eca06bf152b5ac4e871031e9408c9157ad25660a3912d1bfdcf3 not found: ID does not exist" containerID="70d10fbe1cb3eca06bf152b5ac4e871031e9408c9157ad25660a3912d1bfdcf3" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.266312 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70d10fbe1cb3eca06bf152b5ac4e871031e9408c9157ad25660a3912d1bfdcf3"} err="failed to get container status \"70d10fbe1cb3eca06bf152b5ac4e871031e9408c9157ad25660a3912d1bfdcf3\": rpc error: code = NotFound desc = could not find container \"70d10fbe1cb3eca06bf152b5ac4e871031e9408c9157ad25660a3912d1bfdcf3\": container with ID starting with 70d10fbe1cb3eca06bf152b5ac4e871031e9408c9157ad25660a3912d1bfdcf3 not found: ID does not exist" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.266348 4972 scope.go:117] "RemoveContainer" containerID="68b9d9fc5eb11275b79950cd330ab9d42c03e228546a72b6835bf9e0589b651b" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.291931 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57f61d22-4b79-4f80-b7dc-0f5bea4b506d" (UID: "57f61d22-4b79-4f80-b7dc-0f5bea4b506d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.292450 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-config-data" (OuterVolumeSpecName: "config-data") pod "dc57ffef-2527-4b16-b281-9139b6a0f1a1" (UID: "dc57ffef-2527-4b16-b281-9139b6a0f1a1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.296482 4972 scope.go:117] "RemoveContainer" containerID="b9e626a1ff970124d27149621ed868093f660ff094797fea99807c66272dc9d2" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.316400 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glanced431-account-delete-5xwls"] Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.328052 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glanced431-account-delete-5xwls"] Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.330317 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f85a9950-7e9d-4e16-9b35-d6912bacadf9" (UID: "f85a9950-7e9d-4e16-9b35-d6912bacadf9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.333402 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "302b9e1c-affd-4f2f-bacd-98f40dedeb91" (UID: "302b9e1c-affd-4f2f-bacd-98f40dedeb91"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.333813 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-config-data" (OuterVolumeSpecName: "config-data") pod "8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd" (UID: "8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.335992 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.336013 4972 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.336025 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.336086 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.336097 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.336105 4972 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.336114 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.337077 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dc57ffef-2527-4b16-b281-9139b6a0f1a1" (UID: "dc57ffef-2527-4b16-b281-9139b6a0f1a1"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.340055 4972 scope.go:117] "RemoveContainer" containerID="b9e626a1ff970124d27149621ed868093f660ff094797fea99807c66272dc9d2" Nov 21 10:07:46 crc kubenswrapper[4972]: E1121 10:07:46.340578 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9e626a1ff970124d27149621ed868093f660ff094797fea99807c66272dc9d2\": container with ID starting with b9e626a1ff970124d27149621ed868093f660ff094797fea99807c66272dc9d2 not found: ID does not exist" containerID="b9e626a1ff970124d27149621ed868093f660ff094797fea99807c66272dc9d2" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.340613 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9e626a1ff970124d27149621ed868093f660ff094797fea99807c66272dc9d2"} err="failed to get container status \"b9e626a1ff970124d27149621ed868093f660ff094797fea99807c66272dc9d2\": rpc error: code = NotFound desc = could not find container \"b9e626a1ff970124d27149621ed868093f660ff094797fea99807c66272dc9d2\": container with ID starting with b9e626a1ff970124d27149621ed868093f660ff094797fea99807c66272dc9d2 not found: ID does not exist" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.340685 4972 scope.go:117] "RemoveContainer" containerID="49d40728e956c86ac5f50feea6dabb003f5b7e12b18b22782321e4bdfa6a4d07" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.343129 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-config-data" (OuterVolumeSpecName: "config-data") pod "302b9e1c-affd-4f2f-bacd-98f40dedeb91" (UID: "302b9e1c-affd-4f2f-bacd-98f40dedeb91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.352268 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "272d9c39-ab5b-4fc1-8dbe-209fbe33e293" (UID: "272d9c39-ab5b-4fc1-8dbe-209fbe33e293"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.354129 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd" (UID: "8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.364411 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "57f61d22-4b79-4f80-b7dc-0f5bea4b506d" (UID: "57f61d22-4b79-4f80-b7dc-0f5bea4b506d"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.366938 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "272d9c39-ab5b-4fc1-8dbe-209fbe33e293" (UID: "272d9c39-ab5b-4fc1-8dbe-209fbe33e293"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.387193 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "302b9e1c-affd-4f2f-bacd-98f40dedeb91" (UID: "302b9e1c-affd-4f2f-bacd-98f40dedeb91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.392024 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b2069a31-382b-4fc4-acee-cf202be1de1e" (UID: "b2069a31-382b-4fc4-acee-cf202be1de1e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.393722 4972 scope.go:117] "RemoveContainer" containerID="ff4509a52935ca39544f656f7b2bbdfab72d26e1ceca8275ec6a319273e973ad" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.401314 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f85a9950-7e9d-4e16-9b35-d6912bacadf9" (UID: "f85a9950-7e9d-4e16-9b35-d6912bacadf9"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.408615 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-config-data" (OuterVolumeSpecName: "config-data") pod "272d9c39-ab5b-4fc1-8dbe-209fbe33e293" (UID: "272d9c39-ab5b-4fc1-8dbe-209fbe33e293"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.417301 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "302b9e1c-affd-4f2f-bacd-98f40dedeb91" (UID: "302b9e1c-affd-4f2f-bacd-98f40dedeb91"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.417379 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-config-data" (OuterVolumeSpecName: "config-data") pod "f85a9950-7e9d-4e16-9b35-d6912bacadf9" (UID: "f85a9950-7e9d-4e16-9b35-d6912bacadf9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.436317 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f85a9950-7e9d-4e16-9b35-d6912bacadf9" (UID: "f85a9950-7e9d-4e16-9b35-d6912bacadf9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.437418 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.437446 4972 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.437457 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.437468 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.437478 4972 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2069a31-382b-4fc4-acee-cf202be1de1e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.437488 4972 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.437496 4972 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc57ffef-2527-4b16-b281-9139b6a0f1a1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.437505 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.437513 4972 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/57f61d22-4b79-4f80-b7dc-0f5bea4b506d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.437523 4972 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/272d9c39-ab5b-4fc1-8dbe-209fbe33e293-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.437532 4972 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.437540 4972 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85a9950-7e9d-4e16-9b35-d6912bacadf9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.437548 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/302b9e1c-affd-4f2f-bacd-98f40dedeb91-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.438276 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd" (UID: "8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.442746 4972 scope.go:117] "RemoveContainer" containerID="49d40728e956c86ac5f50feea6dabb003f5b7e12b18b22782321e4bdfa6a4d07" Nov 21 10:07:46 crc kubenswrapper[4972]: E1121 10:07:46.443153 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49d40728e956c86ac5f50feea6dabb003f5b7e12b18b22782321e4bdfa6a4d07\": container with ID starting with 49d40728e956c86ac5f50feea6dabb003f5b7e12b18b22782321e4bdfa6a4d07 not found: ID does not exist" containerID="49d40728e956c86ac5f50feea6dabb003f5b7e12b18b22782321e4bdfa6a4d07" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.443182 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49d40728e956c86ac5f50feea6dabb003f5b7e12b18b22782321e4bdfa6a4d07"} err="failed to get container status \"49d40728e956c86ac5f50feea6dabb003f5b7e12b18b22782321e4bdfa6a4d07\": rpc error: code = NotFound desc = could not find container \"49d40728e956c86ac5f50feea6dabb003f5b7e12b18b22782321e4bdfa6a4d07\": container with ID starting with 49d40728e956c86ac5f50feea6dabb003f5b7e12b18b22782321e4bdfa6a4d07 not found: ID does not exist" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.443203 4972 scope.go:117] "RemoveContainer" containerID="ff4509a52935ca39544f656f7b2bbdfab72d26e1ceca8275ec6a319273e973ad" Nov 21 10:07:46 crc kubenswrapper[4972]: E1121 10:07:46.453108 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff4509a52935ca39544f656f7b2bbdfab72d26e1ceca8275ec6a319273e973ad\": container with ID starting with ff4509a52935ca39544f656f7b2bbdfab72d26e1ceca8275ec6a319273e973ad not found: ID does not exist" containerID="ff4509a52935ca39544f656f7b2bbdfab72d26e1ceca8275ec6a319273e973ad" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.453153 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff4509a52935ca39544f656f7b2bbdfab72d26e1ceca8275ec6a319273e973ad"} err="failed to get container status \"ff4509a52935ca39544f656f7b2bbdfab72d26e1ceca8275ec6a319273e973ad\": rpc error: code = NotFound desc = could not find container \"ff4509a52935ca39544f656f7b2bbdfab72d26e1ceca8275ec6a319273e973ad\": container with ID starting with ff4509a52935ca39544f656f7b2bbdfab72d26e1ceca8275ec6a319273e973ad not found: ID does not exist" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.453184 4972 scope.go:117] "RemoveContainer" 
containerID="8281402abc59d2d6389eba9427fb6df68e1ff2f3cf37736cf084b96b31b30e0f" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.490202 4972 scope.go:117] "RemoveContainer" containerID="a0ef5b5653ff065d56f37e3509be4061e6cfc1eda3f19880fd2fed960a808923" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.506325 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutrondbea-account-delete-9c96n" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.525472 4972 scope.go:117] "RemoveContainer" containerID="8281402abc59d2d6389eba9427fb6df68e1ff2f3cf37736cf084b96b31b30e0f" Nov 21 10:07:46 crc kubenswrapper[4972]: E1121 10:07:46.527332 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8281402abc59d2d6389eba9427fb6df68e1ff2f3cf37736cf084b96b31b30e0f\": container with ID starting with 8281402abc59d2d6389eba9427fb6df68e1ff2f3cf37736cf084b96b31b30e0f not found: ID does not exist" containerID="8281402abc59d2d6389eba9427fb6df68e1ff2f3cf37736cf084b96b31b30e0f" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.527362 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8281402abc59d2d6389eba9427fb6df68e1ff2f3cf37736cf084b96b31b30e0f"} err="failed to get container status \"8281402abc59d2d6389eba9427fb6df68e1ff2f3cf37736cf084b96b31b30e0f\": rpc error: code = NotFound desc = could not find container \"8281402abc59d2d6389eba9427fb6df68e1ff2f3cf37736cf084b96b31b30e0f\": container with ID starting with 8281402abc59d2d6389eba9427fb6df68e1ff2f3cf37736cf084b96b31b30e0f not found: ID does not exist" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.527382 4972 scope.go:117] "RemoveContainer" containerID="a0ef5b5653ff065d56f37e3509be4061e6cfc1eda3f19880fd2fed960a808923" Nov 21 10:07:46 crc kubenswrapper[4972]: E1121 10:07:46.527667 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0ef5b5653ff065d56f37e3509be4061e6cfc1eda3f19880fd2fed960a808923\": container with ID starting with a0ef5b5653ff065d56f37e3509be4061e6cfc1eda3f19880fd2fed960a808923 not found: ID does not exist" containerID="a0ef5b5653ff065d56f37e3509be4061e6cfc1eda3f19880fd2fed960a808923" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.527684 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0ef5b5653ff065d56f37e3509be4061e6cfc1eda3f19880fd2fed960a808923"} err="failed to get container status \"a0ef5b5653ff065d56f37e3509be4061e6cfc1eda3f19880fd2fed960a808923\": rpc error: code = NotFound desc = could not find container \"a0ef5b5653ff065d56f37e3509be4061e6cfc1eda3f19880fd2fed960a808923\": container with ID starting with a0ef5b5653ff065d56f37e3509be4061e6cfc1eda3f19880fd2fed960a808923 not found: ID does not exist" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.527696 4972 scope.go:117] "RemoveContainer" containerID="c170fcfc81ca59f5bc98bc8edc442c5c3a824cf4040a9ddb3b5479628d9471b5" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.531368 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd" (UID: "8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.539028 4972 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.539055 4972 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.554253 4972 scope.go:117] "RemoveContainer" containerID="3fe0c9bf4632a5a91bbedb92ac2a74a5be61932a64fbf9dec4c9fe6b9c892be9" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.639512 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97ccfb34-fe6c-4529-812a-af30eb178e8b-operator-scripts\") pod \"97ccfb34-fe6c-4529-812a-af30eb178e8b\" (UID: \"97ccfb34-fe6c-4529-812a-af30eb178e8b\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.639613 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5548s\" (UniqueName: \"kubernetes.io/projected/97ccfb34-fe6c-4529-812a-af30eb178e8b-kube-api-access-5548s\") pod \"97ccfb34-fe6c-4529-812a-af30eb178e8b\" (UID: \"97ccfb34-fe6c-4529-812a-af30eb178e8b\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.641219 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97ccfb34-fe6c-4529-812a-af30eb178e8b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97ccfb34-fe6c-4529-812a-af30eb178e8b" (UID: "97ccfb34-fe6c-4529-812a-af30eb178e8b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.655735 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ccfb34-fe6c-4529-812a-af30eb178e8b-kube-api-access-5548s" (OuterVolumeSpecName: "kube-api-access-5548s") pod "97ccfb34-fe6c-4529-812a-af30eb178e8b" (UID: "97ccfb34-fe6c-4529-812a-af30eb178e8b"). InnerVolumeSpecName "kube-api-access-5548s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.741465 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97ccfb34-fe6c-4529-812a-af30eb178e8b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.741779 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5548s\" (UniqueName: \"kubernetes.io/projected/97ccfb34-fe6c-4529-812a-af30eb178e8b-kube-api-access-5548s\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.837076 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell05330-account-delete-cxgw8" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.853797 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.858045 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican958e-account-delete-tfwsx" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.869027 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.870163 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement66d8-account-delete-947ct" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.881916 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi501f-account-delete-4746z" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.901256 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder04d8-account-delete-kvgwd" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.909149 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-78d4f89dc4-2qvzl"] Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.918753 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-78d4f89dc4-2qvzl"] Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.922921 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.925267 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.944335 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2ghp\" (UniqueName: \"kubernetes.io/projected/ffb786ba-2a1a-4124-9ef7-116e12402f5c-kube-api-access-r2ghp\") pod \"ffb786ba-2a1a-4124-9ef7-116e12402f5c\" (UID: \"ffb786ba-2a1a-4124-9ef7-116e12402f5c\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.944392 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvgqd\" (UniqueName: \"kubernetes.io/projected/d8334e5f-f6cb-4c49-91d6-5e414ecc53f0-kube-api-access-kvgqd\") pod \"d8334e5f-f6cb-4c49-91d6-5e414ecc53f0\" (UID: \"d8334e5f-f6cb-4c49-91d6-5e414ecc53f0\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.944507 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpzrq\" (UniqueName: \"kubernetes.io/projected/4e11aef3-0c96-44ed-8876-7e54115d181f-kube-api-access-wpzrq\") pod \"4e11aef3-0c96-44ed-8876-7e54115d181f\" (UID: \"4e11aef3-0c96-44ed-8876-7e54115d181f\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.944538 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e11aef3-0c96-44ed-8876-7e54115d181f-operator-scripts\") pod \"4e11aef3-0c96-44ed-8876-7e54115d181f\" (UID: \"4e11aef3-0c96-44ed-8876-7e54115d181f\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.944622 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8334e5f-f6cb-4c49-91d6-5e414ecc53f0-operator-scripts\") pod \"d8334e5f-f6cb-4c49-91d6-5e414ecc53f0\" (UID: \"d8334e5f-f6cb-4c49-91d6-5e414ecc53f0\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.944663 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffb786ba-2a1a-4124-9ef7-116e12402f5c-operator-scripts\") pod 
\"ffb786ba-2a1a-4124-9ef7-116e12402f5c\" (UID: \"ffb786ba-2a1a-4124-9ef7-116e12402f5c\") " Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.945415 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e11aef3-0c96-44ed-8876-7e54115d181f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e11aef3-0c96-44ed-8876-7e54115d181f" (UID: "4e11aef3-0c96-44ed-8876-7e54115d181f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.945504 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb786ba-2a1a-4124-9ef7-116e12402f5c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ffb786ba-2a1a-4124-9ef7-116e12402f5c" (UID: "ffb786ba-2a1a-4124-9ef7-116e12402f5c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.945581 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8334e5f-f6cb-4c49-91d6-5e414ecc53f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8334e5f-f6cb-4c49-91d6-5e414ecc53f0" (UID: "d8334e5f-f6cb-4c49-91d6-5e414ecc53f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.950349 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e11aef3-0c96-44ed-8876-7e54115d181f-kube-api-access-wpzrq" (OuterVolumeSpecName: "kube-api-access-wpzrq") pod "4e11aef3-0c96-44ed-8876-7e54115d181f" (UID: "4e11aef3-0c96-44ed-8876-7e54115d181f"). InnerVolumeSpecName "kube-api-access-wpzrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.951191 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb786ba-2a1a-4124-9ef7-116e12402f5c-kube-api-access-r2ghp" (OuterVolumeSpecName: "kube-api-access-r2ghp") pod "ffb786ba-2a1a-4124-9ef7-116e12402f5c" (UID: "ffb786ba-2a1a-4124-9ef7-116e12402f5c"). InnerVolumeSpecName "kube-api-access-r2ghp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:46 crc kubenswrapper[4972]: I1121 10:07:46.957900 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8334e5f-f6cb-4c49-91d6-5e414ecc53f0-kube-api-access-kvgqd" (OuterVolumeSpecName: "kube-api-access-kvgqd") pod "d8334e5f-f6cb-4c49-91d6-5e414ecc53f0" (UID: "d8334e5f-f6cb-4c49-91d6-5e414ecc53f0"). InnerVolumeSpecName "kube-api-access-kvgqd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.046354 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd442c75-9e94-4f54-81b6-68c19f4de9d8-operator-scripts\") pod \"dd442c75-9e94-4f54-81b6-68c19f4de9d8\" (UID: \"dd442c75-9e94-4f54-81b6-68c19f4de9d8\") " Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.046463 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl6fl\" (UniqueName: \"kubernetes.io/projected/dd442c75-9e94-4f54-81b6-68c19f4de9d8-kube-api-access-gl6fl\") pod \"dd442c75-9e94-4f54-81b6-68c19f4de9d8\" (UID: \"dd442c75-9e94-4f54-81b6-68c19f4de9d8\") " Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.046515 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf21c86e-7747-4dca-a870-352dfa214beb-operator-scripts\") pod \"bf21c86e-7747-4dca-a870-352dfa214beb\" (UID: \"bf21c86e-7747-4dca-a870-352dfa214beb\") " Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.046534 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chfz4\" (UniqueName: \"kubernetes.io/projected/bf21c86e-7747-4dca-a870-352dfa214beb-kube-api-access-chfz4\") pod \"bf21c86e-7747-4dca-a870-352dfa214beb\" (UID: \"bf21c86e-7747-4dca-a870-352dfa214beb\") " Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.046875 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8334e5f-f6cb-4c49-91d6-5e414ecc53f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.046886 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffb786ba-2a1a-4124-9ef7-116e12402f5c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.046895 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2ghp\" (UniqueName: \"kubernetes.io/projected/ffb786ba-2a1a-4124-9ef7-116e12402f5c-kube-api-access-r2ghp\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.046904 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvgqd\" (UniqueName: \"kubernetes.io/projected/d8334e5f-f6cb-4c49-91d6-5e414ecc53f0-kube-api-access-kvgqd\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.046914 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpzrq\" (UniqueName: \"kubernetes.io/projected/4e11aef3-0c96-44ed-8876-7e54115d181f-kube-api-access-wpzrq\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.046923 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e11aef3-0c96-44ed-8876-7e54115d181f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.047925 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd442c75-9e94-4f54-81b6-68c19f4de9d8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd442c75-9e94-4f54-81b6-68c19f4de9d8" (UID: "dd442c75-9e94-4f54-81b6-68c19f4de9d8"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.047956 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf21c86e-7747-4dca-a870-352dfa214beb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf21c86e-7747-4dca-a870-352dfa214beb" (UID: "bf21c86e-7747-4dca-a870-352dfa214beb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.050952 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf21c86e-7747-4dca-a870-352dfa214beb-kube-api-access-chfz4" (OuterVolumeSpecName: "kube-api-access-chfz4") pod "bf21c86e-7747-4dca-a870-352dfa214beb" (UID: "bf21c86e-7747-4dca-a870-352dfa214beb"). InnerVolumeSpecName "kube-api-access-chfz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.054597 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd442c75-9e94-4f54-81b6-68c19f4de9d8-kube-api-access-gl6fl" (OuterVolumeSpecName: "kube-api-access-gl6fl") pod "dd442c75-9e94-4f54-81b6-68c19f4de9d8" (UID: "dd442c75-9e94-4f54-81b6-68c19f4de9d8"). InnerVolumeSpecName "kube-api-access-gl6fl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.076354 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-669568d65b-4t6gp" event={"ID":"272d9c39-ab5b-4fc1-8dbe-209fbe33e293","Type":"ContainerDied","Data":"e24aaf7f9163025a621092e32257309ef777d6244a6377108ad0b3bf28059de4"} Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.076444 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-669568d65b-4t6gp" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.092401 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutrondbea-account-delete-9c96n" event={"ID":"97ccfb34-fe6c-4529-812a-af30eb178e8b","Type":"ContainerDied","Data":"a4b91b9a8dff31e90dbf88057e35dc59bd92131bc79c918e7fdff4073b1d4ad1"} Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.092438 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4b91b9a8dff31e90dbf88057e35dc59bd92131bc79c918e7fdff4073b1d4ad1" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.092515 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutrondbea-account-delete-9c96n" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.096319 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell05330-account-delete-cxgw8" event={"ID":"4e11aef3-0c96-44ed-8876-7e54115d181f","Type":"ContainerDied","Data":"4b985586c5b90053fad24e1a7d2f83cc16f8cf81056613b10f2e37e45f6b69bb"} Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.096359 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b985586c5b90053fad24e1a7d2f83cc16f8cf81056613b10f2e37e45f6b69bb" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.096413 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell05330-account-delete-cxgw8" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.114279 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi501f-account-delete-4746z" event={"ID":"dd442c75-9e94-4f54-81b6-68c19f4de9d8","Type":"ContainerDied","Data":"671752f103ef4d1c64fe88269b466e37db7bc52d845fcf07142422a63d6c348d"} Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.114342 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="671752f103ef4d1c64fe88269b466e37db7bc52d845fcf07142422a63d6c348d" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.114430 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi501f-account-delete-4746z" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.116161 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder04d8-account-delete-kvgwd" event={"ID":"bf21c86e-7747-4dca-a870-352dfa214beb","Type":"ContainerDied","Data":"dc7b7f17386e24bb82f37a67e5306778129c25da39931199e4781212f6376366"} Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.116375 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc7b7f17386e24bb82f37a67e5306778129c25da39931199e4781212f6376366" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.116772 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder04d8-account-delete-kvgwd" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.117933 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-669568d65b-4t6gp"] Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.125978 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-669568d65b-4t6gp"] Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.127269 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85a9950-7e9d-4e16-9b35-d6912bacadf9","Type":"ContainerDied","Data":"656cfb33de2f54aa731c3f5e8cc9e6f629f5b67eac120e2fd6a6b15f4a722349"} Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.127317 4972 scope.go:117] "RemoveContainer" containerID="5408c1bb0f0589423c4f14602bd190494f5ead1bdc142f1e32e478d7ceca9219" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.127423 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.133370 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutrondbea-account-delete-9c96n"] Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.138534 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutrondbea-account-delete-9c96n"] Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.139584 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement66d8-account-delete-947ct" event={"ID":"ffb786ba-2a1a-4124-9ef7-116e12402f5c","Type":"ContainerDied","Data":"89c470b27e0cc5b41977a92000bfd879ba71fb803bedd463313e38c96aae6946"} Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.139610 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89c470b27e0cc5b41977a92000bfd879ba71fb803bedd463313e38c96aae6946" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.139624 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement66d8-account-delete-947ct" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.148233 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf21c86e-7747-4dca-a870-352dfa214beb-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.148262 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chfz4\" (UniqueName: \"kubernetes.io/projected/bf21c86e-7747-4dca-a870-352dfa214beb-kube-api-access-chfz4\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.148273 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd442c75-9e94-4f54-81b6-68c19f4de9d8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.148286 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl6fl\" (UniqueName: \"kubernetes.io/projected/dd442c75-9e94-4f54-81b6-68c19f4de9d8-kube-api-access-gl6fl\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.153428 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57f61d22-4b79-4f80-b7dc-0f5bea4b506d","Type":"ContainerDied","Data":"7370cbe19069066758df478b0ed4ac37f029e7f9d8cba9d0594d6e1b3147fd7d"} Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.153520 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.153957 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell05330-account-delete-cxgw8"] Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.167102 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell05330-account-delete-cxgw8"] Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.170872 4972 scope.go:117] "RemoveContainer" containerID="286335ef286bcd6ad23d79c21ed873d352f5bcd061facf6a215c8094c81d6976" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.172641 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.172955 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican958e-account-delete-tfwsx" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.173618 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican958e-account-delete-tfwsx" event={"ID":"d8334e5f-f6cb-4c49-91d6-5e414ecc53f0","Type":"ContainerDied","Data":"8bfb615f8412efc40930dc9b12e8aaaf48698b3bb0e725b1a96524dffe302dda"} Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.173646 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bfb615f8412efc40930dc9b12e8aaaf48698b3bb0e725b1a96524dffe302dda" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.207531 4972 scope.go:117] "RemoveContainer" containerID="0de233b9671d8c6438cab670dcbf6fed2c7c33040cc7b1af5fe0bdb4e3dd967b" Nov 21 10:07:47 crc kubenswrapper[4972]: E1121 10:07:47.249791 4972 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 21 10:07:47 crc kubenswrapper[4972]: E1121 10:07:47.249902 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-config-data podName:2bc44abc-7710-432b-b503-fd54e3afeede nodeName:}" failed. No retries permitted until 2025-11-21 10:07:55.24987933 +0000 UTC m=+1620.359021838 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-config-data") pod "rabbitmq-cell1-server-0" (UID: "2bc44abc-7710-432b-b503-fd54e3afeede") : configmap "rabbitmq-cell1-config-data" not found Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.290365 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder04d8-account-delete-kvgwd"] Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.294986 4972 scope.go:117] "RemoveContainer" containerID="102434648feec12743eab24b7f83c96674ffe057bcf52540be9617ee45d178a0" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.300112 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder04d8-account-delete-kvgwd"] Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.312264 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi501f-account-delete-4746z"] Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.329455 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novaapi501f-account-delete-4746z"] Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.330201 4972 scope.go:117] "RemoveContainer" containerID="9b2a5c600705559dcf3e1539bb652e7d2fca4320b9c98a74794471b4827bdfb0" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.337566 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement66d8-account-delete-947ct"] Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.343281 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement66d8-account-delete-947ct"] Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.347782 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.351777 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.356589 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.361872 4972 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-api-0"] Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.365944 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican958e-account-delete-tfwsx"] Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.380752 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican958e-account-delete-tfwsx"] Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.391752 4972 scope.go:117] "RemoveContainer" containerID="278382caff383ae2485ecd6e804ee41e63dbe81738ffa66e0a8508b7e3a9f20e" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.401481 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.410989 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 10:07:47 crc kubenswrapper[4972]: E1121 10:07:47.425213 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" containerID="463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 21 10:07:47 crc kubenswrapper[4972]: E1121 10:07:47.432147 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" containerID="463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 21 10:07:47 crc kubenswrapper[4972]: E1121 10:07:47.433276 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" containerID="463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 21 10:07:47 crc kubenswrapper[4972]: E1121 10:07:47.433336 4972 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-4z7b5" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovsdb-server" Nov 21 10:07:47 crc kubenswrapper[4972]: E1121 10:07:47.434682 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="528382eac18bd0308541931e46106ebb14493b19c4b625ee808c7a117ef52180" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 10:07:47 crc kubenswrapper[4972]: E1121 10:07:47.435637 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="528382eac18bd0308541931e46106ebb14493b19c4b625ee808c7a117ef52180" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 10:07:47 crc 
kubenswrapper[4972]: E1121 10:07:47.439089 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="528382eac18bd0308541931e46106ebb14493b19c4b625ee808c7a117ef52180" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 10:07:47 crc kubenswrapper[4972]: E1121 10:07:47.439201 4972 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3c3ae47e-fcf5-4397-a2a4-8e847e542d75" containerName="nova-scheduler-scheduler" Nov 21 10:07:47 crc kubenswrapper[4972]: E1121 10:07:47.444660 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 21 10:07:47 crc kubenswrapper[4972]: E1121 10:07:47.445878 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 21 10:07:47 crc kubenswrapper[4972]: E1121 10:07:47.446812 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 21 10:07:47 crc kubenswrapper[4972]: E1121 10:07:47.446892 4972 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-4z7b5" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovs-vswitchd" Nov 21 10:07:47 crc kubenswrapper[4972]: E1121 10:07:47.455311 4972 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 21 10:07:47 crc kubenswrapper[4972]: E1121 10:07:47.455392 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-config-data podName:392b5094-f8ef-47b8-8dc5-9e1d2dbef612 nodeName:}" failed. No retries permitted until 2025-11-21 10:07:55.455371108 +0000 UTC m=+1620.564513606 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-config-data") pod "rabbitmq-server-0" (UID: "392b5094-f8ef-47b8-8dc5-9e1d2dbef612") : configmap "rabbitmq-config-data" not found Nov 21 10:07:47 crc kubenswrapper[4972]: E1121 10:07:47.548896 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c8b3c0c3518327c81d51434735d0ba0511f266e78981d13077248c64dbb2a4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 21 10:07:47 crc kubenswrapper[4972]: E1121 10:07:47.550810 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c8b3c0c3518327c81d51434735d0ba0511f266e78981d13077248c64dbb2a4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 21 10:07:47 crc kubenswrapper[4972]: E1121 10:07:47.558060 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c8b3c0c3518327c81d51434735d0ba0511f266e78981d13077248c64dbb2a4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 21 10:07:47 crc kubenswrapper[4972]: E1121 10:07:47.558135 4972 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8" containerName="nova-cell1-conductor-conductor" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.693569 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_cf3edebd-74ab-4b7d-8706-2eda69d91aea/ovn-northd/0.log" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.694357 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.755927 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5q7hj" podUID="9ab92bde-9b45-49ca-a6e9-43c8921b3002" containerName="ovn-controller" probeResult="failure" output="command timed out" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.761906 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k5pg\" (UniqueName: \"kubernetes.io/projected/cf3edebd-74ab-4b7d-8706-2eda69d91aea-kube-api-access-7k5pg\") pod \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.762143 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf3edebd-74ab-4b7d-8706-2eda69d91aea-config\") pod \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.762244 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf3edebd-74ab-4b7d-8706-2eda69d91aea-metrics-certs-tls-certs\") pod \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.762323 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf3edebd-74ab-4b7d-8706-2eda69d91aea-combined-ca-bundle\") pod \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.762381 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf3edebd-74ab-4b7d-8706-2eda69d91aea-scripts\") pod \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.762444 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cf3edebd-74ab-4b7d-8706-2eda69d91aea-ovn-rundir\") pod \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.762479 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf3edebd-74ab-4b7d-8706-2eda69d91aea-ovn-northd-tls-certs\") pod \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\" (UID: \"cf3edebd-74ab-4b7d-8706-2eda69d91aea\") " Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.763087 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf3edebd-74ab-4b7d-8706-2eda69d91aea-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "cf3edebd-74ab-4b7d-8706-2eda69d91aea" (UID: "cf3edebd-74ab-4b7d-8706-2eda69d91aea"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.762965 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf3edebd-74ab-4b7d-8706-2eda69d91aea-config" (OuterVolumeSpecName: "config") pod "cf3edebd-74ab-4b7d-8706-2eda69d91aea" (UID: "cf3edebd-74ab-4b7d-8706-2eda69d91aea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.763129 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf3edebd-74ab-4b7d-8706-2eda69d91aea-scripts" (OuterVolumeSpecName: "scripts") pod "cf3edebd-74ab-4b7d-8706-2eda69d91aea" (UID: "cf3edebd-74ab-4b7d-8706-2eda69d91aea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.772506 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="272d9c39-ab5b-4fc1-8dbe-209fbe33e293" path="/var/lib/kubelet/pods/272d9c39-ab5b-4fc1-8dbe-209fbe33e293/volumes" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.773407 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="302b9e1c-affd-4f2f-bacd-98f40dedeb91" path="/var/lib/kubelet/pods/302b9e1c-affd-4f2f-bacd-98f40dedeb91/volumes" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.774400 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="481cc370-a05a-4516-99f2-f94a0056a70e" path="/var/lib/kubelet/pods/481cc370-a05a-4516-99f2-f94a0056a70e/volumes" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.775304 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e11aef3-0c96-44ed-8876-7e54115d181f" path="/var/lib/kubelet/pods/4e11aef3-0c96-44ed-8876-7e54115d181f/volumes" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.775761 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57f61d22-4b79-4f80-b7dc-0f5bea4b506d" path="/var/lib/kubelet/pods/57f61d22-4b79-4f80-b7dc-0f5bea4b506d/volumes" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.776359 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eb9b3f4-6710-4818-b94c-494958fe31ad" path="/var/lib/kubelet/pods/5eb9b3f4-6710-4818-b94c-494958fe31ad/volumes" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.777463 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71ed1f19-43e6-4245-82c1-f51b5f18d1e6" path="/var/lib/kubelet/pods/71ed1f19-43e6-4245-82c1-f51b5f18d1e6/volumes" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.778124 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8" path="/var/lib/kubelet/pods/78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8/volumes" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.779432 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd" path="/var/lib/kubelet/pods/8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd/volumes" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.780002 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf3edebd-74ab-4b7d-8706-2eda69d91aea-kube-api-access-7k5pg" (OuterVolumeSpecName: "kube-api-access-7k5pg") pod "cf3edebd-74ab-4b7d-8706-2eda69d91aea" (UID: "cf3edebd-74ab-4b7d-8706-2eda69d91aea"). InnerVolumeSpecName "kube-api-access-7k5pg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.780372 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97ccfb34-fe6c-4529-812a-af30eb178e8b" path="/var/lib/kubelet/pods/97ccfb34-fe6c-4529-812a-af30eb178e8b/volumes" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.780921 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2069a31-382b-4fc4-acee-cf202be1de1e" path="/var/lib/kubelet/pods/b2069a31-382b-4fc4-acee-cf202be1de1e/volumes" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.795210 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf21c86e-7747-4dca-a870-352dfa214beb" path="/var/lib/kubelet/pods/bf21c86e-7747-4dca-a870-352dfa214beb/volumes" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.796782 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8334e5f-f6cb-4c49-91d6-5e414ecc53f0" path="/var/lib/kubelet/pods/d8334e5f-f6cb-4c49-91d6-5e414ecc53f0/volumes" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.797429 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5q7hj" podUID="9ab92bde-9b45-49ca-a6e9-43c8921b3002" containerName="ovn-controller" probeResult="failure" output=< Nov 21 10:07:47 crc kubenswrapper[4972]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Nov 21 10:07:47 crc kubenswrapper[4972]: > Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.801389 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc57ffef-2527-4b16-b281-9139b6a0f1a1" path="/var/lib/kubelet/pods/dc57ffef-2527-4b16-b281-9139b6a0f1a1/volumes" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.806712 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd442c75-9e94-4f54-81b6-68c19f4de9d8" path="/var/lib/kubelet/pods/dd442c75-9e94-4f54-81b6-68c19f4de9d8/volumes" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.813011 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1224d0f-d488-49e6-b6dc-12a188b43a43" path="/var/lib/kubelet/pods/f1224d0f-d488-49e6-b6dc-12a188b43a43/volumes" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.814568 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85a9950-7e9d-4e16-9b35-d6912bacadf9" path="/var/lib/kubelet/pods/f85a9950-7e9d-4e16-9b35-d6912bacadf9/volumes" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.817521 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb786ba-2a1a-4124-9ef7-116e12402f5c" path="/var/lib/kubelet/pods/ffb786ba-2a1a-4124-9ef7-116e12402f5c/volumes" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.830115 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf3edebd-74ab-4b7d-8706-2eda69d91aea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf3edebd-74ab-4b7d-8706-2eda69d91aea" (UID: "cf3edebd-74ab-4b7d-8706-2eda69d91aea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.831543 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf3edebd-74ab-4b7d-8706-2eda69d91aea-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "cf3edebd-74ab-4b7d-8706-2eda69d91aea" (UID: "cf3edebd-74ab-4b7d-8706-2eda69d91aea"). 
InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.856038 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf3edebd-74ab-4b7d-8706-2eda69d91aea-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "cf3edebd-74ab-4b7d-8706-2eda69d91aea" (UID: "cf3edebd-74ab-4b7d-8706-2eda69d91aea"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.864266 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k5pg\" (UniqueName: \"kubernetes.io/projected/cf3edebd-74ab-4b7d-8706-2eda69d91aea-kube-api-access-7k5pg\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.864309 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf3edebd-74ab-4b7d-8706-2eda69d91aea-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.864324 4972 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf3edebd-74ab-4b7d-8706-2eda69d91aea-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.864337 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf3edebd-74ab-4b7d-8706-2eda69d91aea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.864351 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf3edebd-74ab-4b7d-8706-2eda69d91aea-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.864363 4972 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cf3edebd-74ab-4b7d-8706-2eda69d91aea-ovn-rundir\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.864375 4972 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf3edebd-74ab-4b7d-8706-2eda69d91aea-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:47 crc kubenswrapper[4972]: I1121 10:07:47.948528 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.037863 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.067065 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-combined-ca-bundle\") pod \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.067152 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-galera-tls-certs\") pod \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.067194 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-kolla-config\") pod \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.067235 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-operator-scripts\") pod \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.067264 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-config-data-default\") pod \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.067328 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.067357 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-config-data-generated\") pod \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.067401 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9558f\" (UniqueName: \"kubernetes.io/projected/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-kube-api-access-9558f\") pod \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\" (UID: \"8027f46e-1fe2-46ad-9226-11b2cc3f8da6\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.068086 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "8027f46e-1fe2-46ad-9226-11b2cc3f8da6" (UID: "8027f46e-1fe2-46ad-9226-11b2cc3f8da6"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.068263 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "8027f46e-1fe2-46ad-9226-11b2cc3f8da6" (UID: "8027f46e-1fe2-46ad-9226-11b2cc3f8da6"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.068276 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "8027f46e-1fe2-46ad-9226-11b2cc3f8da6" (UID: "8027f46e-1fe2-46ad-9226-11b2cc3f8da6"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.068417 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8027f46e-1fe2-46ad-9226-11b2cc3f8da6" (UID: "8027f46e-1fe2-46ad-9226-11b2cc3f8da6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.071218 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-kube-api-access-9558f" (OuterVolumeSpecName: "kube-api-access-9558f") pod "8027f46e-1fe2-46ad-9226-11b2cc3f8da6" (UID: "8027f46e-1fe2-46ad-9226-11b2cc3f8da6"). InnerVolumeSpecName "kube-api-access-9558f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.089945 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "mysql-db") pod "8027f46e-1fe2-46ad-9226-11b2cc3f8da6" (UID: "8027f46e-1fe2-46ad-9226-11b2cc3f8da6"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.091106 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8027f46e-1fe2-46ad-9226-11b2cc3f8da6" (UID: "8027f46e-1fe2-46ad-9226-11b2cc3f8da6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.107525 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "8027f46e-1fe2-46ad-9226-11b2cc3f8da6" (UID: "8027f46e-1fe2-46ad-9226-11b2cc3f8da6"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.169460 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-internal-tls-certs\") pod \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.169756 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltvlm\" (UniqueName: \"kubernetes.io/projected/fe028de3-cf0f-4ab0-ab52-0898bd408c89-kube-api-access-ltvlm\") pod \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.169872 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-config-data\") pod \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.169984 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-combined-ca-bundle\") pod \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.170075 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-credential-keys\") pod \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.170141 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-public-tls-certs\") pod \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.170221 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-fernet-keys\") pod \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.170299 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-scripts\") pod \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\" (UID: \"fe028de3-cf0f-4ab0-ab52-0898bd408c89\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.170745 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.170805 4972 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-config-data-default\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.170889 4972 reconciler_common.go:286] "operationExecutor.UnmountDevice 
started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.170944 4972 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-config-data-generated\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.170999 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9558f\" (UniqueName: \"kubernetes.io/projected/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-kube-api-access-9558f\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.171058 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.171110 4972 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.171163 4972 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8027f46e-1fe2-46ad-9226-11b2cc3f8da6-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.174729 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fe028de3-cf0f-4ab0-ab52-0898bd408c89" (UID: "fe028de3-cf0f-4ab0-ab52-0898bd408c89"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.180042 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe028de3-cf0f-4ab0-ab52-0898bd408c89-kube-api-access-ltvlm" (OuterVolumeSpecName: "kube-api-access-ltvlm") pod "fe028de3-cf0f-4ab0-ab52-0898bd408c89" (UID: "fe028de3-cf0f-4ab0-ab52-0898bd408c89"). InnerVolumeSpecName "kube-api-access-ltvlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.183946 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fe028de3-cf0f-4ab0-ab52-0898bd408c89" (UID: "fe028de3-cf0f-4ab0-ab52-0898bd408c89"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.186501 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_cf3edebd-74ab-4b7d-8706-2eda69d91aea/ovn-northd/0.log" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.187056 4972 generic.go:334] "Generic (PLEG): container finished" podID="cf3edebd-74ab-4b7d-8706-2eda69d91aea" containerID="2414b220c5f009ec8c602f60f3e9160067fa81228e1aef74c65b742822eda70e" exitCode=139 Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.187191 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.187304 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cf3edebd-74ab-4b7d-8706-2eda69d91aea","Type":"ContainerDied","Data":"2414b220c5f009ec8c602f60f3e9160067fa81228e1aef74c65b742822eda70e"} Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.187346 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cf3edebd-74ab-4b7d-8706-2eda69d91aea","Type":"ContainerDied","Data":"3c918d0cee85f88f64195c902f07128cfef54a4b6d436a5109a80299cb794599"} Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.187366 4972 scope.go:117] "RemoveContainer" containerID="2414b220c5f009ec8c602f60f3e9160067fa81228e1aef74c65b742822eda70e" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.189320 4972 generic.go:334] "Generic (PLEG): container finished" podID="2bc44abc-7710-432b-b503-fd54e3afeede" containerID="c8fbc9ceb2b6148e29eeae60a7cccd8704bb5b0088efc4a03700f71500ec7ef2" exitCode=0 Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.189381 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2bc44abc-7710-432b-b503-fd54e3afeede","Type":"ContainerDied","Data":"c8fbc9ceb2b6148e29eeae60a7cccd8704bb5b0088efc4a03700f71500ec7ef2"} Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.191876 4972 generic.go:334] "Generic (PLEG): container finished" podID="fe028de3-cf0f-4ab0-ab52-0898bd408c89" containerID="fd986699756945449cded494e1de01714d7ff00650c9222a9828300ebf637188" exitCode=0 Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.191915 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db44576f7-2qgwb" event={"ID":"fe028de3-cf0f-4ab0-ab52-0898bd408c89","Type":"ContainerDied","Data":"fd986699756945449cded494e1de01714d7ff00650c9222a9828300ebf637188"} Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.192001 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db44576f7-2qgwb" event={"ID":"fe028de3-cf0f-4ab0-ab52-0898bd408c89","Type":"ContainerDied","Data":"d5e727113c90bc7f1cbb3731746ce893b8da96ada655758dfad1ccc50e6f8bf0"} Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.192055 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db44576f7-2qgwb" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.192503 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-scripts" (OuterVolumeSpecName: "scripts") pod "fe028de3-cf0f-4ab0-ab52-0898bd408c89" (UID: "fe028de3-cf0f-4ab0-ab52-0898bd408c89"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.195005 4972 generic.go:334] "Generic (PLEG): container finished" podID="8027f46e-1fe2-46ad-9226-11b2cc3f8da6" containerID="2184a31d34063d8ee8c51f71676340442da843ea99dcf47ca9042791a8af2bae" exitCode=0 Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.195045 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8027f46e-1fe2-46ad-9226-11b2cc3f8da6","Type":"ContainerDied","Data":"2184a31d34063d8ee8c51f71676340442da843ea99dcf47ca9042791a8af2bae"} Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.195061 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8027f46e-1fe2-46ad-9226-11b2cc3f8da6","Type":"ContainerDied","Data":"7d69305495d9d3b2d9a52587a6ae09d35762d7a68f360075751ea5685f4b08ca"} Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.195108 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.203742 4972 generic.go:334] "Generic (PLEG): container finished" podID="392b5094-f8ef-47b8-8dc5-9e1d2dbef612" containerID="40fd57bb0048a573eb9c5e1aa41727272375095e934fe8e65459e974a94e41af" exitCode=0 Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.203787 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"392b5094-f8ef-47b8-8dc5-9e1d2dbef612","Type":"ContainerDied","Data":"40fd57bb0048a573eb9c5e1aa41727272375095e934fe8e65459e974a94e41af"} Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.223009 4972 scope.go:117] "RemoveContainer" containerID="8afa005bf75971cd8c3eab6a73627f83a30f054d36b834a57873a7d31d1a2e37" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.233376 4972 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.243551 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.250402 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.251601 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.254928 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe028de3-cf0f-4ab0-ab52-0898bd408c89" (UID: "fe028de3-cf0f-4ab0-ab52-0898bd408c89"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.260415 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.261774 4972 scope.go:117] "RemoveContainer" containerID="2414b220c5f009ec8c602f60f3e9160067fa81228e1aef74c65b742822eda70e" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.265320 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.268377 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fe028de3-cf0f-4ab0-ab52-0898bd408c89" (UID: "fe028de3-cf0f-4ab0-ab52-0898bd408c89"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.268516 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-config-data" (OuterVolumeSpecName: "config-data") pod "fe028de3-cf0f-4ab0-ab52-0898bd408c89" (UID: "fe028de3-cf0f-4ab0-ab52-0898bd408c89"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.272366 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.272383 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.272422 4972 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.272431 4972 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.272440 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.272448 4972 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.272456 4972 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.272465 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltvlm\" (UniqueName: \"kubernetes.io/projected/fe028de3-cf0f-4ab0-ab52-0898bd408c89-kube-api-access-ltvlm\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc 
kubenswrapper[4972]: E1121 10:07:48.278441 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2414b220c5f009ec8c602f60f3e9160067fa81228e1aef74c65b742822eda70e\": container with ID starting with 2414b220c5f009ec8c602f60f3e9160067fa81228e1aef74c65b742822eda70e not found: ID does not exist" containerID="2414b220c5f009ec8c602f60f3e9160067fa81228e1aef74c65b742822eda70e" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.278490 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2414b220c5f009ec8c602f60f3e9160067fa81228e1aef74c65b742822eda70e"} err="failed to get container status \"2414b220c5f009ec8c602f60f3e9160067fa81228e1aef74c65b742822eda70e\": rpc error: code = NotFound desc = could not find container \"2414b220c5f009ec8c602f60f3e9160067fa81228e1aef74c65b742822eda70e\": container with ID starting with 2414b220c5f009ec8c602f60f3e9160067fa81228e1aef74c65b742822eda70e not found: ID does not exist" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.278520 4972 scope.go:117] "RemoveContainer" containerID="8afa005bf75971cd8c3eab6a73627f83a30f054d36b834a57873a7d31d1a2e37" Nov 21 10:07:48 crc kubenswrapper[4972]: E1121 10:07:48.279137 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8afa005bf75971cd8c3eab6a73627f83a30f054d36b834a57873a7d31d1a2e37\": container with ID starting with 8afa005bf75971cd8c3eab6a73627f83a30f054d36b834a57873a7d31d1a2e37 not found: ID does not exist" containerID="8afa005bf75971cd8c3eab6a73627f83a30f054d36b834a57873a7d31d1a2e37" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.279189 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8afa005bf75971cd8c3eab6a73627f83a30f054d36b834a57873a7d31d1a2e37"} err="failed to get container status \"8afa005bf75971cd8c3eab6a73627f83a30f054d36b834a57873a7d31d1a2e37\": rpc error: code = NotFound desc = could not find container \"8afa005bf75971cd8c3eab6a73627f83a30f054d36b834a57873a7d31d1a2e37\": container with ID starting with 8afa005bf75971cd8c3eab6a73627f83a30f054d36b834a57873a7d31d1a2e37 not found: ID does not exist" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.279219 4972 scope.go:117] "RemoveContainer" containerID="fd986699756945449cded494e1de01714d7ff00650c9222a9828300ebf637188" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.300872 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fe028de3-cf0f-4ab0-ab52-0898bd408c89" (UID: "fe028de3-cf0f-4ab0-ab52-0898bd408c89"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.316053 4972 scope.go:117] "RemoveContainer" containerID="fd986699756945449cded494e1de01714d7ff00650c9222a9828300ebf637188" Nov 21 10:07:48 crc kubenswrapper[4972]: E1121 10:07:48.316491 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd986699756945449cded494e1de01714d7ff00650c9222a9828300ebf637188\": container with ID starting with fd986699756945449cded494e1de01714d7ff00650c9222a9828300ebf637188 not found: ID does not exist" containerID="fd986699756945449cded494e1de01714d7ff00650c9222a9828300ebf637188" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.316520 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd986699756945449cded494e1de01714d7ff00650c9222a9828300ebf637188"} err="failed to get container status \"fd986699756945449cded494e1de01714d7ff00650c9222a9828300ebf637188\": rpc error: code = NotFound desc = could not find container \"fd986699756945449cded494e1de01714d7ff00650c9222a9828300ebf637188\": container with ID starting with fd986699756945449cded494e1de01714d7ff00650c9222a9828300ebf637188 not found: ID does not exist" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.316540 4972 scope.go:117] "RemoveContainer" containerID="2184a31d34063d8ee8c51f71676340442da843ea99dcf47ca9042791a8af2bae" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.339966 4972 scope.go:117] "RemoveContainer" containerID="711835bfec997cc5c4cd5bb8aa782593a04256cba1b1b130be09cf0a32345a38" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.353567 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.362120 4972 scope.go:117] "RemoveContainer" containerID="2184a31d34063d8ee8c51f71676340442da843ea99dcf47ca9042791a8af2bae" Nov 21 10:07:48 crc kubenswrapper[4972]: E1121 10:07:48.362412 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2184a31d34063d8ee8c51f71676340442da843ea99dcf47ca9042791a8af2bae\": container with ID starting with 2184a31d34063d8ee8c51f71676340442da843ea99dcf47ca9042791a8af2bae not found: ID does not exist" containerID="2184a31d34063d8ee8c51f71676340442da843ea99dcf47ca9042791a8af2bae" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.362454 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2184a31d34063d8ee8c51f71676340442da843ea99dcf47ca9042791a8af2bae"} err="failed to get container status \"2184a31d34063d8ee8c51f71676340442da843ea99dcf47ca9042791a8af2bae\": rpc error: code = NotFound desc = could not find container \"2184a31d34063d8ee8c51f71676340442da843ea99dcf47ca9042791a8af2bae\": container with ID starting with 2184a31d34063d8ee8c51f71676340442da843ea99dcf47ca9042791a8af2bae not found: ID does not exist" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.362482 4972 scope.go:117] "RemoveContainer" containerID="711835bfec997cc5c4cd5bb8aa782593a04256cba1b1b130be09cf0a32345a38" Nov 21 10:07:48 crc kubenswrapper[4972]: E1121 10:07:48.362766 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"711835bfec997cc5c4cd5bb8aa782593a04256cba1b1b130be09cf0a32345a38\": container with ID starting with 
711835bfec997cc5c4cd5bb8aa782593a04256cba1b1b130be09cf0a32345a38 not found: ID does not exist" containerID="711835bfec997cc5c4cd5bb8aa782593a04256cba1b1b130be09cf0a32345a38" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.362787 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"711835bfec997cc5c4cd5bb8aa782593a04256cba1b1b130be09cf0a32345a38"} err="failed to get container status \"711835bfec997cc5c4cd5bb8aa782593a04256cba1b1b130be09cf0a32345a38\": rpc error: code = NotFound desc = could not find container \"711835bfec997cc5c4cd5bb8aa782593a04256cba1b1b130be09cf0a32345a38\": container with ID starting with 711835bfec997cc5c4cd5bb8aa782593a04256cba1b1b130be09cf0a32345a38 not found: ID does not exist" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.377716 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-erlang-cookie\") pod \"2bc44abc-7710-432b-b503-fd54e3afeede\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.379215 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-plugins\") pod \"2bc44abc-7710-432b-b503-fd54e3afeede\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.379280 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-config-data\") pod \"2bc44abc-7710-432b-b503-fd54e3afeede\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.379336 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2bc44abc-7710-432b-b503-fd54e3afeede-erlang-cookie-secret\") pod \"2bc44abc-7710-432b-b503-fd54e3afeede\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.379357 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2bc44abc-7710-432b-b503-fd54e3afeede-pod-info\") pod \"2bc44abc-7710-432b-b503-fd54e3afeede\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.379379 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"2bc44abc-7710-432b-b503-fd54e3afeede\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.379422 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-server-conf\") pod \"2bc44abc-7710-432b-b503-fd54e3afeede\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.379462 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2bc44abc-7710-432b-b503-fd54e3afeede" (UID: 
"2bc44abc-7710-432b-b503-fd54e3afeede"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.379493 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-plugins-conf\") pod \"2bc44abc-7710-432b-b503-fd54e3afeede\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.379515 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-confd\") pod \"2bc44abc-7710-432b-b503-fd54e3afeede\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.379543 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-tls\") pod \"2bc44abc-7710-432b-b503-fd54e3afeede\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.379602 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltknk\" (UniqueName: \"kubernetes.io/projected/2bc44abc-7710-432b-b503-fd54e3afeede-kube-api-access-ltknk\") pod \"2bc44abc-7710-432b-b503-fd54e3afeede\" (UID: \"2bc44abc-7710-432b-b503-fd54e3afeede\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.380088 4972 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe028de3-cf0f-4ab0-ab52-0898bd408c89-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.380099 4972 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.380730 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2bc44abc-7710-432b-b503-fd54e3afeede" (UID: "2bc44abc-7710-432b-b503-fd54e3afeede"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.382111 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2bc44abc-7710-432b-b503-fd54e3afeede" (UID: "2bc44abc-7710-432b-b503-fd54e3afeede"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.399427 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "2bc44abc-7710-432b-b503-fd54e3afeede" (UID: "2bc44abc-7710-432b-b503-fd54e3afeede"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.399439 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2bc44abc-7710-432b-b503-fd54e3afeede-pod-info" (OuterVolumeSpecName: "pod-info") pod "2bc44abc-7710-432b-b503-fd54e3afeede" (UID: "2bc44abc-7710-432b-b503-fd54e3afeede"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.399452 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bc44abc-7710-432b-b503-fd54e3afeede-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2bc44abc-7710-432b-b503-fd54e3afeede" (UID: "2bc44abc-7710-432b-b503-fd54e3afeede"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.399486 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2bc44abc-7710-432b-b503-fd54e3afeede" (UID: "2bc44abc-7710-432b-b503-fd54e3afeede"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.399520 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bc44abc-7710-432b-b503-fd54e3afeede-kube-api-access-ltknk" (OuterVolumeSpecName: "kube-api-access-ltknk") pod "2bc44abc-7710-432b-b503-fd54e3afeede" (UID: "2bc44abc-7710-432b-b503-fd54e3afeede"). InnerVolumeSpecName "kube-api-access-ltknk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.409442 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-config-data" (OuterVolumeSpecName: "config-data") pod "2bc44abc-7710-432b-b503-fd54e3afeede" (UID: "2bc44abc-7710-432b-b503-fd54e3afeede"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.423558 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-server-conf" (OuterVolumeSpecName: "server-conf") pod "2bc44abc-7710-432b-b503-fd54e3afeede" (UID: "2bc44abc-7710-432b-b503-fd54e3afeede"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.472907 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2bc44abc-7710-432b-b503-fd54e3afeede" (UID: "2bc44abc-7710-432b-b503-fd54e3afeede"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.481300 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-plugins-conf\") pod \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.481388 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-config-data\") pod \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.481413 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-confd\") pod \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.481476 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-erlang-cookie-secret\") pod \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.481497 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.481521 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-erlang-cookie\") pod \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.481748 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-plugins\") pod \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.481775 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7jvj\" (UniqueName: \"kubernetes.io/projected/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-kube-api-access-c7jvj\") pod \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.481792 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-pod-info\") pod \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.481847 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-server-conf\") pod \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\" (UID: 
\"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.481870 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-tls\") pod \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\" (UID: \"392b5094-f8ef-47b8-8dc5-9e1d2dbef612\") " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.482111 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltknk\" (UniqueName: \"kubernetes.io/projected/2bc44abc-7710-432b-b503-fd54e3afeede-kube-api-access-ltknk\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.482121 4972 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.482130 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.482140 4972 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2bc44abc-7710-432b-b503-fd54e3afeede-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.482148 4972 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2bc44abc-7710-432b-b503-fd54e3afeede-pod-info\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.482164 4972 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.482173 4972 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-server-conf\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.482182 4972 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2bc44abc-7710-432b-b503-fd54e3afeede-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.482190 4972 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.482198 4972 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2bc44abc-7710-432b-b503-fd54e3afeede-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.483105 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "392b5094-f8ef-47b8-8dc5-9e1d2dbef612" (UID: "392b5094-f8ef-47b8-8dc5-9e1d2dbef612"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.483321 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "392b5094-f8ef-47b8-8dc5-9e1d2dbef612" (UID: "392b5094-f8ef-47b8-8dc5-9e1d2dbef612"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.484131 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "392b5094-f8ef-47b8-8dc5-9e1d2dbef612" (UID: "392b5094-f8ef-47b8-8dc5-9e1d2dbef612"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.486823 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-kube-api-access-c7jvj" (OuterVolumeSpecName: "kube-api-access-c7jvj") pod "392b5094-f8ef-47b8-8dc5-9e1d2dbef612" (UID: "392b5094-f8ef-47b8-8dc5-9e1d2dbef612"). InnerVolumeSpecName "kube-api-access-c7jvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.486871 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-pod-info" (OuterVolumeSpecName: "pod-info") pod "392b5094-f8ef-47b8-8dc5-9e1d2dbef612" (UID: "392b5094-f8ef-47b8-8dc5-9e1d2dbef612"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.487423 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "392b5094-f8ef-47b8-8dc5-9e1d2dbef612" (UID: "392b5094-f8ef-47b8-8dc5-9e1d2dbef612"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.487523 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "392b5094-f8ef-47b8-8dc5-9e1d2dbef612" (UID: "392b5094-f8ef-47b8-8dc5-9e1d2dbef612"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.487555 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "392b5094-f8ef-47b8-8dc5-9e1d2dbef612" (UID: "392b5094-f8ef-47b8-8dc5-9e1d2dbef612"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.498674 4972 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.507708 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-config-data" (OuterVolumeSpecName: "config-data") pod "392b5094-f8ef-47b8-8dc5-9e1d2dbef612" (UID: "392b5094-f8ef-47b8-8dc5-9e1d2dbef612"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.528615 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db44576f7-2qgwb"] Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.533538 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db44576f7-2qgwb"] Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.535679 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-server-conf" (OuterVolumeSpecName: "server-conf") pod "392b5094-f8ef-47b8-8dc5-9e1d2dbef612" (UID: "392b5094-f8ef-47b8-8dc5-9e1d2dbef612"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.580436 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "392b5094-f8ef-47b8-8dc5-9e1d2dbef612" (UID: "392b5094-f8ef-47b8-8dc5-9e1d2dbef612"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.583686 4972 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.583717 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7jvj\" (UniqueName: \"kubernetes.io/projected/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-kube-api-access-c7jvj\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.583746 4972 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-pod-info\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.583756 4972 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-server-conf\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.583764 4972 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.583773 4972 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.583782 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.583790 4972 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.583800 4972 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.583856 4972 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.583866 4972 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.583875 4972 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/392b5094-f8ef-47b8-8dc5-9e1d2dbef612-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.599699 4972 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 21 10:07:48 crc kubenswrapper[4972]: I1121 10:07:48.685005 4972 
reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.250133 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"392b5094-f8ef-47b8-8dc5-9e1d2dbef612","Type":"ContainerDied","Data":"af52000d929d09038b9c9513fb117f07e78c19b6a2715e2afbbe9ecf4b69b07f"} Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.250200 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.250474 4972 scope.go:117] "RemoveContainer" containerID="40fd57bb0048a573eb9c5e1aa41727272375095e934fe8e65459e974a94e41af" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.261130 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2bc44abc-7710-432b-b503-fd54e3afeede","Type":"ContainerDied","Data":"a3ce3ee1c820c06cf672fdfc25dc02c7cd4b5101b0db3597d6e585aad5886b89"} Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.261199 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.265568 4972 generic.go:334] "Generic (PLEG): container finished" podID="fbaa8ec7-5499-43d1-ac80-dd8708d28643" containerID="8e1c5eaa82bd2eee5d1cc7e05fbf76fc1373742047fa91d2696d0552ca0cc505" exitCode=0 Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.265645 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" event={"ID":"fbaa8ec7-5499-43d1-ac80-dd8708d28643","Type":"ContainerDied","Data":"8e1c5eaa82bd2eee5d1cc7e05fbf76fc1373742047fa91d2696d0552ca0cc505"} Nov 21 10:07:49 crc kubenswrapper[4972]: E1121 10:07:49.384819 4972 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Nov 21 10:07:49 crc kubenswrapper[4972]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2025-11-21T10:07:42Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Nov 21 10:07:49 crc kubenswrapper[4972]: /etc/init.d/functions: line 589: 400 Alarm clock "$@" Nov 21 10:07:49 crc kubenswrapper[4972]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-5q7hj" message=< Nov 21 10:07:49 crc kubenswrapper[4972]: Exiting ovn-controller (1) [FAILED] Nov 21 10:07:49 crc kubenswrapper[4972]: Killing ovn-controller (1) [ OK ] Nov 21 10:07:49 crc kubenswrapper[4972]: Killing ovn-controller (1) with SIGKILL [ OK ] Nov 21 10:07:49 crc kubenswrapper[4972]: 2025-11-21T10:07:42Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Nov 21 10:07:49 crc kubenswrapper[4972]: /etc/init.d/functions: line 589: 400 Alarm clock "$@" Nov 21 10:07:49 crc kubenswrapper[4972]: > Nov 21 10:07:49 crc kubenswrapper[4972]: E1121 10:07:49.384866 4972 kuberuntime_container.go:691] "PreStop hook failed" err=< Nov 21 10:07:49 crc kubenswrapper[4972]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2025-11-21T10:07:42Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Nov 21 10:07:49 crc kubenswrapper[4972]: /etc/init.d/functions: line 589: 400 Alarm clock "$@" Nov 21 10:07:49 crc kubenswrapper[4972]: > pod="openstack/ovn-controller-5q7hj" 
podUID="9ab92bde-9b45-49ca-a6e9-43c8921b3002" containerName="ovn-controller" containerID="cri-o://6c8e922d3ed20c26120dd95de293105d1546c257b28a0dadd79eaa8178afa207" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.384905 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-5q7hj" podUID="9ab92bde-9b45-49ca-a6e9-43c8921b3002" containerName="ovn-controller" containerID="cri-o://6c8e922d3ed20c26120dd95de293105d1546c257b28a0dadd79eaa8178afa207" gracePeriod=22 Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.484366 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.489304 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.497683 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.508375 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.514995 4972 scope.go:117] "RemoveContainer" containerID="a2591e9b6da9f52ba55bc3c5cc658736bb1d86090db5b0174bf09055e27e205d" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.530204 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.550102 4972 scope.go:117] "RemoveContainer" containerID="c8fbc9ceb2b6148e29eeae60a7cccd8704bb5b0088efc4a03700f71500ec7ef2" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.600602 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbaa8ec7-5499-43d1-ac80-dd8708d28643-combined-ca-bundle\") pod \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\" (UID: \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\") " Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.600672 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgnl7\" (UniqueName: \"kubernetes.io/projected/fbaa8ec7-5499-43d1-ac80-dd8708d28643-kube-api-access-bgnl7\") pod \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\" (UID: \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\") " Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.600703 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbaa8ec7-5499-43d1-ac80-dd8708d28643-config-data\") pod \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\" (UID: \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\") " Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.600771 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbaa8ec7-5499-43d1-ac80-dd8708d28643-config-data-custom\") pod \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\" (UID: \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\") " Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.600805 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbaa8ec7-5499-43d1-ac80-dd8708d28643-logs\") pod \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\" (UID: \"fbaa8ec7-5499-43d1-ac80-dd8708d28643\") " Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.601602 4972 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbaa8ec7-5499-43d1-ac80-dd8708d28643-logs" (OuterVolumeSpecName: "logs") pod "fbaa8ec7-5499-43d1-ac80-dd8708d28643" (UID: "fbaa8ec7-5499-43d1-ac80-dd8708d28643"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.605500 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbaa8ec7-5499-43d1-ac80-dd8708d28643-kube-api-access-bgnl7" (OuterVolumeSpecName: "kube-api-access-bgnl7") pod "fbaa8ec7-5499-43d1-ac80-dd8708d28643" (UID: "fbaa8ec7-5499-43d1-ac80-dd8708d28643"). InnerVolumeSpecName "kube-api-access-bgnl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.606416 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbaa8ec7-5499-43d1-ac80-dd8708d28643-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fbaa8ec7-5499-43d1-ac80-dd8708d28643" (UID: "fbaa8ec7-5499-43d1-ac80-dd8708d28643"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.627253 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbaa8ec7-5499-43d1-ac80-dd8708d28643-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbaa8ec7-5499-43d1-ac80-dd8708d28643" (UID: "fbaa8ec7-5499-43d1-ac80-dd8708d28643"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.639039 4972 scope.go:117] "RemoveContainer" containerID="695ff7e74d4466cf78e5259f9386de929cd91903ca07545dbfd50157060920ad" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.641801 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbaa8ec7-5499-43d1-ac80-dd8708d28643-config-data" (OuterVolumeSpecName: "config-data") pod "fbaa8ec7-5499-43d1-ac80-dd8708d28643" (UID: "fbaa8ec7-5499-43d1-ac80-dd8708d28643"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.702526 4972 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbaa8ec7-5499-43d1-ac80-dd8708d28643-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.702555 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbaa8ec7-5499-43d1-ac80-dd8708d28643-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.702565 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbaa8ec7-5499-43d1-ac80-dd8708d28643-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.702574 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgnl7\" (UniqueName: \"kubernetes.io/projected/fbaa8ec7-5499-43d1-ac80-dd8708d28643-kube-api-access-bgnl7\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.702586 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbaa8ec7-5499-43d1-ac80-dd8708d28643-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.720879 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7fcd667fc5-5ctgv" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.774032 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bc44abc-7710-432b-b503-fd54e3afeede" path="/var/lib/kubelet/pods/2bc44abc-7710-432b-b503-fd54e3afeede/volumes" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.774731 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="392b5094-f8ef-47b8-8dc5-9e1d2dbef612" path="/var/lib/kubelet/pods/392b5094-f8ef-47b8-8dc5-9e1d2dbef612/volumes" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.779285 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8027f46e-1fe2-46ad-9226-11b2cc3f8da6" path="/var/lib/kubelet/pods/8027f46e-1fe2-46ad-9226-11b2cc3f8da6/volumes" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.779939 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf3edebd-74ab-4b7d-8706-2eda69d91aea" path="/var/lib/kubelet/pods/cf3edebd-74ab-4b7d-8706-2eda69d91aea/volumes" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.780445 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe028de3-cf0f-4ab0-ab52-0898bd408c89" path="/var/lib/kubelet/pods/fe028de3-cf0f-4ab0-ab52-0898bd408c89/volumes" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.803824 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1934e8d3-ef66-4d0e-8d12-bd958545270a-logs\") pod \"1934e8d3-ef66-4d0e-8d12-bd958545270a\" (UID: \"1934e8d3-ef66-4d0e-8d12-bd958545270a\") " Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.803908 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1934e8d3-ef66-4d0e-8d12-bd958545270a-combined-ca-bundle\") pod \"1934e8d3-ef66-4d0e-8d12-bd958545270a\" (UID: \"1934e8d3-ef66-4d0e-8d12-bd958545270a\") " Nov 21 10:07:49 crc 
kubenswrapper[4972]: I1121 10:07:49.804024 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1934e8d3-ef66-4d0e-8d12-bd958545270a-config-data-custom\") pod \"1934e8d3-ef66-4d0e-8d12-bd958545270a\" (UID: \"1934e8d3-ef66-4d0e-8d12-bd958545270a\") " Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.804073 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpc5q\" (UniqueName: \"kubernetes.io/projected/1934e8d3-ef66-4d0e-8d12-bd958545270a-kube-api-access-cpc5q\") pod \"1934e8d3-ef66-4d0e-8d12-bd958545270a\" (UID: \"1934e8d3-ef66-4d0e-8d12-bd958545270a\") " Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.804109 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1934e8d3-ef66-4d0e-8d12-bd958545270a-config-data\") pod \"1934e8d3-ef66-4d0e-8d12-bd958545270a\" (UID: \"1934e8d3-ef66-4d0e-8d12-bd958545270a\") " Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.807561 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1934e8d3-ef66-4d0e-8d12-bd958545270a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1934e8d3-ef66-4d0e-8d12-bd958545270a" (UID: "1934e8d3-ef66-4d0e-8d12-bd958545270a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.807877 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1934e8d3-ef66-4d0e-8d12-bd958545270a-logs" (OuterVolumeSpecName: "logs") pod "1934e8d3-ef66-4d0e-8d12-bd958545270a" (UID: "1934e8d3-ef66-4d0e-8d12-bd958545270a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.826449 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1934e8d3-ef66-4d0e-8d12-bd958545270a-kube-api-access-cpc5q" (OuterVolumeSpecName: "kube-api-access-cpc5q") pod "1934e8d3-ef66-4d0e-8d12-bd958545270a" (UID: "1934e8d3-ef66-4d0e-8d12-bd958545270a"). InnerVolumeSpecName "kube-api-access-cpc5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.832207 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1934e8d3-ef66-4d0e-8d12-bd958545270a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1934e8d3-ef66-4d0e-8d12-bd958545270a" (UID: "1934e8d3-ef66-4d0e-8d12-bd958545270a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.848837 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.857986 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.860225 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1934e8d3-ef66-4d0e-8d12-bd958545270a-config-data" (OuterVolumeSpecName: "config-data") pod "1934e8d3-ef66-4d0e-8d12-bd958545270a" (UID: "1934e8d3-ef66-4d0e-8d12-bd958545270a"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.864236 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-5q7hj_9ab92bde-9b45-49ca-a6e9-43c8921b3002/ovn-controller/0.log" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.864341 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5q7hj" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.905613 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncjs4\" (UniqueName: \"kubernetes.io/projected/3c3ae47e-fcf5-4397-a2a4-8e847e542d75-kube-api-access-ncjs4\") pod \"3c3ae47e-fcf5-4397-a2a4-8e847e542d75\" (UID: \"3c3ae47e-fcf5-4397-a2a4-8e847e542d75\") " Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.905659 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3ae47e-fcf5-4397-a2a4-8e847e542d75-config-data\") pod \"3c3ae47e-fcf5-4397-a2a4-8e847e542d75\" (UID: \"3c3ae47e-fcf5-4397-a2a4-8e847e542d75\") " Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.905684 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3ae47e-fcf5-4397-a2a4-8e847e542d75-combined-ca-bundle\") pod \"3c3ae47e-fcf5-4397-a2a4-8e847e542d75\" (UID: \"3c3ae47e-fcf5-4397-a2a4-8e847e542d75\") " Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.905717 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncbmk\" (UniqueName: \"kubernetes.io/projected/9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8-kube-api-access-ncbmk\") pod \"9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8\" (UID: \"9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8\") " Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.905765 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8-config-data\") pod \"9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8\" (UID: \"9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8\") " Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.905792 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8-combined-ca-bundle\") pod \"9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8\" (UID: \"9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8\") " Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.906256 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpc5q\" (UniqueName: \"kubernetes.io/projected/1934e8d3-ef66-4d0e-8d12-bd958545270a-kube-api-access-cpc5q\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.906277 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1934e8d3-ef66-4d0e-8d12-bd958545270a-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.906288 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1934e8d3-ef66-4d0e-8d12-bd958545270a-logs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.906465 4972 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1934e8d3-ef66-4d0e-8d12-bd958545270a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.906479 4972 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1934e8d3-ef66-4d0e-8d12-bd958545270a-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.911730 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c3ae47e-fcf5-4397-a2a4-8e847e542d75-kube-api-access-ncjs4" (OuterVolumeSpecName: "kube-api-access-ncjs4") pod "3c3ae47e-fcf5-4397-a2a4-8e847e542d75" (UID: "3c3ae47e-fcf5-4397-a2a4-8e847e542d75"). InnerVolumeSpecName "kube-api-access-ncjs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.911968 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8-kube-api-access-ncbmk" (OuterVolumeSpecName: "kube-api-access-ncbmk") pod "9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8" (UID: "9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8"). InnerVolumeSpecName "kube-api-access-ncbmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.927004 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3ae47e-fcf5-4397-a2a4-8e847e542d75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c3ae47e-fcf5-4397-a2a4-8e847e542d75" (UID: "3c3ae47e-fcf5-4397-a2a4-8e847e542d75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.931379 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8" (UID: "9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.932994 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3ae47e-fcf5-4397-a2a4-8e847e542d75-config-data" (OuterVolumeSpecName: "config-data") pod "3c3ae47e-fcf5-4397-a2a4-8e847e542d75" (UID: "3c3ae47e-fcf5-4397-a2a4-8e847e542d75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:49 crc kubenswrapper[4972]: I1121 10:07:49.940949 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8-config-data" (OuterVolumeSpecName: "config-data") pod "9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8" (UID: "9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.007673 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ab92bde-9b45-49ca-a6e9-43c8921b3002-var-run-ovn\") pod \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.007762 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ab92bde-9b45-49ca-a6e9-43c8921b3002-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9ab92bde-9b45-49ca-a6e9-43c8921b3002" (UID: "9ab92bde-9b45-49ca-a6e9-43c8921b3002"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.007805 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9ab92bde-9b45-49ca-a6e9-43c8921b3002-var-run\") pod \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.007825 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ab92bde-9b45-49ca-a6e9-43c8921b3002-var-run" (OuterVolumeSpecName: "var-run") pod "9ab92bde-9b45-49ca-a6e9-43c8921b3002" (UID: "9ab92bde-9b45-49ca-a6e9-43c8921b3002"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.007991 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ab92bde-9b45-49ca-a6e9-43c8921b3002-scripts\") pod \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.008063 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9ab92bde-9b45-49ca-a6e9-43c8921b3002-var-log-ovn\") pod \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.008099 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab92bde-9b45-49ca-a6e9-43c8921b3002-ovn-controller-tls-certs\") pod \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.008157 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ab92bde-9b45-49ca-a6e9-43c8921b3002-combined-ca-bundle\") pod \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.008252 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdqnl\" (UniqueName: \"kubernetes.io/projected/9ab92bde-9b45-49ca-a6e9-43c8921b3002-kube-api-access-qdqnl\") pod \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\" (UID: \"9ab92bde-9b45-49ca-a6e9-43c8921b3002\") " Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.008462 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/9ab92bde-9b45-49ca-a6e9-43c8921b3002-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9ab92bde-9b45-49ca-a6e9-43c8921b3002" (UID: "9ab92bde-9b45-49ca-a6e9-43c8921b3002"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.009484 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ab92bde-9b45-49ca-a6e9-43c8921b3002-scripts" (OuterVolumeSpecName: "scripts") pod "9ab92bde-9b45-49ca-a6e9-43c8921b3002" (UID: "9ab92bde-9b45-49ca-a6e9-43c8921b3002"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.009789 4972 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9ab92bde-9b45-49ca-a6e9-43c8921b3002-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.009884 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncjs4\" (UniqueName: \"kubernetes.io/projected/3c3ae47e-fcf5-4397-a2a4-8e847e542d75-kube-api-access-ncjs4\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.009898 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3ae47e-fcf5-4397-a2a4-8e847e542d75-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.009931 4972 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ab92bde-9b45-49ca-a6e9-43c8921b3002-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.009945 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3ae47e-fcf5-4397-a2a4-8e847e542d75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.009958 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncbmk\" (UniqueName: \"kubernetes.io/projected/9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8-kube-api-access-ncbmk\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.009969 4972 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9ab92bde-9b45-49ca-a6e9-43c8921b3002-var-run\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.009982 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.010018 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.010039 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ab92bde-9b45-49ca-a6e9-43c8921b3002-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.013448 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9ab92bde-9b45-49ca-a6e9-43c8921b3002-kube-api-access-qdqnl" (OuterVolumeSpecName: "kube-api-access-qdqnl") pod "9ab92bde-9b45-49ca-a6e9-43c8921b3002" (UID: "9ab92bde-9b45-49ca-a6e9-43c8921b3002"). InnerVolumeSpecName "kube-api-access-qdqnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.036651 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ab92bde-9b45-49ca-a6e9-43c8921b3002-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ab92bde-9b45-49ca-a6e9-43c8921b3002" (UID: "9ab92bde-9b45-49ca-a6e9-43c8921b3002"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.077685 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ab92bde-9b45-49ca-a6e9-43c8921b3002-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "9ab92bde-9b45-49ca-a6e9-43c8921b3002" (UID: "9ab92bde-9b45-49ca-a6e9-43c8921b3002"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.111157 4972 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab92bde-9b45-49ca-a6e9-43c8921b3002-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.111186 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ab92bde-9b45-49ca-a6e9-43c8921b3002-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.111195 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdqnl\" (UniqueName: \"kubernetes.io/projected/9ab92bde-9b45-49ca-a6e9-43c8921b3002-kube-api-access-qdqnl\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.296773 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-5q7hj_9ab92bde-9b45-49ca-a6e9-43c8921b3002/ovn-controller/0.log" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.297368 4972 generic.go:334] "Generic (PLEG): container finished" podID="9ab92bde-9b45-49ca-a6e9-43c8921b3002" containerID="6c8e922d3ed20c26120dd95de293105d1546c257b28a0dadd79eaa8178afa207" exitCode=137 Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.297536 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5q7hj" event={"ID":"9ab92bde-9b45-49ca-a6e9-43c8921b3002","Type":"ContainerDied","Data":"6c8e922d3ed20c26120dd95de293105d1546c257b28a0dadd79eaa8178afa207"} Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.297587 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5q7hj" event={"ID":"9ab92bde-9b45-49ca-a6e9-43c8921b3002","Type":"ContainerDied","Data":"2ed5f7e094f6a525bdafa2a2d6503702b9e17e50fc2f306eae145566d1142eea"} Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.297611 4972 scope.go:117] "RemoveContainer" containerID="6c8e922d3ed20c26120dd95de293105d1546c257b28a0dadd79eaa8178afa207" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.298174 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5q7hj" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.302034 4972 generic.go:334] "Generic (PLEG): container finished" podID="3c3ae47e-fcf5-4397-a2a4-8e847e542d75" containerID="528382eac18bd0308541931e46106ebb14493b19c4b625ee808c7a117ef52180" exitCode=0 Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.302101 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3c3ae47e-fcf5-4397-a2a4-8e847e542d75","Type":"ContainerDied","Data":"528382eac18bd0308541931e46106ebb14493b19c4b625ee808c7a117ef52180"} Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.302133 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3c3ae47e-fcf5-4397-a2a4-8e847e542d75","Type":"ContainerDied","Data":"dddc8e3ecb0b3a13568d7e208c2d4946e4e4b11413f9eb2ab601daae140e4758"} Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.302198 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.311529 4972 generic.go:334] "Generic (PLEG): container finished" podID="9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8" containerID="2c8b3c0c3518327c81d51434735d0ba0511f266e78981d13077248c64dbb2a4c" exitCode=0 Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.311624 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8","Type":"ContainerDied","Data":"2c8b3c0c3518327c81d51434735d0ba0511f266e78981d13077248c64dbb2a4c"} Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.311663 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8","Type":"ContainerDied","Data":"1f27328f827320adc170a99d6d7a584065f7290ce0b4546472acd6161887137b"} Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.311742 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.348722 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.348709 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78666b77b6-ll6mt" event={"ID":"fbaa8ec7-5499-43d1-ac80-dd8708d28643","Type":"ContainerDied","Data":"55dee782b334cdc279970f9cac4c47950b7bbe5b2f6f00f92ae492c03d54793a"} Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.357321 4972 generic.go:334] "Generic (PLEG): container finished" podID="1934e8d3-ef66-4d0e-8d12-bd958545270a" containerID="1490090909ceb9184fad5aa95d87536f218026b674ea5f4c01d93e9061fced2f" exitCode=0 Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.357368 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7fcd667fc5-5ctgv" event={"ID":"1934e8d3-ef66-4d0e-8d12-bd958545270a","Type":"ContainerDied","Data":"1490090909ceb9184fad5aa95d87536f218026b674ea5f4c01d93e9061fced2f"} Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.357373 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7fcd667fc5-5ctgv" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.357396 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7fcd667fc5-5ctgv" event={"ID":"1934e8d3-ef66-4d0e-8d12-bd958545270a","Type":"ContainerDied","Data":"5fbd2f9d2fec5da940c5d561a6666dae5c0a5537bd48f5d1767a133a503b3413"} Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.369755 4972 scope.go:117] "RemoveContainer" containerID="6c8e922d3ed20c26120dd95de293105d1546c257b28a0dadd79eaa8178afa207" Nov 21 10:07:50 crc kubenswrapper[4972]: E1121 10:07:50.370651 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c8e922d3ed20c26120dd95de293105d1546c257b28a0dadd79eaa8178afa207\": container with ID starting with 6c8e922d3ed20c26120dd95de293105d1546c257b28a0dadd79eaa8178afa207 not found: ID does not exist" containerID="6c8e922d3ed20c26120dd95de293105d1546c257b28a0dadd79eaa8178afa207" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.370705 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c8e922d3ed20c26120dd95de293105d1546c257b28a0dadd79eaa8178afa207"} err="failed to get container status \"6c8e922d3ed20c26120dd95de293105d1546c257b28a0dadd79eaa8178afa207\": rpc error: code = NotFound desc = could not find container \"6c8e922d3ed20c26120dd95de293105d1546c257b28a0dadd79eaa8178afa207\": container with ID starting with 6c8e922d3ed20c26120dd95de293105d1546c257b28a0dadd79eaa8178afa207 not found: ID does not exist" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.370732 4972 scope.go:117] "RemoveContainer" containerID="528382eac18bd0308541931e46106ebb14493b19c4b625ee808c7a117ef52180" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.388312 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.397784 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.404434 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5q7hj"] Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.410868 4972 scope.go:117] "RemoveContainer" containerID="528382eac18bd0308541931e46106ebb14493b19c4b625ee808c7a117ef52180" Nov 21 10:07:50 crc kubenswrapper[4972]: E1121 10:07:50.411774 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"528382eac18bd0308541931e46106ebb14493b19c4b625ee808c7a117ef52180\": container with ID starting with 528382eac18bd0308541931e46106ebb14493b19c4b625ee808c7a117ef52180 not found: ID does not exist" containerID="528382eac18bd0308541931e46106ebb14493b19c4b625ee808c7a117ef52180" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.411808 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"528382eac18bd0308541931e46106ebb14493b19c4b625ee808c7a117ef52180"} err="failed to get container status \"528382eac18bd0308541931e46106ebb14493b19c4b625ee808c7a117ef52180\": rpc error: code = NotFound desc = could not find container \"528382eac18bd0308541931e46106ebb14493b19c4b625ee808c7a117ef52180\": container with ID starting with 528382eac18bd0308541931e46106ebb14493b19c4b625ee808c7a117ef52180 not found: ID does not exist" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 
10:07:50.411925 4972 scope.go:117] "RemoveContainer" containerID="2c8b3c0c3518327c81d51434735d0ba0511f266e78981d13077248c64dbb2a4c" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.417169 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-5q7hj"] Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.426226 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.432140 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.440321 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-78666b77b6-ll6mt"] Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.446910 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-78666b77b6-ll6mt"] Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.452029 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7fcd667fc5-5ctgv"] Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.455912 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-7fcd667fc5-5ctgv"] Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.467006 4972 scope.go:117] "RemoveContainer" containerID="2c8b3c0c3518327c81d51434735d0ba0511f266e78981d13077248c64dbb2a4c" Nov 21 10:07:50 crc kubenswrapper[4972]: E1121 10:07:50.468159 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c8b3c0c3518327c81d51434735d0ba0511f266e78981d13077248c64dbb2a4c\": container with ID starting with 2c8b3c0c3518327c81d51434735d0ba0511f266e78981d13077248c64dbb2a4c not found: ID does not exist" containerID="2c8b3c0c3518327c81d51434735d0ba0511f266e78981d13077248c64dbb2a4c" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.468266 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c8b3c0c3518327c81d51434735d0ba0511f266e78981d13077248c64dbb2a4c"} err="failed to get container status \"2c8b3c0c3518327c81d51434735d0ba0511f266e78981d13077248c64dbb2a4c\": rpc error: code = NotFound desc = could not find container \"2c8b3c0c3518327c81d51434735d0ba0511f266e78981d13077248c64dbb2a4c\": container with ID starting with 2c8b3c0c3518327c81d51434735d0ba0511f266e78981d13077248c64dbb2a4c not found: ID does not exist" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.468357 4972 scope.go:117] "RemoveContainer" containerID="8e1c5eaa82bd2eee5d1cc7e05fbf76fc1373742047fa91d2696d0552ca0cc505" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.492974 4972 scope.go:117] "RemoveContainer" containerID="cfd792eb202fbf7b53ee8748aadb575b4d7545be47e73ef984e2cbe95e0adcce" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.509110 4972 scope.go:117] "RemoveContainer" containerID="1490090909ceb9184fad5aa95d87536f218026b674ea5f4c01d93e9061fced2f" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.528900 4972 scope.go:117] "RemoveContainer" containerID="b4cd6783c1c066e41ca01043747c17250cebc9cc0aed250c754bd49748a690ad" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.547090 4972 scope.go:117] "RemoveContainer" containerID="1490090909ceb9184fad5aa95d87536f218026b674ea5f4c01d93e9061fced2f" Nov 21 10:07:50 crc kubenswrapper[4972]: E1121 10:07:50.547661 4972 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"1490090909ceb9184fad5aa95d87536f218026b674ea5f4c01d93e9061fced2f\": container with ID starting with 1490090909ceb9184fad5aa95d87536f218026b674ea5f4c01d93e9061fced2f not found: ID does not exist" containerID="1490090909ceb9184fad5aa95d87536f218026b674ea5f4c01d93e9061fced2f" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.547697 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1490090909ceb9184fad5aa95d87536f218026b674ea5f4c01d93e9061fced2f"} err="failed to get container status \"1490090909ceb9184fad5aa95d87536f218026b674ea5f4c01d93e9061fced2f\": rpc error: code = NotFound desc = could not find container \"1490090909ceb9184fad5aa95d87536f218026b674ea5f4c01d93e9061fced2f\": container with ID starting with 1490090909ceb9184fad5aa95d87536f218026b674ea5f4c01d93e9061fced2f not found: ID does not exist" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.547719 4972 scope.go:117] "RemoveContainer" containerID="b4cd6783c1c066e41ca01043747c17250cebc9cc0aed250c754bd49748a690ad" Nov 21 10:07:50 crc kubenswrapper[4972]: E1121 10:07:50.548096 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4cd6783c1c066e41ca01043747c17250cebc9cc0aed250c754bd49748a690ad\": container with ID starting with b4cd6783c1c066e41ca01043747c17250cebc9cc0aed250c754bd49748a690ad not found: ID does not exist" containerID="b4cd6783c1c066e41ca01043747c17250cebc9cc0aed250c754bd49748a690ad" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.548169 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4cd6783c1c066e41ca01043747c17250cebc9cc0aed250c754bd49748a690ad"} err="failed to get container status \"b4cd6783c1c066e41ca01043747c17250cebc9cc0aed250c754bd49748a690ad\": rpc error: code = NotFound desc = could not find container \"b4cd6783c1c066e41ca01043747c17250cebc9cc0aed250c754bd49748a690ad\": container with ID starting with b4cd6783c1c066e41ca01043747c17250cebc9cc0aed250c754bd49748a690ad not found: ID does not exist" Nov 21 10:07:50 crc kubenswrapper[4972]: I1121 10:07:50.760035 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:07:50 crc kubenswrapper[4972]: E1121 10:07:50.760417 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:07:51 crc kubenswrapper[4972]: I1121 10:07:51.773069 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1934e8d3-ef66-4d0e-8d12-bd958545270a" path="/var/lib/kubelet/pods/1934e8d3-ef66-4d0e-8d12-bd958545270a/volumes" Nov 21 10:07:51 crc kubenswrapper[4972]: I1121 10:07:51.775771 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c3ae47e-fcf5-4397-a2a4-8e847e542d75" path="/var/lib/kubelet/pods/3c3ae47e-fcf5-4397-a2a4-8e847e542d75/volumes" Nov 21 10:07:51 crc kubenswrapper[4972]: I1121 10:07:51.776824 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ab92bde-9b45-49ca-a6e9-43c8921b3002" 
path="/var/lib/kubelet/pods/9ab92bde-9b45-49ca-a6e9-43c8921b3002/volumes" Nov 21 10:07:51 crc kubenswrapper[4972]: I1121 10:07:51.777692 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8" path="/var/lib/kubelet/pods/9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8/volumes" Nov 21 10:07:51 crc kubenswrapper[4972]: I1121 10:07:51.779248 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbaa8ec7-5499-43d1-ac80-dd8708d28643" path="/var/lib/kubelet/pods/fbaa8ec7-5499-43d1-ac80-dd8708d28643/volumes" Nov 21 10:07:52 crc kubenswrapper[4972]: E1121 10:07:52.424426 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" containerID="463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 21 10:07:52 crc kubenswrapper[4972]: E1121 10:07:52.425585 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" containerID="463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 21 10:07:52 crc kubenswrapper[4972]: E1121 10:07:52.426098 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" containerID="463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 21 10:07:52 crc kubenswrapper[4972]: E1121 10:07:52.426179 4972 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-4z7b5" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovsdb-server" Nov 21 10:07:52 crc kubenswrapper[4972]: E1121 10:07:52.426441 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 21 10:07:52 crc kubenswrapper[4972]: E1121 10:07:52.428275 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 21 10:07:52 crc kubenswrapper[4972]: E1121 10:07:52.429772 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 21 10:07:52 crc kubenswrapper[4972]: E1121 10:07:52.429925 4972 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-4z7b5" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovs-vswitchd" Nov 21 10:07:57 crc kubenswrapper[4972]: E1121 10:07:57.424684 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" containerID="463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 21 10:07:57 crc kubenswrapper[4972]: E1121 10:07:57.425580 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 21 10:07:57 crc kubenswrapper[4972]: E1121 10:07:57.425796 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" containerID="463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 21 10:07:57 crc kubenswrapper[4972]: E1121 10:07:57.427301 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" containerID="463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 21 10:07:57 crc kubenswrapper[4972]: E1121 10:07:57.427341 4972 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-4z7b5" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovsdb-server" Nov 21 10:07:57 crc kubenswrapper[4972]: E1121 10:07:57.427846 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 21 10:07:57 crc kubenswrapper[4972]: E1121 10:07:57.432272 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 21 10:07:57 crc kubenswrapper[4972]: E1121 10:07:57.432313 4972 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-4z7b5" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovs-vswitchd" Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.448691 4972 generic.go:334] "Generic (PLEG): container finished" podID="56aac81e-b855-4419-b8a5-8f1fc099b5e6" containerID="a84df8a5c99a95c300cc9bc766b529621a802a107975b46bcdb8f96199772bb6" exitCode=0 Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.448734 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79f8cf4757-8cflk" event={"ID":"56aac81e-b855-4419-b8a5-8f1fc099b5e6","Type":"ContainerDied","Data":"a84df8a5c99a95c300cc9bc766b529621a802a107975b46bcdb8f96199772bb6"} Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.448802 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79f8cf4757-8cflk" event={"ID":"56aac81e-b855-4419-b8a5-8f1fc099b5e6","Type":"ContainerDied","Data":"cd60b1b2d6420183cc66e449c1678dedc4fe565967fabc882fd0a4a7eca66999"} Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.448821 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd60b1b2d6420183cc66e449c1678dedc4fe565967fabc882fd0a4a7eca66999" Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.480495 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.562116 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-combined-ca-bundle\") pod \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.562202 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k56wn\" (UniqueName: \"kubernetes.io/projected/56aac81e-b855-4419-b8a5-8f1fc099b5e6-kube-api-access-k56wn\") pod \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.562250 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-ovndb-tls-certs\") pod \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.562307 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-httpd-config\") pod \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.562355 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-public-tls-certs\") pod \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 
10:07:57.562391 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-internal-tls-certs\") pod \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.562457 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-config\") pod \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\" (UID: \"56aac81e-b855-4419-b8a5-8f1fc099b5e6\") " Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.567662 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "56aac81e-b855-4419-b8a5-8f1fc099b5e6" (UID: "56aac81e-b855-4419-b8a5-8f1fc099b5e6"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.568421 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56aac81e-b855-4419-b8a5-8f1fc099b5e6-kube-api-access-k56wn" (OuterVolumeSpecName: "kube-api-access-k56wn") pod "56aac81e-b855-4419-b8a5-8f1fc099b5e6" (UID: "56aac81e-b855-4419-b8a5-8f1fc099b5e6"). InnerVolumeSpecName "kube-api-access-k56wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.605020 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56aac81e-b855-4419-b8a5-8f1fc099b5e6" (UID: "56aac81e-b855-4419-b8a5-8f1fc099b5e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.621929 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-config" (OuterVolumeSpecName: "config") pod "56aac81e-b855-4419-b8a5-8f1fc099b5e6" (UID: "56aac81e-b855-4419-b8a5-8f1fc099b5e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.625689 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "56aac81e-b855-4419-b8a5-8f1fc099b5e6" (UID: "56aac81e-b855-4419-b8a5-8f1fc099b5e6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.642145 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "56aac81e-b855-4419-b8a5-8f1fc099b5e6" (UID: "56aac81e-b855-4419-b8a5-8f1fc099b5e6"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.651725 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "56aac81e-b855-4419-b8a5-8f1fc099b5e6" (UID: "56aac81e-b855-4419-b8a5-8f1fc099b5e6"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.664871 4972 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.664923 4972 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.664943 4972 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.664965 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-config\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.664984 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.665002 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k56wn\" (UniqueName: \"kubernetes.io/projected/56aac81e-b855-4419-b8a5-8f1fc099b5e6-kube-api-access-k56wn\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:57 crc kubenswrapper[4972]: I1121 10:07:57.665020 4972 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56aac81e-b855-4419-b8a5-8f1fc099b5e6-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 10:07:58 crc kubenswrapper[4972]: I1121 10:07:58.457758 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-79f8cf4757-8cflk" Nov 21 10:07:58 crc kubenswrapper[4972]: I1121 10:07:58.481872 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-79f8cf4757-8cflk"] Nov 21 10:07:58 crc kubenswrapper[4972]: I1121 10:07:58.487292 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-79f8cf4757-8cflk"] Nov 21 10:07:59 crc kubenswrapper[4972]: I1121 10:07:59.775909 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56aac81e-b855-4419-b8a5-8f1fc099b5e6" path="/var/lib/kubelet/pods/56aac81e-b855-4419-b8a5-8f1fc099b5e6/volumes" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.782921 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5cbtn"] Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.785161 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f61d22-4b79-4f80-b7dc-0f5bea4b506d" containerName="nova-metadata-metadata" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.785329 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f61d22-4b79-4f80-b7dc-0f5bea4b506d" containerName="nova-metadata-metadata" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.785486 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed54a06-08b9-41a2-92d9-a745631e053c" containerName="galera" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.785611 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ed54a06-08b9-41a2-92d9-a745631e053c" containerName="galera" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.785735 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2069a31-382b-4fc4-acee-cf202be1de1e" containerName="glance-httpd" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.785887 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2069a31-382b-4fc4-acee-cf202be1de1e" containerName="glance-httpd" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.786039 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4280cc0e-ca6a-47d7-be4d-a05beb85de3c" containerName="init" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.786151 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="4280cc0e-ca6a-47d7-be4d-a05beb85de3c" containerName="init" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.786276 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85a9950-7e9d-4e16-9b35-d6912bacadf9" containerName="ceilometer-notification-agent" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.786398 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85a9950-7e9d-4e16-9b35-d6912bacadf9" containerName="ceilometer-notification-agent" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.786551 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2069a31-382b-4fc4-acee-cf202be1de1e" containerName="glance-log" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.786676 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2069a31-382b-4fc4-acee-cf202be1de1e" containerName="glance-log" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.786789 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c438ca-0f93-434d-81ea-29ae82b217bf" containerName="ovsdbserver-nb" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.786987 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c438ca-0f93-434d-81ea-29ae82b217bf" 
containerName="ovsdbserver-nb" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.787167 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd" containerName="placement-api" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.787352 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd" containerName="placement-api" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.787539 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56aac81e-b855-4419-b8a5-8f1fc099b5e6" containerName="neutron-api" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.787710 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="56aac81e-b855-4419-b8a5-8f1fc099b5e6" containerName="neutron-api" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.787918 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8" containerName="glance-log" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.788053 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8" containerName="glance-log" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.788166 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4280cc0e-ca6a-47d7-be4d-a05beb85de3c" containerName="dnsmasq-dns" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.788275 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="4280cc0e-ca6a-47d7-be4d-a05beb85de3c" containerName="dnsmasq-dns" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.788442 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe028de3-cf0f-4ab0-ab52-0898bd408c89" containerName="keystone-api" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.788582 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe028de3-cf0f-4ab0-ab52-0898bd408c89" containerName="keystone-api" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.788743 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85a9950-7e9d-4e16-9b35-d6912bacadf9" containerName="proxy-httpd" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.788962 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85a9950-7e9d-4e16-9b35-d6912bacadf9" containerName="proxy-httpd" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.789115 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf21c86e-7747-4dca-a870-352dfa214beb" containerName="mariadb-account-delete" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.789243 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf21c86e-7747-4dca-a870-352dfa214beb" containerName="mariadb-account-delete" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.789521 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="302b9e1c-affd-4f2f-bacd-98f40dedeb91" containerName="nova-api-log" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.789671 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="302b9e1c-affd-4f2f-bacd-98f40dedeb91" containerName="nova-api-log" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.790939 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf3edebd-74ab-4b7d-8706-2eda69d91aea" containerName="openstack-network-exporter" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791005 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf3edebd-74ab-4b7d-8706-2eda69d91aea" 
containerName="openstack-network-exporter" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791053 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd442c75-9e94-4f54-81b6-68c19f4de9d8" containerName="mariadb-account-delete" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791071 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd442c75-9e94-4f54-81b6-68c19f4de9d8" containerName="mariadb-account-delete" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791091 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44805331-e34b-4455-a744-4c8fe27a1b9e" containerName="openstack-network-exporter" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791103 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="44805331-e34b-4455-a744-4c8fe27a1b9e" containerName="openstack-network-exporter" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791129 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bb7ffc3-501c-420f-834c-0509b4a509eb" containerName="openstack-network-exporter" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791141 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bb7ffc3-501c-420f-834c-0509b4a509eb" containerName="openstack-network-exporter" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791156 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8027f46e-1fe2-46ad-9226-11b2cc3f8da6" containerName="galera" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791166 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8027f46e-1fe2-46ad-9226-11b2cc3f8da6" containerName="galera" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791191 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc57ffef-2527-4b16-b281-9139b6a0f1a1" containerName="cinder-api" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791202 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc57ffef-2527-4b16-b281-9139b6a0f1a1" containerName="cinder-api" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791224 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392b5094-f8ef-47b8-8dc5-9e1d2dbef612" containerName="setup-container" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791235 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="392b5094-f8ef-47b8-8dc5-9e1d2dbef612" containerName="setup-container" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791254 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44805331-e34b-4455-a744-4c8fe27a1b9e" containerName="ovsdbserver-sb" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791265 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="44805331-e34b-4455-a744-4c8fe27a1b9e" containerName="ovsdbserver-sb" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791281 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb786ba-2a1a-4124-9ef7-116e12402f5c" containerName="mariadb-account-delete" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791314 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb786ba-2a1a-4124-9ef7-116e12402f5c" containerName="mariadb-account-delete" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791333 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="befdbf4d-7d20-40ca-9985-8309a0295dad" containerName="cinder-scheduler" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791343 4972 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="befdbf4d-7d20-40ca-9985-8309a0295dad" containerName="cinder-scheduler" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791367 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd" containerName="placement-log" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791380 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd" containerName="placement-log" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791402 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8" containerName="glance-httpd" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791415 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8" containerName="glance-httpd" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791428 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="302b9e1c-affd-4f2f-bacd-98f40dedeb91" containerName="nova-api-api" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791438 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="302b9e1c-affd-4f2f-bacd-98f40dedeb91" containerName="nova-api-api" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791454 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481cc370-a05a-4516-99f2-f94a0056a70e" containerName="memcached" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791464 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="481cc370-a05a-4516-99f2-f94a0056a70e" containerName="memcached" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791481 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69d7d80-dc29-4483-917c-c25921b56e9c" containerName="proxy-httpd" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791492 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69d7d80-dc29-4483-917c-c25921b56e9c" containerName="proxy-httpd" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791502 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc61b266-e156-4999-8ec7-8aa1f1988e42" containerName="nova-cell1-novncproxy-novncproxy" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791512 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc61b266-e156-4999-8ec7-8aa1f1988e42" containerName="nova-cell1-novncproxy-novncproxy" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791530 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392b5094-f8ef-47b8-8dc5-9e1d2dbef612" containerName="rabbitmq" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791540 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="392b5094-f8ef-47b8-8dc5-9e1d2dbef612" containerName="rabbitmq" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791554 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272d9c39-ab5b-4fc1-8dbe-209fbe33e293" containerName="barbican-api" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791564 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="272d9c39-ab5b-4fc1-8dbe-209fbe33e293" containerName="barbican-api" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791587 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ccfb34-fe6c-4529-812a-af30eb178e8b" containerName="mariadb-account-delete" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791598 4972 
state_mem.go:107] "Deleted CPUSet assignment" podUID="97ccfb34-fe6c-4529-812a-af30eb178e8b" containerName="mariadb-account-delete" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791620 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56aac81e-b855-4419-b8a5-8f1fc099b5e6" containerName="neutron-httpd" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791630 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="56aac81e-b855-4419-b8a5-8f1fc099b5e6" containerName="neutron-httpd" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791644 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8334e5f-f6cb-4c49-91d6-5e414ecc53f0" containerName="mariadb-account-delete" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791654 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8334e5f-f6cb-4c49-91d6-5e414ecc53f0" containerName="mariadb-account-delete" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791674 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85a9950-7e9d-4e16-9b35-d6912bacadf9" containerName="ceilometer-central-agent" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791684 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85a9950-7e9d-4e16-9b35-d6912bacadf9" containerName="ceilometer-central-agent" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791697 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8027f46e-1fe2-46ad-9226-11b2cc3f8da6" containerName="mysql-bootstrap" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791707 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8027f46e-1fe2-46ad-9226-11b2cc3f8da6" containerName="mysql-bootstrap" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791720 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e11aef3-0c96-44ed-8876-7e54115d181f" containerName="mariadb-account-delete" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791730 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e11aef3-0c96-44ed-8876-7e54115d181f" containerName="mariadb-account-delete" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791745 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1934e8d3-ef66-4d0e-8d12-bd958545270a" containerName="barbican-worker-log" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791757 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1934e8d3-ef66-4d0e-8d12-bd958545270a" containerName="barbican-worker-log" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791776 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1224d0f-d488-49e6-b6dc-12a188b43a43" containerName="mariadb-account-delete" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791788 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1224d0f-d488-49e6-b6dc-12a188b43a43" containerName="mariadb-account-delete" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791801 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbaa8ec7-5499-43d1-ac80-dd8708d28643" containerName="barbican-keystone-listener-log" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791814 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbaa8ec7-5499-43d1-ac80-dd8708d28643" containerName="barbican-keystone-listener-log" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791851 4972 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2bc44abc-7710-432b-b503-fd54e3afeede" containerName="rabbitmq" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791863 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bc44abc-7710-432b-b503-fd54e3afeede" containerName="rabbitmq" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791881 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69d7d80-dc29-4483-917c-c25921b56e9c" containerName="proxy-server" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791892 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69d7d80-dc29-4483-917c-c25921b56e9c" containerName="proxy-server" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791910 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eb9b3f4-6710-4818-b94c-494958fe31ad" containerName="nova-cell0-conductor-conductor" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791921 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eb9b3f4-6710-4818-b94c-494958fe31ad" containerName="nova-cell0-conductor-conductor" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791940 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbaa8ec7-5499-43d1-ac80-dd8708d28643" containerName="barbican-keystone-listener" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.791969 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbaa8ec7-5499-43d1-ac80-dd8708d28643" containerName="barbican-keystone-listener" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.791989 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf3edebd-74ab-4b7d-8706-2eda69d91aea" containerName="ovn-northd" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792000 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf3edebd-74ab-4b7d-8706-2eda69d91aea" containerName="ovn-northd" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.792012 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f61d22-4b79-4f80-b7dc-0f5bea4b506d" containerName="nova-metadata-log" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792022 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f61d22-4b79-4f80-b7dc-0f5bea4b506d" containerName="nova-metadata-log" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.792091 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8" containerName="nova-cell1-conductor-conductor" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792104 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8" containerName="nova-cell1-conductor-conductor" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.792120 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ab92bde-9b45-49ca-a6e9-43c8921b3002" containerName="ovn-controller" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792130 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ab92bde-9b45-49ca-a6e9-43c8921b3002" containerName="ovn-controller" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.792147 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ed1f19-43e6-4245-82c1-f51b5f18d1e6" containerName="kube-state-metrics" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792158 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ed1f19-43e6-4245-82c1-f51b5f18d1e6" containerName="kube-state-metrics" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 
10:08:01.792173 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c438ca-0f93-434d-81ea-29ae82b217bf" containerName="openstack-network-exporter" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792184 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c438ca-0f93-434d-81ea-29ae82b217bf" containerName="openstack-network-exporter" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.792195 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1934e8d3-ef66-4d0e-8d12-bd958545270a" containerName="barbican-worker" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792205 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1934e8d3-ef66-4d0e-8d12-bd958545270a" containerName="barbican-worker" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.792220 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272d9c39-ab5b-4fc1-8dbe-209fbe33e293" containerName="barbican-api-log" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792231 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="272d9c39-ab5b-4fc1-8dbe-209fbe33e293" containerName="barbican-api-log" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.792253 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc57ffef-2527-4b16-b281-9139b6a0f1a1" containerName="cinder-api-log" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792265 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc57ffef-2527-4b16-b281-9139b6a0f1a1" containerName="cinder-api-log" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.792282 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed54a06-08b9-41a2-92d9-a745631e053c" containerName="mysql-bootstrap" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792293 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ed54a06-08b9-41a2-92d9-a745631e053c" containerName="mysql-bootstrap" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.792313 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bc44abc-7710-432b-b503-fd54e3afeede" containerName="setup-container" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792325 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bc44abc-7710-432b-b503-fd54e3afeede" containerName="setup-container" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.792343 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3ae47e-fcf5-4397-a2a4-8e847e542d75" containerName="nova-scheduler-scheduler" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792354 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3ae47e-fcf5-4397-a2a4-8e847e542d75" containerName="nova-scheduler-scheduler" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.792372 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85a9950-7e9d-4e16-9b35-d6912bacadf9" containerName="sg-core" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792383 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85a9950-7e9d-4e16-9b35-d6912bacadf9" containerName="sg-core" Nov 21 10:08:01 crc kubenswrapper[4972]: E1121 10:08:01.792401 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="befdbf4d-7d20-40ca-9985-8309a0295dad" containerName="probe" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792412 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="befdbf4d-7d20-40ca-9985-8309a0295dad" containerName="probe" Nov 21 10:08:01 crc 
kubenswrapper[4972]: I1121 10:08:01.792745 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="44805331-e34b-4455-a744-4c8fe27a1b9e" containerName="openstack-network-exporter" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792799 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bb7ffc3-501c-420f-834c-0509b4a509eb" containerName="openstack-network-exporter" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792812 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbaa8ec7-5499-43d1-ac80-dd8708d28643" containerName="barbican-keystone-listener-log" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792825 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bc44abc-7710-432b-b503-fd54e3afeede" containerName="rabbitmq" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792870 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbaa8ec7-5499-43d1-ac80-dd8708d28643" containerName="barbican-keystone-listener" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792892 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc57ffef-2527-4b16-b281-9139b6a0f1a1" containerName="cinder-api" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792902 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="97ccfb34-fe6c-4529-812a-af30eb178e8b" containerName="mariadb-account-delete" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792915 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="272d9c39-ab5b-4fc1-8dbe-209fbe33e293" containerName="barbican-api" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792929 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2069a31-382b-4fc4-acee-cf202be1de1e" containerName="glance-log" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792944 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c438ca-0f93-434d-81ea-29ae82b217bf" containerName="ovsdbserver-nb" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792963 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="8027f46e-1fe2-46ad-9226-11b2cc3f8da6" containerName="galera" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.792980 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf3edebd-74ab-4b7d-8706-2eda69d91aea" containerName="openstack-network-exporter" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793012 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f69d7d80-dc29-4483-917c-c25921b56e9c" containerName="proxy-httpd" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793028 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e11aef3-0c96-44ed-8876-7e54115d181f" containerName="mariadb-account-delete" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793052 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="56aac81e-b855-4419-b8a5-8f1fc099b5e6" containerName="neutron-httpd" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793070 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85a9950-7e9d-4e16-9b35-d6912bacadf9" containerName="proxy-httpd" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793088 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd" containerName="placement-log" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793101 4972 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="57f61d22-4b79-4f80-b7dc-0f5bea4b506d" containerName="nova-metadata-log" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793119 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf3edebd-74ab-4b7d-8706-2eda69d91aea" containerName="ovn-northd" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793132 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8" containerName="glance-log" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793148 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eb9b3f4-6710-4818-b94c-494958fe31ad" containerName="nova-cell0-conductor-conductor" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793166 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="56aac81e-b855-4419-b8a5-8f1fc099b5e6" containerName="neutron-api" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793185 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="302b9e1c-affd-4f2f-bacd-98f40dedeb91" containerName="nova-api-log" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793198 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="44805331-e34b-4455-a744-4c8fe27a1b9e" containerName="ovsdbserver-sb" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793219 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="272d9c39-ab5b-4fc1-8dbe-209fbe33e293" containerName="barbican-api-log" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793238 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85a9950-7e9d-4e16-9b35-d6912bacadf9" containerName="ceilometer-central-agent" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793253 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="befdbf4d-7d20-40ca-9985-8309a0295dad" containerName="probe" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793268 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="78e91068-ef38-4ff8-ab30-5eb2a6c0c5c8" containerName="glance-httpd" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793287 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cbe1b2f-7a4a-4749-ab1f-e8ecb4152bfd" containerName="placement-api" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793299 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe028de3-cf0f-4ab0-ab52-0898bd408c89" containerName="keystone-api" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793314 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf21c86e-7747-4dca-a870-352dfa214beb" containerName="mariadb-account-delete" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793331 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="481cc370-a05a-4516-99f2-f94a0056a70e" containerName="memcached" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793346 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ed1f19-43e6-4245-82c1-f51b5f18d1e6" containerName="kube-state-metrics" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793358 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="302b9e1c-affd-4f2f-bacd-98f40dedeb91" containerName="nova-api-api" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793375 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c7e44b1-0938-480f-9ab1-6e7e16c6c0e8" 
containerName="nova-cell1-conductor-conductor" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793389 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="befdbf4d-7d20-40ca-9985-8309a0295dad" containerName="cinder-scheduler" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793401 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ab92bde-9b45-49ca-a6e9-43c8921b3002" containerName="ovn-controller" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793429 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="392b5094-f8ef-47b8-8dc5-9e1d2dbef612" containerName="rabbitmq" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793447 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85a9950-7e9d-4e16-9b35-d6912bacadf9" containerName="ceilometer-notification-agent" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793458 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85a9950-7e9d-4e16-9b35-d6912bacadf9" containerName="sg-core" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793472 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="1934e8d3-ef66-4d0e-8d12-bd958545270a" containerName="barbican-worker-log" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793488 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c3ae47e-fcf5-4397-a2a4-8e847e542d75" containerName="nova-scheduler-scheduler" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793504 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc57ffef-2527-4b16-b281-9139b6a0f1a1" containerName="cinder-api-log" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793516 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="57f61d22-4b79-4f80-b7dc-0f5bea4b506d" containerName="nova-metadata-metadata" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793531 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1224d0f-d488-49e6-b6dc-12a188b43a43" containerName="mariadb-account-delete" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793546 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="4280cc0e-ca6a-47d7-be4d-a05beb85de3c" containerName="dnsmasq-dns" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793559 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c438ca-0f93-434d-81ea-29ae82b217bf" containerName="openstack-network-exporter" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793575 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd442c75-9e94-4f54-81b6-68c19f4de9d8" containerName="mariadb-account-delete" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793587 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc61b266-e156-4999-8ec7-8aa1f1988e42" containerName="nova-cell1-novncproxy-novncproxy" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793598 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ed54a06-08b9-41a2-92d9-a745631e053c" containerName="galera" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793612 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f69d7d80-dc29-4483-917c-c25921b56e9c" containerName="proxy-server" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793628 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="1934e8d3-ef66-4d0e-8d12-bd958545270a" containerName="barbican-worker" Nov 21 
10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793640 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2069a31-382b-4fc4-acee-cf202be1de1e" containerName="glance-httpd" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793652 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8334e5f-f6cb-4c49-91d6-5e414ecc53f0" containerName="mariadb-account-delete" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.793666 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffb786ba-2a1a-4124-9ef7-116e12402f5c" containerName="mariadb-account-delete" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.795435 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5cbtn" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.801591 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5cbtn"] Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.934823 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1993446-d537-404f-b2c5-c294ea85f04f-catalog-content\") pod \"redhat-operators-5cbtn\" (UID: \"a1993446-d537-404f-b2c5-c294ea85f04f\") " pod="openshift-marketplace/redhat-operators-5cbtn" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.935040 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz2xn\" (UniqueName: \"kubernetes.io/projected/a1993446-d537-404f-b2c5-c294ea85f04f-kube-api-access-tz2xn\") pod \"redhat-operators-5cbtn\" (UID: \"a1993446-d537-404f-b2c5-c294ea85f04f\") " pod="openshift-marketplace/redhat-operators-5cbtn" Nov 21 10:08:01 crc kubenswrapper[4972]: I1121 10:08:01.935146 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1993446-d537-404f-b2c5-c294ea85f04f-utilities\") pod \"redhat-operators-5cbtn\" (UID: \"a1993446-d537-404f-b2c5-c294ea85f04f\") " pod="openshift-marketplace/redhat-operators-5cbtn" Nov 21 10:08:02 crc kubenswrapper[4972]: I1121 10:08:02.037576 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz2xn\" (UniqueName: \"kubernetes.io/projected/a1993446-d537-404f-b2c5-c294ea85f04f-kube-api-access-tz2xn\") pod \"redhat-operators-5cbtn\" (UID: \"a1993446-d537-404f-b2c5-c294ea85f04f\") " pod="openshift-marketplace/redhat-operators-5cbtn" Nov 21 10:08:02 crc kubenswrapper[4972]: I1121 10:08:02.037664 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1993446-d537-404f-b2c5-c294ea85f04f-utilities\") pod \"redhat-operators-5cbtn\" (UID: \"a1993446-d537-404f-b2c5-c294ea85f04f\") " pod="openshift-marketplace/redhat-operators-5cbtn" Nov 21 10:08:02 crc kubenswrapper[4972]: I1121 10:08:02.037866 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1993446-d537-404f-b2c5-c294ea85f04f-catalog-content\") pod \"redhat-operators-5cbtn\" (UID: \"a1993446-d537-404f-b2c5-c294ea85f04f\") " pod="openshift-marketplace/redhat-operators-5cbtn" Nov 21 10:08:02 crc kubenswrapper[4972]: I1121 10:08:02.038333 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a1993446-d537-404f-b2c5-c294ea85f04f-utilities\") pod \"redhat-operators-5cbtn\" (UID: \"a1993446-d537-404f-b2c5-c294ea85f04f\") " pod="openshift-marketplace/redhat-operators-5cbtn" Nov 21 10:08:02 crc kubenswrapper[4972]: I1121 10:08:02.038568 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1993446-d537-404f-b2c5-c294ea85f04f-catalog-content\") pod \"redhat-operators-5cbtn\" (UID: \"a1993446-d537-404f-b2c5-c294ea85f04f\") " pod="openshift-marketplace/redhat-operators-5cbtn" Nov 21 10:08:02 crc kubenswrapper[4972]: I1121 10:08:02.065369 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz2xn\" (UniqueName: \"kubernetes.io/projected/a1993446-d537-404f-b2c5-c294ea85f04f-kube-api-access-tz2xn\") pod \"redhat-operators-5cbtn\" (UID: \"a1993446-d537-404f-b2c5-c294ea85f04f\") " pod="openshift-marketplace/redhat-operators-5cbtn" Nov 21 10:08:02 crc kubenswrapper[4972]: I1121 10:08:02.135933 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5cbtn" Nov 21 10:08:02 crc kubenswrapper[4972]: E1121 10:08:02.423747 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" containerID="463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 21 10:08:02 crc kubenswrapper[4972]: E1121 10:08:02.424563 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" containerID="463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 21 10:08:02 crc kubenswrapper[4972]: E1121 10:08:02.424913 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" containerID="463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 21 10:08:02 crc kubenswrapper[4972]: E1121 10:08:02.424970 4972 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-4z7b5" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovsdb-server" Nov 21 10:08:02 crc kubenswrapper[4972]: E1121 10:08:02.425294 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 21 10:08:02 crc kubenswrapper[4972]: E1121 10:08:02.426779 4972 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 21 10:08:02 crc kubenswrapper[4972]: E1121 10:08:02.429929 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 21 10:08:02 crc kubenswrapper[4972]: E1121 10:08:02.429977 4972 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-4z7b5" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovs-vswitchd" Nov 21 10:08:02 crc kubenswrapper[4972]: I1121 10:08:02.596919 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5cbtn"] Nov 21 10:08:02 crc kubenswrapper[4972]: E1121 10:08:02.889538 4972 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1993446_d537_404f_b2c5_c294ea85f04f.slice/crio-29a276a74c6070f711961461f28193df29d5738657c6ae619be1d15d59c11da7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1993446_d537_404f_b2c5_c294ea85f04f.slice/crio-conmon-29a276a74c6070f711961461f28193df29d5738657c6ae619be1d15d59c11da7.scope\": RecentStats: unable to find data in memory cache]" Nov 21 10:08:03 crc kubenswrapper[4972]: I1121 10:08:03.513047 4972 generic.go:334] "Generic (PLEG): container finished" podID="a1993446-d537-404f-b2c5-c294ea85f04f" containerID="29a276a74c6070f711961461f28193df29d5738657c6ae619be1d15d59c11da7" exitCode=0 Nov 21 10:08:03 crc kubenswrapper[4972]: I1121 10:08:03.513135 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cbtn" event={"ID":"a1993446-d537-404f-b2c5-c294ea85f04f","Type":"ContainerDied","Data":"29a276a74c6070f711961461f28193df29d5738657c6ae619be1d15d59c11da7"} Nov 21 10:08:03 crc kubenswrapper[4972]: I1121 10:08:03.513552 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cbtn" event={"ID":"a1993446-d537-404f-b2c5-c294ea85f04f","Type":"ContainerStarted","Data":"944d49cd539ba3664b174c7f01abdcaa3385edcfe7d497c067bff5831e714ba7"} Nov 21 10:08:03 crc kubenswrapper[4972]: I1121 10:08:03.759847 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:08:03 crc kubenswrapper[4972]: E1121 10:08:03.760207 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:08:04 crc kubenswrapper[4972]: I1121 10:08:04.186675 4972 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vt47f"] Nov 21 10:08:04 crc kubenswrapper[4972]: I1121 10:08:04.190283 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vt47f" Nov 21 10:08:04 crc kubenswrapper[4972]: I1121 10:08:04.230301 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vt47f"] Nov 21 10:08:04 crc kubenswrapper[4972]: I1121 10:08:04.275049 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d37c248f-8b67-4f04-90b8-299d845f8ace-utilities\") pod \"certified-operators-vt47f\" (UID: \"d37c248f-8b67-4f04-90b8-299d845f8ace\") " pod="openshift-marketplace/certified-operators-vt47f" Nov 21 10:08:04 crc kubenswrapper[4972]: I1121 10:08:04.275390 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2pfd\" (UniqueName: \"kubernetes.io/projected/d37c248f-8b67-4f04-90b8-299d845f8ace-kube-api-access-x2pfd\") pod \"certified-operators-vt47f\" (UID: \"d37c248f-8b67-4f04-90b8-299d845f8ace\") " pod="openshift-marketplace/certified-operators-vt47f" Nov 21 10:08:04 crc kubenswrapper[4972]: I1121 10:08:04.275461 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d37c248f-8b67-4f04-90b8-299d845f8ace-catalog-content\") pod \"certified-operators-vt47f\" (UID: \"d37c248f-8b67-4f04-90b8-299d845f8ace\") " pod="openshift-marketplace/certified-operators-vt47f" Nov 21 10:08:04 crc kubenswrapper[4972]: I1121 10:08:04.377484 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d37c248f-8b67-4f04-90b8-299d845f8ace-catalog-content\") pod \"certified-operators-vt47f\" (UID: \"d37c248f-8b67-4f04-90b8-299d845f8ace\") " pod="openshift-marketplace/certified-operators-vt47f" Nov 21 10:08:04 crc kubenswrapper[4972]: I1121 10:08:04.377617 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d37c248f-8b67-4f04-90b8-299d845f8ace-utilities\") pod \"certified-operators-vt47f\" (UID: \"d37c248f-8b67-4f04-90b8-299d845f8ace\") " pod="openshift-marketplace/certified-operators-vt47f" Nov 21 10:08:04 crc kubenswrapper[4972]: I1121 10:08:04.377742 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2pfd\" (UniqueName: \"kubernetes.io/projected/d37c248f-8b67-4f04-90b8-299d845f8ace-kube-api-access-x2pfd\") pod \"certified-operators-vt47f\" (UID: \"d37c248f-8b67-4f04-90b8-299d845f8ace\") " pod="openshift-marketplace/certified-operators-vt47f" Nov 21 10:08:04 crc kubenswrapper[4972]: I1121 10:08:04.378501 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d37c248f-8b67-4f04-90b8-299d845f8ace-catalog-content\") pod \"certified-operators-vt47f\" (UID: \"d37c248f-8b67-4f04-90b8-299d845f8ace\") " pod="openshift-marketplace/certified-operators-vt47f" Nov 21 10:08:04 crc kubenswrapper[4972]: I1121 10:08:04.380609 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d37c248f-8b67-4f04-90b8-299d845f8ace-utilities\") pod 
\"certified-operators-vt47f\" (UID: \"d37c248f-8b67-4f04-90b8-299d845f8ace\") " pod="openshift-marketplace/certified-operators-vt47f" Nov 21 10:08:04 crc kubenswrapper[4972]: I1121 10:08:04.401903 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2pfd\" (UniqueName: \"kubernetes.io/projected/d37c248f-8b67-4f04-90b8-299d845f8ace-kube-api-access-x2pfd\") pod \"certified-operators-vt47f\" (UID: \"d37c248f-8b67-4f04-90b8-299d845f8ace\") " pod="openshift-marketplace/certified-operators-vt47f" Nov 21 10:08:04 crc kubenswrapper[4972]: I1121 10:08:04.527873 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vt47f" Nov 21 10:08:04 crc kubenswrapper[4972]: I1121 10:08:04.778757 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vt47f"] Nov 21 10:08:05 crc kubenswrapper[4972]: I1121 10:08:05.538036 4972 generic.go:334] "Generic (PLEG): container finished" podID="d37c248f-8b67-4f04-90b8-299d845f8ace" containerID="6e0821fcac40bc69aa38b175a345a1ac5696fd7552b2fe6748496905fe4c8d2d" exitCode=0 Nov 21 10:08:05 crc kubenswrapper[4972]: I1121 10:08:05.538121 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vt47f" event={"ID":"d37c248f-8b67-4f04-90b8-299d845f8ace","Type":"ContainerDied","Data":"6e0821fcac40bc69aa38b175a345a1ac5696fd7552b2fe6748496905fe4c8d2d"} Nov 21 10:08:05 crc kubenswrapper[4972]: I1121 10:08:05.538152 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vt47f" event={"ID":"d37c248f-8b67-4f04-90b8-299d845f8ace","Type":"ContainerStarted","Data":"b8b92a8e40d7d891b2143e35b96e9e5a42bd237e4dfdb357c1c1ee1a1c7b5219"} Nov 21 10:08:05 crc kubenswrapper[4972]: I1121 10:08:05.541710 4972 generic.go:334] "Generic (PLEG): container finished" podID="a1993446-d537-404f-b2c5-c294ea85f04f" containerID="fca247c73c0f274841f34b9b1a75f864115195b62251abaa6b18826af2b9715e" exitCode=0 Nov 21 10:08:05 crc kubenswrapper[4972]: I1121 10:08:05.541749 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cbtn" event={"ID":"a1993446-d537-404f-b2c5-c294ea85f04f","Type":"ContainerDied","Data":"fca247c73c0f274841f34b9b1a75f864115195b62251abaa6b18826af2b9715e"} Nov 21 10:08:06 crc kubenswrapper[4972]: I1121 10:08:06.557882 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cbtn" event={"ID":"a1993446-d537-404f-b2c5-c294ea85f04f","Type":"ContainerStarted","Data":"33d52269b3d2f5cfb4601f3b2e674cd6a24e1d9716eec0899f9744f9a226ee69"} Nov 21 10:08:07 crc kubenswrapper[4972]: E1121 10:08:07.424156 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" containerID="463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 21 10:08:07 crc kubenswrapper[4972]: E1121 10:08:07.424975 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" 
containerID="463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 21 10:08:07 crc kubenswrapper[4972]: E1121 10:08:07.425140 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 21 10:08:07 crc kubenswrapper[4972]: E1121 10:08:07.425380 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" containerID="463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 21 10:08:07 crc kubenswrapper[4972]: E1121 10:08:07.425441 4972 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-4z7b5" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovsdb-server" Nov 21 10:08:07 crc kubenswrapper[4972]: E1121 10:08:07.428249 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 21 10:08:07 crc kubenswrapper[4972]: E1121 10:08:07.429524 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 21 10:08:07 crc kubenswrapper[4972]: E1121 10:08:07.429557 4972 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-4z7b5" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovs-vswitchd" Nov 21 10:08:07 crc kubenswrapper[4972]: I1121 10:08:07.576248 4972 generic.go:334] "Generic (PLEG): container finished" podID="d37c248f-8b67-4f04-90b8-299d845f8ace" containerID="1fda2a34adec61d533abd180b1e9a4cf35004c7c318fe908f7f7d211081d62cc" exitCode=0 Nov 21 10:08:07 crc kubenswrapper[4972]: I1121 10:08:07.576316 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vt47f" event={"ID":"d37c248f-8b67-4f04-90b8-299d845f8ace","Type":"ContainerDied","Data":"1fda2a34adec61d533abd180b1e9a4cf35004c7c318fe908f7f7d211081d62cc"} Nov 21 10:08:07 crc kubenswrapper[4972]: I1121 10:08:07.604655 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5cbtn" podStartSLOduration=4.048852297 podStartE2EDuration="6.604631195s" podCreationTimestamp="2025-11-21 10:08:01 +0000 UTC" 
firstStartedPulling="2025-11-21 10:08:03.515300666 +0000 UTC m=+1628.624443204" lastFinishedPulling="2025-11-21 10:08:06.071079594 +0000 UTC m=+1631.180222102" observedRunningTime="2025-11-21 10:08:06.578460483 +0000 UTC m=+1631.687603001" watchObservedRunningTime="2025-11-21 10:08:07.604631195 +0000 UTC m=+1632.713773703" Nov 21 10:08:08 crc kubenswrapper[4972]: I1121 10:08:08.590855 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vt47f" event={"ID":"d37c248f-8b67-4f04-90b8-299d845f8ace","Type":"ContainerStarted","Data":"ff806456fe3f8453f26aeb9cfe06ce65d16376a9178d1f0cb7a4bc8fc0c8d048"} Nov 21 10:08:08 crc kubenswrapper[4972]: I1121 10:08:08.618271 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vt47f" podStartSLOduration=2.085978743 podStartE2EDuration="4.618249772s" podCreationTimestamp="2025-11-21 10:08:04 +0000 UTC" firstStartedPulling="2025-11-21 10:08:05.539541189 +0000 UTC m=+1630.648683697" lastFinishedPulling="2025-11-21 10:08:08.071812188 +0000 UTC m=+1633.180954726" observedRunningTime="2025-11-21 10:08:08.615471227 +0000 UTC m=+1633.724613745" watchObservedRunningTime="2025-11-21 10:08:08.618249772 +0000 UTC m=+1633.727392290" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.637685 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4z7b5_5ea385c8-0af5-4759-acf1-ee6dee48e488/ovs-vswitchd/0.log" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.639148 4972 generic.go:334] "Generic (PLEG): container finished" podID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerID="00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad" exitCode=137 Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.639235 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4z7b5" event={"ID":"5ea385c8-0af5-4759-acf1-ee6dee48e488","Type":"ContainerDied","Data":"00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad"} Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.639264 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4z7b5" event={"ID":"5ea385c8-0af5-4759-acf1-ee6dee48e488","Type":"ContainerDied","Data":"e2ddc3bc1d6938f973cb6fdd78406930d38a0abdf2eb4cb8cfe33cb6537c9980"} Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.639276 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2ddc3bc1d6938f973cb6fdd78406930d38a0abdf2eb4cb8cfe33cb6537c9980" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.646875 4972 generic.go:334] "Generic (PLEG): container finished" podID="31e140ab-a53a-4af2-864f-4c399d44f217" containerID="d4d2c9d3e605844fc00e4083833139b1121a575ad83be76839782a80b770f46a" exitCode=137 Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.646907 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerDied","Data":"d4d2c9d3e605844fc00e4083833139b1121a575ad83be76839782a80b770f46a"} Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.651682 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4z7b5_5ea385c8-0af5-4759-acf1-ee6dee48e488/ovs-vswitchd/0.log" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.652606 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.732029 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.789373 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-etc-ovs\") pod \"5ea385c8-0af5-4759-acf1-ee6dee48e488\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.789430 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/31e140ab-a53a-4af2-864f-4c399d44f217-lock\") pod \"31e140ab-a53a-4af2-864f-4c399d44f217\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.789496 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-var-run\") pod \"5ea385c8-0af5-4759-acf1-ee6dee48e488\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.789562 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhght\" (UniqueName: \"kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-kube-api-access-bhght\") pod \"31e140ab-a53a-4af2-864f-4c399d44f217\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.789584 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"31e140ab-a53a-4af2-864f-4c399d44f217\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.789611 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-var-lib\") pod \"5ea385c8-0af5-4759-acf1-ee6dee48e488\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.789682 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ea385c8-0af5-4759-acf1-ee6dee48e488-scripts\") pod \"5ea385c8-0af5-4759-acf1-ee6dee48e488\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.789705 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/31e140ab-a53a-4af2-864f-4c399d44f217-cache\") pod \"31e140ab-a53a-4af2-864f-4c399d44f217\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.789736 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift\") pod \"31e140ab-a53a-4af2-864f-4c399d44f217\" (UID: \"31e140ab-a53a-4af2-864f-4c399d44f217\") " Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.789759 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4h5g\" (UniqueName: 
\"kubernetes.io/projected/5ea385c8-0af5-4759-acf1-ee6dee48e488-kube-api-access-b4h5g\") pod \"5ea385c8-0af5-4759-acf1-ee6dee48e488\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.789789 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-var-log\") pod \"5ea385c8-0af5-4759-acf1-ee6dee48e488\" (UID: \"5ea385c8-0af5-4759-acf1-ee6dee48e488\") " Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.790110 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-var-log" (OuterVolumeSpecName: "var-log") pod "5ea385c8-0af5-4759-acf1-ee6dee48e488" (UID: "5ea385c8-0af5-4759-acf1-ee6dee48e488"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.790173 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "5ea385c8-0af5-4759-acf1-ee6dee48e488" (UID: "5ea385c8-0af5-4759-acf1-ee6dee48e488"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.791067 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-var-run" (OuterVolumeSpecName: "var-run") pod "5ea385c8-0af5-4759-acf1-ee6dee48e488" (UID: "5ea385c8-0af5-4759-acf1-ee6dee48e488"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.791271 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31e140ab-a53a-4af2-864f-4c399d44f217-cache" (OuterVolumeSpecName: "cache") pod "31e140ab-a53a-4af2-864f-4c399d44f217" (UID: "31e140ab-a53a-4af2-864f-4c399d44f217"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.791330 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-var-lib" (OuterVolumeSpecName: "var-lib") pod "5ea385c8-0af5-4759-acf1-ee6dee48e488" (UID: "5ea385c8-0af5-4759-acf1-ee6dee48e488"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.792015 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31e140ab-a53a-4af2-864f-4c399d44f217-lock" (OuterVolumeSpecName: "lock") pod "31e140ab-a53a-4af2-864f-4c399d44f217" (UID: "31e140ab-a53a-4af2-864f-4c399d44f217"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.792354 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ea385c8-0af5-4759-acf1-ee6dee48e488-scripts" (OuterVolumeSpecName: "scripts") pod "5ea385c8-0af5-4759-acf1-ee6dee48e488" (UID: "5ea385c8-0af5-4759-acf1-ee6dee48e488"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.796649 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "swift") pod "31e140ab-a53a-4af2-864f-4c399d44f217" (UID: "31e140ab-a53a-4af2-864f-4c399d44f217"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.797281 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-kube-api-access-bhght" (OuterVolumeSpecName: "kube-api-access-bhght") pod "31e140ab-a53a-4af2-864f-4c399d44f217" (UID: "31e140ab-a53a-4af2-864f-4c399d44f217"). InnerVolumeSpecName "kube-api-access-bhght". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.797628 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ea385c8-0af5-4759-acf1-ee6dee48e488-kube-api-access-b4h5g" (OuterVolumeSpecName: "kube-api-access-b4h5g") pod "5ea385c8-0af5-4759-acf1-ee6dee48e488" (UID: "5ea385c8-0af5-4759-acf1-ee6dee48e488"). InnerVolumeSpecName "kube-api-access-b4h5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.797940 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "31e140ab-a53a-4af2-864f-4c399d44f217" (UID: "31e140ab-a53a-4af2-864f-4c399d44f217"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.892913 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ea385c8-0af5-4759-acf1-ee6dee48e488-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.892976 4972 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/31e140ab-a53a-4af2-864f-4c399d44f217-cache\") on node \"crc\" DevicePath \"\"" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.893000 4972 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.893022 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4h5g\" (UniqueName: \"kubernetes.io/projected/5ea385c8-0af5-4759-acf1-ee6dee48e488-kube-api-access-b4h5g\") on node \"crc\" DevicePath \"\"" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.893043 4972 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-var-log\") on node \"crc\" DevicePath \"\"" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.893061 4972 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-etc-ovs\") on node \"crc\" DevicePath \"\"" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.893079 4972 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/31e140ab-a53a-4af2-864f-4c399d44f217-lock\") on node \"crc\" DevicePath \"\"" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.893098 4972 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-var-run\") on node \"crc\" DevicePath \"\"" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.893145 4972 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.893167 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhght\" (UniqueName: \"kubernetes.io/projected/31e140ab-a53a-4af2-864f-4c399d44f217-kube-api-access-bhght\") on node \"crc\" DevicePath \"\"" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.893186 4972 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5ea385c8-0af5-4759-acf1-ee6dee48e488-var-lib\") on node \"crc\" DevicePath \"\"" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.925495 4972 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 21 10:08:11 crc kubenswrapper[4972]: I1121 10:08:11.995188 4972 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 21 10:08:12 crc kubenswrapper[4972]: I1121 10:08:12.136225 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5cbtn" Nov 21 10:08:12 crc kubenswrapper[4972]: I1121 10:08:12.136787 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5cbtn" Nov 21 10:08:12 crc kubenswrapper[4972]: I1121 10:08:12.670646 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"31e140ab-a53a-4af2-864f-4c399d44f217","Type":"ContainerDied","Data":"19de5ae47de38759656d8b02d8d0f5cd55c1234a94a1c1b7a5f0ad33a98b5d58"} Nov 21 10:08:12 crc kubenswrapper[4972]: I1121 10:08:12.670770 4972 scope.go:117] "RemoveContainer" containerID="d4d2c9d3e605844fc00e4083833139b1121a575ad83be76839782a80b770f46a" Nov 21 10:08:12 crc kubenswrapper[4972]: I1121 10:08:12.670781 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-4z7b5" Nov 21 10:08:12 crc kubenswrapper[4972]: I1121 10:08:12.670805 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 21 10:08:12 crc kubenswrapper[4972]: I1121 10:08:12.709720 4972 scope.go:117] "RemoveContainer" containerID="d6c241802e71e9521da5b44bb300b3ed93a83b5a2a3b5384891a37d0477bcf5f" Nov 21 10:08:12 crc kubenswrapper[4972]: I1121 10:08:12.729721 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-4z7b5"] Nov 21 10:08:12 crc kubenswrapper[4972]: I1121 10:08:12.740360 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-4z7b5"] Nov 21 10:08:12 crc kubenswrapper[4972]: I1121 10:08:12.767081 4972 scope.go:117] "RemoveContainer" containerID="b27dea1fedce06fdcc7b8b10bfa4e01b3977a2c1835d79507b63bffd8cd7cf4f" Nov 21 10:08:12 crc kubenswrapper[4972]: I1121 10:08:12.782857 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Nov 21 10:08:12 crc kubenswrapper[4972]: I1121 10:08:12.798099 4972 scope.go:117] "RemoveContainer" containerID="0f6fda84aaa98d450bf8db3dd84c394bcbdd91eb2c614ce51ee1f7e2fdf05d9e" Nov 21 10:08:12 crc kubenswrapper[4972]: I1121 10:08:12.801039 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Nov 21 10:08:12 crc kubenswrapper[4972]: I1121 10:08:12.835178 4972 scope.go:117] "RemoveContainer" containerID="7e3179b2cf36ea30c1f398322b657083876aff67dca73310812bf6eda27e562d" Nov 21 10:08:12 crc kubenswrapper[4972]: I1121 10:08:12.864326 4972 scope.go:117] "RemoveContainer" containerID="6b51383c400616239b3920aae870a35808849c73b781889b2d7c3fca1086fcc9" Nov 21 10:08:12 crc kubenswrapper[4972]: I1121 10:08:12.892018 4972 scope.go:117] "RemoveContainer" containerID="6d2997e2bf31afa38122b707eaffd973a10f37e15af3ef380d90f5a0e46e40a2" Nov 21 10:08:12 crc kubenswrapper[4972]: I1121 10:08:12.923081 4972 scope.go:117] "RemoveContainer" containerID="ef4d5eb5bf9e2085aa31deab41d35f315b471c1a281ec7d0fdb5669055ceae7e" Nov 21 10:08:12 crc kubenswrapper[4972]: I1121 10:08:12.954181 4972 scope.go:117] "RemoveContainer" containerID="da2ba4db5685edc3025f879f7e189cf7165c184ef92f7d26d3118102cbc00186" Nov 21 10:08:12 crc kubenswrapper[4972]: I1121 10:08:12.981049 4972 scope.go:117] "RemoveContainer" containerID="56e50d004614f42f95a39a005d2e581ae7498a4ab2ace52e0c8e44e4cb64b156" Nov 21 10:08:13 crc kubenswrapper[4972]: I1121 10:08:13.017422 4972 scope.go:117] "RemoveContainer" containerID="4e7f746ee8e85533e7ed177d7195703edc2217f4d9450127a0eefddf988dd729" Nov 21 10:08:13 crc kubenswrapper[4972]: I1121 10:08:13.044514 4972 scope.go:117] "RemoveContainer" containerID="1d432671871d10b2f9d36122beb37f70113843388eedcb543148c0842f970029" Nov 21 10:08:13 crc kubenswrapper[4972]: I1121 10:08:13.078165 4972 scope.go:117] "RemoveContainer" containerID="8c31ccc0050d4e99074a90c40277647465e43314e1fdbb8b1f6a9b4753e956a8" Nov 21 10:08:13 crc kubenswrapper[4972]: I1121 10:08:13.104723 4972 scope.go:117] "RemoveContainer" containerID="7ac0c52eaf55d9c6a4f11a7c5914428a511032a6d41ca1f5562b5b774ab41f34" Nov 21 10:08:13 crc kubenswrapper[4972]: I1121 10:08:13.125385 4972 scope.go:117] "RemoveContainer" containerID="ba577ff7853e877e687486121c6f0ab731e335150c782fdb6337e45da1ea7e56" Nov 21 10:08:13 crc kubenswrapper[4972]: I1121 10:08:13.212540 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5cbtn" podUID="a1993446-d537-404f-b2c5-c294ea85f04f" containerName="registry-server" probeResult="failure" output=< Nov 21 10:08:13 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 
1s Nov 21 10:08:13 crc kubenswrapper[4972]: > Nov 21 10:08:13 crc kubenswrapper[4972]: I1121 10:08:13.776428 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" path="/var/lib/kubelet/pods/31e140ab-a53a-4af2-864f-4c399d44f217/volumes" Nov 21 10:08:13 crc kubenswrapper[4972]: I1121 10:08:13.778995 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" path="/var/lib/kubelet/pods/5ea385c8-0af5-4759-acf1-ee6dee48e488/volumes" Nov 21 10:08:14 crc kubenswrapper[4972]: I1121 10:08:14.528437 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vt47f" Nov 21 10:08:14 crc kubenswrapper[4972]: I1121 10:08:14.528763 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vt47f" Nov 21 10:08:14 crc kubenswrapper[4972]: I1121 10:08:14.592799 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vt47f" Nov 21 10:08:14 crc kubenswrapper[4972]: I1121 10:08:14.763826 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vt47f" Nov 21 10:08:14 crc kubenswrapper[4972]: I1121 10:08:14.845132 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vt47f"] Nov 21 10:08:15 crc kubenswrapper[4972]: I1121 10:08:15.259543 4972 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podbefdbf4d-7d20-40ca-9985-8309a0295dad"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podbefdbf4d-7d20-40ca-9985-8309a0295dad] : Timed out while waiting for systemd to remove kubepods-besteffort-podbefdbf4d_7d20_40ca_9985_8309a0295dad.slice" Nov 21 10:08:15 crc kubenswrapper[4972]: E1121 10:08:15.259632 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podbefdbf4d-7d20-40ca-9985-8309a0295dad] : unable to destroy cgroup paths for cgroup [kubepods besteffort podbefdbf4d-7d20-40ca-9985-8309a0295dad] : Timed out while waiting for systemd to remove kubepods-besteffort-podbefdbf4d_7d20_40ca_9985_8309a0295dad.slice" pod="openstack/cinder-scheduler-0" podUID="befdbf4d-7d20-40ca-9985-8309a0295dad" Nov 21 10:08:15 crc kubenswrapper[4972]: I1121 10:08:15.721773 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 10:08:15 crc kubenswrapper[4972]: I1121 10:08:15.754165 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 10:08:15 crc kubenswrapper[4972]: I1121 10:08:15.778746 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 10:08:16 crc kubenswrapper[4972]: I1121 10:08:16.754756 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vt47f" podUID="d37c248f-8b67-4f04-90b8-299d845f8ace" containerName="registry-server" containerID="cri-o://ff806456fe3f8453f26aeb9cfe06ce65d16376a9178d1f0cb7a4bc8fc0c8d048" gracePeriod=2 Nov 21 10:08:16 crc kubenswrapper[4972]: I1121 10:08:16.759678 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:08:16 crc kubenswrapper[4972]: E1121 10:08:16.760557 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.306191 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vt47f" Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.390695 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d37c248f-8b67-4f04-90b8-299d845f8ace-catalog-content\") pod \"d37c248f-8b67-4f04-90b8-299d845f8ace\" (UID: \"d37c248f-8b67-4f04-90b8-299d845f8ace\") " Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.391298 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d37c248f-8b67-4f04-90b8-299d845f8ace-utilities\") pod \"d37c248f-8b67-4f04-90b8-299d845f8ace\" (UID: \"d37c248f-8b67-4f04-90b8-299d845f8ace\") " Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.391688 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2pfd\" (UniqueName: \"kubernetes.io/projected/d37c248f-8b67-4f04-90b8-299d845f8ace-kube-api-access-x2pfd\") pod \"d37c248f-8b67-4f04-90b8-299d845f8ace\" (UID: \"d37c248f-8b67-4f04-90b8-299d845f8ace\") " Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.392493 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d37c248f-8b67-4f04-90b8-299d845f8ace-utilities" (OuterVolumeSpecName: "utilities") pod "d37c248f-8b67-4f04-90b8-299d845f8ace" (UID: "d37c248f-8b67-4f04-90b8-299d845f8ace"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.400806 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d37c248f-8b67-4f04-90b8-299d845f8ace-kube-api-access-x2pfd" (OuterVolumeSpecName: "kube-api-access-x2pfd") pod "d37c248f-8b67-4f04-90b8-299d845f8ace" (UID: "d37c248f-8b67-4f04-90b8-299d845f8ace"). InnerVolumeSpecName "kube-api-access-x2pfd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.493651 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d37c248f-8b67-4f04-90b8-299d845f8ace-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.493701 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2pfd\" (UniqueName: \"kubernetes.io/projected/d37c248f-8b67-4f04-90b8-299d845f8ace-kube-api-access-x2pfd\") on node \"crc\" DevicePath \"\"" Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.773173 4972 generic.go:334] "Generic (PLEG): container finished" podID="d37c248f-8b67-4f04-90b8-299d845f8ace" containerID="ff806456fe3f8453f26aeb9cfe06ce65d16376a9178d1f0cb7a4bc8fc0c8d048" exitCode=0 Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.773293 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vt47f" Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.778366 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="befdbf4d-7d20-40ca-9985-8309a0295dad" path="/var/lib/kubelet/pods/befdbf4d-7d20-40ca-9985-8309a0295dad/volumes" Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.784130 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vt47f" event={"ID":"d37c248f-8b67-4f04-90b8-299d845f8ace","Type":"ContainerDied","Data":"ff806456fe3f8453f26aeb9cfe06ce65d16376a9178d1f0cb7a4bc8fc0c8d048"} Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.784218 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vt47f" event={"ID":"d37c248f-8b67-4f04-90b8-299d845f8ace","Type":"ContainerDied","Data":"b8b92a8e40d7d891b2143e35b96e9e5a42bd237e4dfdb357c1c1ee1a1c7b5219"} Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.784262 4972 scope.go:117] "RemoveContainer" containerID="ff806456fe3f8453f26aeb9cfe06ce65d16376a9178d1f0cb7a4bc8fc0c8d048" Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.823344 4972 scope.go:117] "RemoveContainer" containerID="1fda2a34adec61d533abd180b1e9a4cf35004c7c318fe908f7f7d211081d62cc" Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.860626 4972 scope.go:117] "RemoveContainer" containerID="6e0821fcac40bc69aa38b175a345a1ac5696fd7552b2fe6748496905fe4c8d2d" Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.902042 4972 scope.go:117] "RemoveContainer" containerID="ff806456fe3f8453f26aeb9cfe06ce65d16376a9178d1f0cb7a4bc8fc0c8d048" Nov 21 10:08:17 crc kubenswrapper[4972]: E1121 10:08:17.902776 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff806456fe3f8453f26aeb9cfe06ce65d16376a9178d1f0cb7a4bc8fc0c8d048\": container with ID starting with ff806456fe3f8453f26aeb9cfe06ce65d16376a9178d1f0cb7a4bc8fc0c8d048 not found: ID does not exist" containerID="ff806456fe3f8453f26aeb9cfe06ce65d16376a9178d1f0cb7a4bc8fc0c8d048" Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.902905 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff806456fe3f8453f26aeb9cfe06ce65d16376a9178d1f0cb7a4bc8fc0c8d048"} err="failed to get container status \"ff806456fe3f8453f26aeb9cfe06ce65d16376a9178d1f0cb7a4bc8fc0c8d048\": rpc error: code = NotFound desc = could not find container 
\"ff806456fe3f8453f26aeb9cfe06ce65d16376a9178d1f0cb7a4bc8fc0c8d048\": container with ID starting with ff806456fe3f8453f26aeb9cfe06ce65d16376a9178d1f0cb7a4bc8fc0c8d048 not found: ID does not exist" Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.902943 4972 scope.go:117] "RemoveContainer" containerID="1fda2a34adec61d533abd180b1e9a4cf35004c7c318fe908f7f7d211081d62cc" Nov 21 10:08:17 crc kubenswrapper[4972]: E1121 10:08:17.903786 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fda2a34adec61d533abd180b1e9a4cf35004c7c318fe908f7f7d211081d62cc\": container with ID starting with 1fda2a34adec61d533abd180b1e9a4cf35004c7c318fe908f7f7d211081d62cc not found: ID does not exist" containerID="1fda2a34adec61d533abd180b1e9a4cf35004c7c318fe908f7f7d211081d62cc" Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.903894 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fda2a34adec61d533abd180b1e9a4cf35004c7c318fe908f7f7d211081d62cc"} err="failed to get container status \"1fda2a34adec61d533abd180b1e9a4cf35004c7c318fe908f7f7d211081d62cc\": rpc error: code = NotFound desc = could not find container \"1fda2a34adec61d533abd180b1e9a4cf35004c7c318fe908f7f7d211081d62cc\": container with ID starting with 1fda2a34adec61d533abd180b1e9a4cf35004c7c318fe908f7f7d211081d62cc not found: ID does not exist" Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.903937 4972 scope.go:117] "RemoveContainer" containerID="6e0821fcac40bc69aa38b175a345a1ac5696fd7552b2fe6748496905fe4c8d2d" Nov 21 10:08:17 crc kubenswrapper[4972]: E1121 10:08:17.904370 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e0821fcac40bc69aa38b175a345a1ac5696fd7552b2fe6748496905fe4c8d2d\": container with ID starting with 6e0821fcac40bc69aa38b175a345a1ac5696fd7552b2fe6748496905fe4c8d2d not found: ID does not exist" containerID="6e0821fcac40bc69aa38b175a345a1ac5696fd7552b2fe6748496905fe4c8d2d" Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.904437 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e0821fcac40bc69aa38b175a345a1ac5696fd7552b2fe6748496905fe4c8d2d"} err="failed to get container status \"6e0821fcac40bc69aa38b175a345a1ac5696fd7552b2fe6748496905fe4c8d2d\": rpc error: code = NotFound desc = could not find container \"6e0821fcac40bc69aa38b175a345a1ac5696fd7552b2fe6748496905fe4c8d2d\": container with ID starting with 6e0821fcac40bc69aa38b175a345a1ac5696fd7552b2fe6748496905fe4c8d2d not found: ID does not exist" Nov 21 10:08:17 crc kubenswrapper[4972]: I1121 10:08:17.997614 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d37c248f-8b67-4f04-90b8-299d845f8ace-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d37c248f-8b67-4f04-90b8-299d845f8ace" (UID: "d37c248f-8b67-4f04-90b8-299d845f8ace"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:08:18 crc kubenswrapper[4972]: I1121 10:08:18.004500 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d37c248f-8b67-4f04-90b8-299d845f8ace-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 10:08:18 crc kubenswrapper[4972]: I1121 10:08:18.127289 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vt47f"] Nov 21 10:08:18 crc kubenswrapper[4972]: I1121 10:08:18.146373 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vt47f"] Nov 21 10:08:19 crc kubenswrapper[4972]: I1121 10:08:19.786303 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d37c248f-8b67-4f04-90b8-299d845f8ace" path="/var/lib/kubelet/pods/d37c248f-8b67-4f04-90b8-299d845f8ace/volumes" Nov 21 10:08:22 crc kubenswrapper[4972]: I1121 10:08:22.673206 4972 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-gb5dr container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 21 10:08:22 crc kubenswrapper[4972]: I1121 10:08:22.673668 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gb5dr" podUID="9741a397-9e67-459c-9dcd-9163fb05c6e4" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 10:08:22 crc kubenswrapper[4972]: I1121 10:08:22.674800 4972 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-gb5dr container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 21 10:08:22 crc kubenswrapper[4972]: I1121 10:08:22.674832 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gb5dr" podUID="9741a397-9e67-459c-9dcd-9163fb05c6e4" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 10:08:22 crc kubenswrapper[4972]: I1121 10:08:22.741782 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5cbtn" Nov 21 10:08:22 crc kubenswrapper[4972]: I1121 10:08:22.794162 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5cbtn" Nov 21 10:08:22 crc kubenswrapper[4972]: I1121 10:08:22.974466 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5cbtn"] Nov 21 10:08:23 crc kubenswrapper[4972]: I1121 10:08:23.845640 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5cbtn" podUID="a1993446-d537-404f-b2c5-c294ea85f04f" containerName="registry-server" containerID="cri-o://33d52269b3d2f5cfb4601f3b2e674cd6a24e1d9716eec0899f9744f9a226ee69" gracePeriod=2 Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 
10:08:24.331126 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5cbtn" Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.490757 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1993446-d537-404f-b2c5-c294ea85f04f-catalog-content\") pod \"a1993446-d537-404f-b2c5-c294ea85f04f\" (UID: \"a1993446-d537-404f-b2c5-c294ea85f04f\") " Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.490903 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz2xn\" (UniqueName: \"kubernetes.io/projected/a1993446-d537-404f-b2c5-c294ea85f04f-kube-api-access-tz2xn\") pod \"a1993446-d537-404f-b2c5-c294ea85f04f\" (UID: \"a1993446-d537-404f-b2c5-c294ea85f04f\") " Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.491114 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1993446-d537-404f-b2c5-c294ea85f04f-utilities\") pod \"a1993446-d537-404f-b2c5-c294ea85f04f\" (UID: \"a1993446-d537-404f-b2c5-c294ea85f04f\") " Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.492681 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1993446-d537-404f-b2c5-c294ea85f04f-utilities" (OuterVolumeSpecName: "utilities") pod "a1993446-d537-404f-b2c5-c294ea85f04f" (UID: "a1993446-d537-404f-b2c5-c294ea85f04f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.499944 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1993446-d537-404f-b2c5-c294ea85f04f-kube-api-access-tz2xn" (OuterVolumeSpecName: "kube-api-access-tz2xn") pod "a1993446-d537-404f-b2c5-c294ea85f04f" (UID: "a1993446-d537-404f-b2c5-c294ea85f04f"). InnerVolumeSpecName "kube-api-access-tz2xn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.592811 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz2xn\" (UniqueName: \"kubernetes.io/projected/a1993446-d537-404f-b2c5-c294ea85f04f-kube-api-access-tz2xn\") on node \"crc\" DevicePath \"\"" Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.592934 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1993446-d537-404f-b2c5-c294ea85f04f-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.632137 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1993446-d537-404f-b2c5-c294ea85f04f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1993446-d537-404f-b2c5-c294ea85f04f" (UID: "a1993446-d537-404f-b2c5-c294ea85f04f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.694518 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1993446-d537-404f-b2c5-c294ea85f04f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.868559 4972 generic.go:334] "Generic (PLEG): container finished" podID="a1993446-d537-404f-b2c5-c294ea85f04f" containerID="33d52269b3d2f5cfb4601f3b2e674cd6a24e1d9716eec0899f9744f9a226ee69" exitCode=0 Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.868672 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5cbtn" Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.869444 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cbtn" event={"ID":"a1993446-d537-404f-b2c5-c294ea85f04f","Type":"ContainerDied","Data":"33d52269b3d2f5cfb4601f3b2e674cd6a24e1d9716eec0899f9744f9a226ee69"} Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.869677 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cbtn" event={"ID":"a1993446-d537-404f-b2c5-c294ea85f04f","Type":"ContainerDied","Data":"944d49cd539ba3664b174c7f01abdcaa3385edcfe7d497c067bff5831e714ba7"} Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.869737 4972 scope.go:117] "RemoveContainer" containerID="33d52269b3d2f5cfb4601f3b2e674cd6a24e1d9716eec0899f9744f9a226ee69" Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.900248 4972 scope.go:117] "RemoveContainer" containerID="fca247c73c0f274841f34b9b1a75f864115195b62251abaa6b18826af2b9715e" Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.904879 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5cbtn"] Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.913849 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5cbtn"] Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.921071 4972 scope.go:117] "RemoveContainer" containerID="29a276a74c6070f711961461f28193df29d5738657c6ae619be1d15d59c11da7" Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.959776 4972 scope.go:117] "RemoveContainer" containerID="33d52269b3d2f5cfb4601f3b2e674cd6a24e1d9716eec0899f9744f9a226ee69" Nov 21 10:08:24 crc kubenswrapper[4972]: E1121 10:08:24.960259 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33d52269b3d2f5cfb4601f3b2e674cd6a24e1d9716eec0899f9744f9a226ee69\": container with ID starting with 33d52269b3d2f5cfb4601f3b2e674cd6a24e1d9716eec0899f9744f9a226ee69 not found: ID does not exist" containerID="33d52269b3d2f5cfb4601f3b2e674cd6a24e1d9716eec0899f9744f9a226ee69" Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.960291 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33d52269b3d2f5cfb4601f3b2e674cd6a24e1d9716eec0899f9744f9a226ee69"} err="failed to get container status \"33d52269b3d2f5cfb4601f3b2e674cd6a24e1d9716eec0899f9744f9a226ee69\": rpc error: code = NotFound desc = could not find container \"33d52269b3d2f5cfb4601f3b2e674cd6a24e1d9716eec0899f9744f9a226ee69\": container with ID starting with 33d52269b3d2f5cfb4601f3b2e674cd6a24e1d9716eec0899f9744f9a226ee69 not found: ID does not exist" Nov 21 10:08:24 crc 
kubenswrapper[4972]: I1121 10:08:24.960312 4972 scope.go:117] "RemoveContainer" containerID="fca247c73c0f274841f34b9b1a75f864115195b62251abaa6b18826af2b9715e" Nov 21 10:08:24 crc kubenswrapper[4972]: E1121 10:08:24.960516 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fca247c73c0f274841f34b9b1a75f864115195b62251abaa6b18826af2b9715e\": container with ID starting with fca247c73c0f274841f34b9b1a75f864115195b62251abaa6b18826af2b9715e not found: ID does not exist" containerID="fca247c73c0f274841f34b9b1a75f864115195b62251abaa6b18826af2b9715e" Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.960543 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fca247c73c0f274841f34b9b1a75f864115195b62251abaa6b18826af2b9715e"} err="failed to get container status \"fca247c73c0f274841f34b9b1a75f864115195b62251abaa6b18826af2b9715e\": rpc error: code = NotFound desc = could not find container \"fca247c73c0f274841f34b9b1a75f864115195b62251abaa6b18826af2b9715e\": container with ID starting with fca247c73c0f274841f34b9b1a75f864115195b62251abaa6b18826af2b9715e not found: ID does not exist" Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.960556 4972 scope.go:117] "RemoveContainer" containerID="29a276a74c6070f711961461f28193df29d5738657c6ae619be1d15d59c11da7" Nov 21 10:08:24 crc kubenswrapper[4972]: E1121 10:08:24.961122 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29a276a74c6070f711961461f28193df29d5738657c6ae619be1d15d59c11da7\": container with ID starting with 29a276a74c6070f711961461f28193df29d5738657c6ae619be1d15d59c11da7 not found: ID does not exist" containerID="29a276a74c6070f711961461f28193df29d5738657c6ae619be1d15d59c11da7" Nov 21 10:08:24 crc kubenswrapper[4972]: I1121 10:08:24.961149 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29a276a74c6070f711961461f28193df29d5738657c6ae619be1d15d59c11da7"} err="failed to get container status \"29a276a74c6070f711961461f28193df29d5738657c6ae619be1d15d59c11da7\": rpc error: code = NotFound desc = could not find container \"29a276a74c6070f711961461f28193df29d5738657c6ae619be1d15d59c11da7\": container with ID starting with 29a276a74c6070f711961461f28193df29d5738657c6ae619be1d15d59c11da7 not found: ID does not exist" Nov 21 10:08:25 crc kubenswrapper[4972]: I1121 10:08:25.774430 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1993446-d537-404f-b2c5-c294ea85f04f" path="/var/lib/kubelet/pods/a1993446-d537-404f-b2c5-c294ea85f04f/volumes" Nov 21 10:08:30 crc kubenswrapper[4972]: I1121 10:08:30.759679 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:08:30 crc kubenswrapper[4972]: E1121 10:08:30.761563 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:08:45 crc kubenswrapper[4972]: I1121 10:08:45.767894 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" 
Nov 21 10:08:45 crc kubenswrapper[4972]: E1121 10:08:45.768951 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:08:57 crc kubenswrapper[4972]: I1121 10:08:57.759445 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:08:57 crc kubenswrapper[4972]: E1121 10:08:57.760456 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:09:10 crc kubenswrapper[4972]: I1121 10:09:10.761120 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:09:10 crc kubenswrapper[4972]: E1121 10:09:10.762266 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:09:21 crc kubenswrapper[4972]: I1121 10:09:21.113928 4972 scope.go:117] "RemoveContainer" containerID="7abb8b4c502b2cb32d88ebecf840b58358ed86ae8173e0a5c658fa64af90dfec" Nov 21 10:09:21 crc kubenswrapper[4972]: I1121 10:09:21.141718 4972 scope.go:117] "RemoveContainer" containerID="83552defc6eb86a6d2f1be27f0156209bcf898e7697a7b3a1905020948794f66" Nov 21 10:09:21 crc kubenswrapper[4972]: I1121 10:09:21.167045 4972 scope.go:117] "RemoveContainer" containerID="39748cd594210f7c13af38928aceccdc179584a99252069659d800a228a58a60" Nov 21 10:09:21 crc kubenswrapper[4972]: I1121 10:09:21.209047 4972 scope.go:117] "RemoveContainer" containerID="613470ef4119922307ef6d589fc1c4ded51811bb8032294cf93c302245167b27" Nov 21 10:09:21 crc kubenswrapper[4972]: I1121 10:09:21.246824 4972 scope.go:117] "RemoveContainer" containerID="a909c70e3b54475753a77772967a465d3b32beecf8c0a0fe9b25def80fdbd717" Nov 21 10:09:21 crc kubenswrapper[4972]: I1121 10:09:21.269561 4972 scope.go:117] "RemoveContainer" containerID="2987bfc8c3433f1d4cc0836276916780ba23045d1c20f275a6b22907e06e8fe3" Nov 21 10:09:21 crc kubenswrapper[4972]: I1121 10:09:21.298170 4972 scope.go:117] "RemoveContainer" containerID="3d03e81a2709bb4bb9d8ad9ba4f1732af07bf0dcd30da3511f821d5655ee6022" Nov 21 10:09:21 crc kubenswrapper[4972]: I1121 10:09:21.323501 4972 scope.go:117] "RemoveContainer" containerID="77f55bd48f2ffa8e96492bbb83fe188afc99a3622fa28987f4339fc82a7e2d1f" Nov 21 10:09:21 crc kubenswrapper[4972]: I1121 10:09:21.352109 4972 scope.go:117] "RemoveContainer" containerID="463bd654660c4276c81f29475b78c5e4042bdef3578a136a6a307ae0665277d4" Nov 21 10:09:21 crc kubenswrapper[4972]: I1121 10:09:21.381568 4972 scope.go:117] "RemoveContainer" 
containerID="0ebc458b32b9d10b1184b304a5e7fcaa59a0d788defdd71615e2e02413c09037" Nov 21 10:09:21 crc kubenswrapper[4972]: I1121 10:09:21.413887 4972 scope.go:117] "RemoveContainer" containerID="540eaef147f61e4660d957279b7e669f8f34f32ded5be72d74f06e156f789e70" Nov 21 10:09:21 crc kubenswrapper[4972]: I1121 10:09:21.452399 4972 scope.go:117] "RemoveContainer" containerID="33a42aa8758cd57fb1c197ce576cf4fcc274c6934767a362e4b3ba80e0e8193d" Nov 21 10:09:21 crc kubenswrapper[4972]: I1121 10:09:21.481365 4972 scope.go:117] "RemoveContainer" containerID="ce8d0e8927723578b54d7bbfaba904b7ac707ae47d865ec3b2caf2ab8d994389" Nov 21 10:09:21 crc kubenswrapper[4972]: I1121 10:09:21.511029 4972 scope.go:117] "RemoveContainer" containerID="00822a0cba1d910be0ad9e49feb16ef0b93ccfecf4fc7b79c5cd0b4df69bf4ad" Nov 21 10:09:25 crc kubenswrapper[4972]: I1121 10:09:25.767563 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:09:25 crc kubenswrapper[4972]: E1121 10:09:25.768107 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:09:38 crc kubenswrapper[4972]: I1121 10:09:38.760295 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:09:38 crc kubenswrapper[4972]: E1121 10:09:38.761219 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:09:50 crc kubenswrapper[4972]: I1121 10:09:50.759511 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:09:50 crc kubenswrapper[4972]: E1121 10:09:50.760179 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:10:04 crc kubenswrapper[4972]: I1121 10:10:04.760571 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:10:04 crc kubenswrapper[4972]: E1121 10:10:04.762076 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:10:19 crc kubenswrapper[4972]: I1121 10:10:19.761340 4972 
scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:10:19 crc kubenswrapper[4972]: E1121 10:10:19.762429 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:10:21 crc kubenswrapper[4972]: I1121 10:10:21.834397 4972 scope.go:117] "RemoveContainer" containerID="c4226bed7fef0e427cad20601d46d442deb53e7fb865a67e0d99d0deaaa2b853" Nov 21 10:10:21 crc kubenswrapper[4972]: I1121 10:10:21.882002 4972 scope.go:117] "RemoveContainer" containerID="016624325af5539ff9d7e73defed3e0b610a2c3e4ff6a386d9970d3beb0ddb8a" Nov 21 10:10:21 crc kubenswrapper[4972]: I1121 10:10:21.906363 4972 scope.go:117] "RemoveContainer" containerID="ccd21252aef50de80ab417e8456d9fe1e4d96060e10f6fc8f8afed0fbd2131d1" Nov 21 10:10:21 crc kubenswrapper[4972]: I1121 10:10:21.929263 4972 scope.go:117] "RemoveContainer" containerID="dbf761b28dcff3b4c93b8b8713f7d86b5a2245b941292e19874fd9aa1d054251" Nov 21 10:10:21 crc kubenswrapper[4972]: I1121 10:10:21.974228 4972 scope.go:117] "RemoveContainer" containerID="8b4f21364893f0f5f283d8468e4025288bceaa720f5ffd92ef7f40c79cbb6d87" Nov 21 10:10:22 crc kubenswrapper[4972]: I1121 10:10:22.020882 4972 scope.go:117] "RemoveContainer" containerID="e0dff9e68692c505602b38482f5ad3c36d4b795e8864a675a304157763e3ed7e" Nov 21 10:10:22 crc kubenswrapper[4972]: I1121 10:10:22.043312 4972 scope.go:117] "RemoveContainer" containerID="7281a142a4c53eaf85f98f739a1bc21ac3985c85ea2af36bfcd9fa7599671dbb" Nov 21 10:10:22 crc kubenswrapper[4972]: I1121 10:10:22.067329 4972 scope.go:117] "RemoveContainer" containerID="941818d891525d6c6ed7988263f09932e3cbafbbf488025ab0072f5debeb0701" Nov 21 10:10:22 crc kubenswrapper[4972]: I1121 10:10:22.090118 4972 scope.go:117] "RemoveContainer" containerID="750a20df693e3ef6fdeb49daa0b334d27d70c08a01a27a3ea0406685b4a367fd" Nov 21 10:10:22 crc kubenswrapper[4972]: I1121 10:10:22.115082 4972 scope.go:117] "RemoveContainer" containerID="8dd0d4b4fdaee9e203bdfba8f14112ab1b42869a816c8570d3ef18bd5b1ea26d" Nov 21 10:10:22 crc kubenswrapper[4972]: I1121 10:10:22.164912 4972 scope.go:117] "RemoveContainer" containerID="d9a7623dd3801db2be940faeb7090155ee653942661cca43e56c0af4e156fbfc" Nov 21 10:10:22 crc kubenswrapper[4972]: I1121 10:10:22.191565 4972 scope.go:117] "RemoveContainer" containerID="71dca4990d9c5583e8c8a7fe6073ef33d3ed04c9a4b5ebf3eb0c8b7f8769c385" Nov 21 10:10:22 crc kubenswrapper[4972]: I1121 10:10:22.217199 4972 scope.go:117] "RemoveContainer" containerID="9689c03b0a8e65aa033cb5caf6a5738e37e8766032cb4c5608cf0d7247a3f626" Nov 21 10:10:22 crc kubenswrapper[4972]: I1121 10:10:22.239507 4972 scope.go:117] "RemoveContainer" containerID="bcd8fe3e44217018095632c736ff35f44419a3efd411bc546910c3270f906dfe" Nov 21 10:10:32 crc kubenswrapper[4972]: I1121 10:10:32.759389 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:10:32 crc kubenswrapper[4972]: E1121 10:10:32.760657 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:10:45 crc kubenswrapper[4972]: I1121 10:10:45.769555 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:10:45 crc kubenswrapper[4972]: E1121 10:10:45.770643 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:10:56 crc kubenswrapper[4972]: I1121 10:10:56.759162 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:10:56 crc kubenswrapper[4972]: E1121 10:10:56.760366 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:11:08 crc kubenswrapper[4972]: I1121 10:11:08.759624 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:11:08 crc kubenswrapper[4972]: E1121 10:11:08.762205 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:11:19 crc kubenswrapper[4972]: I1121 10:11:19.760218 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:11:19 crc kubenswrapper[4972]: E1121 10:11:19.762371 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:11:22 crc kubenswrapper[4972]: I1121 10:11:22.526519 4972 scope.go:117] "RemoveContainer" containerID="970b2cf0e7c0d0678a5e91fb11daab6d9b02b23cb875fda14cfe0c82fc707282" Nov 21 10:11:22 crc kubenswrapper[4972]: I1121 10:11:22.570654 4972 scope.go:117] "RemoveContainer" containerID="de1b1ead1be9bb4700eff0120e321dcdcc207643851d89741743ca89c0feb9be" Nov 21 10:11:22 crc kubenswrapper[4972]: I1121 10:11:22.604286 4972 scope.go:117] "RemoveContainer" containerID="be888e4c1ff80facebd8b83ac71955557b3ee41fc26a589d45f4417d3e5dd817" Nov 21 10:11:22 crc kubenswrapper[4972]: I1121 10:11:22.632150 4972 
scope.go:117] "RemoveContainer" containerID="e72281d879c70916014fd08fab0961f2ce60d1723ec476c4e8eaac55838d50c5" Nov 21 10:11:22 crc kubenswrapper[4972]: I1121 10:11:22.660123 4972 scope.go:117] "RemoveContainer" containerID="a84df8a5c99a95c300cc9bc766b529621a802a107975b46bcdb8f96199772bb6" Nov 21 10:11:22 crc kubenswrapper[4972]: I1121 10:11:22.688460 4972 scope.go:117] "RemoveContainer" containerID="4743a7a33e8a464b0ea411bfad83c19a075010f2dcd322939fb901f06f09722d" Nov 21 10:11:22 crc kubenswrapper[4972]: I1121 10:11:22.715679 4972 scope.go:117] "RemoveContainer" containerID="b74ee39cc4ce9125ba478173dc5e89ad053a7a0c316ad87b411b9c18475c318b" Nov 21 10:11:22 crc kubenswrapper[4972]: I1121 10:11:22.744417 4972 scope.go:117] "RemoveContainer" containerID="f7e978a05b49d9a8b55170b6916c286f1ba8d5d193d9fb52b446f78bd3d0ec08" Nov 21 10:11:22 crc kubenswrapper[4972]: I1121 10:11:22.804168 4972 scope.go:117] "RemoveContainer" containerID="2921c78fe04ce5e118035b984bf13a134833efbd0278281daf335ac8e8cdab45" Nov 21 10:11:22 crc kubenswrapper[4972]: I1121 10:11:22.839509 4972 scope.go:117] "RemoveContainer" containerID="4d77ecd5438c1e9b16f7c8d4f0e5a8b33983d1efefc68af6391bbc8b9f26e966" Nov 21 10:11:22 crc kubenswrapper[4972]: I1121 10:11:22.860026 4972 scope.go:117] "RemoveContainer" containerID="e14cb4aad212d1998014fdf6f5ffdb1b7c811353ca8ed380411d627dab945835" Nov 21 10:11:22 crc kubenswrapper[4972]: I1121 10:11:22.889126 4972 scope.go:117] "RemoveContainer" containerID="f5f7c37dc9ae815f57a02f21d1296f31ab5066f826a28262e76dc7b2ea449e3c" Nov 21 10:11:22 crc kubenswrapper[4972]: I1121 10:11:22.913157 4972 scope.go:117] "RemoveContainer" containerID="3d2cc2221d9f2b7335dabef232673e1b90f3e68b118b80db724d2b99225db57e" Nov 21 10:11:22 crc kubenswrapper[4972]: I1121 10:11:22.937415 4972 scope.go:117] "RemoveContainer" containerID="7051de170622f6e3d9ee0aeb88f14d4e81e53671ef41a9e5a7aa056b0f637786" Nov 21 10:11:22 crc kubenswrapper[4972]: I1121 10:11:22.983815 4972 scope.go:117] "RemoveContainer" containerID="6d2aa63779319b38cf4983db05e91553bbee48b755832575671149b098b5a84b" Nov 21 10:11:33 crc kubenswrapper[4972]: I1121 10:11:33.759167 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:11:33 crc kubenswrapper[4972]: E1121 10:11:33.760060 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:11:44 crc kubenswrapper[4972]: I1121 10:11:44.759168 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:11:44 crc kubenswrapper[4972]: E1121 10:11:44.760035 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:11:56 crc kubenswrapper[4972]: I1121 10:11:56.759915 4972 scope.go:117] "RemoveContainer" 
containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:11:56 crc kubenswrapper[4972]: E1121 10:11:56.762660 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:12:11 crc kubenswrapper[4972]: I1121 10:12:11.759534 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:12:11 crc kubenswrapper[4972]: E1121 10:12:11.762387 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:12:23 crc kubenswrapper[4972]: I1121 10:12:23.270197 4972 scope.go:117] "RemoveContainer" containerID="06c737d7c83c0bc7a7bdce717dd3e7a0257f299bce0776d4d99cc1591e820d5f" Nov 21 10:12:23 crc kubenswrapper[4972]: I1121 10:12:23.290323 4972 scope.go:117] "RemoveContainer" containerID="fe81fdd868966682f89fccba99d42378bc0de72b90894870947da9e565b84dd0" Nov 21 10:12:23 crc kubenswrapper[4972]: I1121 10:12:23.308631 4972 scope.go:117] "RemoveContainer" containerID="c8508fa7978ab01e9ed41259f44d3ae46c68a33ace18516f62967a60ede2d29a" Nov 21 10:12:23 crc kubenswrapper[4972]: I1121 10:12:23.324369 4972 scope.go:117] "RemoveContainer" containerID="dec471ad525a075f9c99d2422dca58d35515dfaf88d497c02e47ce89e9a10e48" Nov 21 10:12:23 crc kubenswrapper[4972]: I1121 10:12:23.351775 4972 scope.go:117] "RemoveContainer" containerID="907a74a8da1e0c67123d39c0abc7dfdbefecbc274401b4ca5d2db3f701c867f3" Nov 21 10:12:23 crc kubenswrapper[4972]: I1121 10:12:23.372020 4972 scope.go:117] "RemoveContainer" containerID="815f529eea17e9f7242ec1816284b3062a0b5d36440a8c09059c8536e6dd206a" Nov 21 10:12:23 crc kubenswrapper[4972]: I1121 10:12:23.404013 4972 scope.go:117] "RemoveContainer" containerID="d7c94a0ec2d64bdac85ed6b31b82d18571c8887351544e015a31a7312cb84e80" Nov 21 10:12:23 crc kubenswrapper[4972]: I1121 10:12:23.418393 4972 scope.go:117] "RemoveContainer" containerID="7aba1f8585fdd39e8cd959cda54a96eaf1e17261fba2d57da5be6f64f842deb5" Nov 21 10:12:23 crc kubenswrapper[4972]: I1121 10:12:23.455054 4972 scope.go:117] "RemoveContainer" containerID="c60b2c13bcf0e6d2ad9127ed62e64813d1e6a8bfa8cfc5f265d4cfb21ca4e9e5" Nov 21 10:12:23 crc kubenswrapper[4972]: I1121 10:12:23.759485 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:12:23 crc kubenswrapper[4972]: E1121 10:12:23.759753 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:12:37 crc 
kubenswrapper[4972]: I1121 10:12:37.759474 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:12:38 crc kubenswrapper[4972]: I1121 10:12:38.904528 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"136a0fe52643dc5baace5594cd21942ff3034baa9018d09747515da442185ed0"} Nov 21 10:13:23 crc kubenswrapper[4972]: I1121 10:13:23.601491 4972 scope.go:117] "RemoveContainer" containerID="9663e5eeed349feea42f465ac185ce4a281832b49f5ad7e6676845ad1940d586" Nov 21 10:13:23 crc kubenswrapper[4972]: I1121 10:13:23.640550 4972 scope.go:117] "RemoveContainer" containerID="a049093f6f09a795c8830a24c84bf46a23071b861fe261c2d86c5c7a2d1602c9" Nov 21 10:13:23 crc kubenswrapper[4972]: I1121 10:13:23.705572 4972 scope.go:117] "RemoveContainer" containerID="a50f5adc14def76f321fa0ba2955141d2c00ea811995acd79453b97be54e414e" Nov 21 10:14:23 crc kubenswrapper[4972]: I1121 10:14:23.785807 4972 scope.go:117] "RemoveContainer" containerID="afd0454d2d22fbe7e1b75217dcf7c4a0117d4b1c86410c4d152284906a742754" Nov 21 10:14:23 crc kubenswrapper[4972]: I1121 10:14:23.824506 4972 scope.go:117] "RemoveContainer" containerID="67e8a5e3395fadd7aeb273e85f7b2a4f78e8f4c78e7fb3ffbe5561d7becea437" Nov 21 10:14:23 crc kubenswrapper[4972]: I1121 10:14:23.874640 4972 scope.go:117] "RemoveContainer" containerID="018ecfda8736af92fbfa98308446bcbcff66c1d5bd21c2f05351dc0453b58305" Nov 21 10:14:23 crc kubenswrapper[4972]: I1121 10:14:23.908900 4972 scope.go:117] "RemoveContainer" containerID="5c4178bb82f3320e37e3c09aa58e76bbdcf7d74bf35c0a1b2ed17a19ce71599a" Nov 21 10:14:23 crc kubenswrapper[4972]: I1121 10:14:23.942188 4972 scope.go:117] "RemoveContainer" containerID="07786baae1ddf77982c2d5f450f534ebeb2e7cc9884d66f899ba3263e65f0ad8" Nov 21 10:14:23 crc kubenswrapper[4972]: I1121 10:14:23.976420 4972 scope.go:117] "RemoveContainer" containerID="113314d71f8dd77620c3845233c4a215fb34baafe37ddbba0898cb7f503dba83" Nov 21 10:14:24 crc kubenswrapper[4972]: I1121 10:14:24.002202 4972 scope.go:117] "RemoveContainer" containerID="89ccc751e52d22869faea0021e71a2a785e988e4b549c89fec6ec8009f3c77b5" Nov 21 10:14:56 crc kubenswrapper[4972]: I1121 10:14:56.182615 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:14:56 crc kubenswrapper[4972]: I1121 10:14:56.183643 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.166174 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395335-nbbnv"] Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167097 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="object-auditor" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167125 4972 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="object-auditor" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167152 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="object-expirer" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167165 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="object-expirer" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167184 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="container-replicator" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167198 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="container-replicator" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167217 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1993446-d537-404f-b2c5-c294ea85f04f" containerName="registry-server" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167229 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1993446-d537-404f-b2c5-c294ea85f04f" containerName="registry-server" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167253 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="account-reaper" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167264 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="account-reaper" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167282 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1993446-d537-404f-b2c5-c294ea85f04f" containerName="extract-content" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167294 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1993446-d537-404f-b2c5-c294ea85f04f" containerName="extract-content" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167309 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="account-server" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167320 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="account-server" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167341 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d37c248f-8b67-4f04-90b8-299d845f8ace" containerName="registry-server" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167353 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="d37c248f-8b67-4f04-90b8-299d845f8ace" containerName="registry-server" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167370 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="object-server" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167381 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="object-server" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167396 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="rsync" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167408 4972 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="rsync" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167430 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="object-updater" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167443 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="object-updater" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167466 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="swift-recon-cron" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167479 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="swift-recon-cron" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167499 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="account-auditor" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167511 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="account-auditor" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167535 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovsdb-server-init" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167548 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovsdb-server-init" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167562 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="container-updater" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167574 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="container-updater" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167590 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1993446-d537-404f-b2c5-c294ea85f04f" containerName="extract-utilities" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167602 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1993446-d537-404f-b2c5-c294ea85f04f" containerName="extract-utilities" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167625 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="account-replicator" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167637 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="account-replicator" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167657 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="container-server" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167670 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="container-server" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167683 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovs-vswitchd" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167694 4972 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovs-vswitchd" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167713 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovsdb-server" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167725 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovsdb-server" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167742 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d37c248f-8b67-4f04-90b8-299d845f8ace" containerName="extract-utilities" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167753 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="d37c248f-8b67-4f04-90b8-299d845f8ace" containerName="extract-utilities" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167776 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d37c248f-8b67-4f04-90b8-299d845f8ace" containerName="extract-content" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167788 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="d37c248f-8b67-4f04-90b8-299d845f8ace" containerName="extract-content" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167803 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="object-replicator" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167814 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="object-replicator" Nov 21 10:15:00 crc kubenswrapper[4972]: E1121 10:15:00.167869 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="container-auditor" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.167890 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="container-auditor" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.168126 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="account-auditor" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.168153 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="container-server" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.168169 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="object-updater" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.168190 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovsdb-server" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.168209 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="swift-recon-cron" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.168225 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="account-server" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.168247 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="object-auditor" Nov 21 10:15:00 crc 
kubenswrapper[4972]: I1121 10:15:00.168269 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="account-reaper" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.168288 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="container-auditor" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.168308 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="object-replicator" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.168328 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ea385c8-0af5-4759-acf1-ee6dee48e488" containerName="ovs-vswitchd" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.168346 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="object-expirer" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.168367 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="account-replicator" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.168380 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="container-updater" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.168400 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="d37c248f-8b67-4f04-90b8-299d845f8ace" containerName="registry-server" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.168423 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="container-replicator" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.168442 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="rsync" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.168453 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1993446-d537-404f-b2c5-c294ea85f04f" containerName="registry-server" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.168469 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e140ab-a53a-4af2-864f-4c399d44f217" containerName="object-server" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.169183 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395335-nbbnv" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.172457 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.174174 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.189724 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395335-nbbnv"] Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.237761 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0f1691f-af0d-4343-827a-513217babb7d-config-volume\") pod \"collect-profiles-29395335-nbbnv\" (UID: \"e0f1691f-af0d-4343-827a-513217babb7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395335-nbbnv" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.238039 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0f1691f-af0d-4343-827a-513217babb7d-secret-volume\") pod \"collect-profiles-29395335-nbbnv\" (UID: \"e0f1691f-af0d-4343-827a-513217babb7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395335-nbbnv" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.238247 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkrzf\" (UniqueName: \"kubernetes.io/projected/e0f1691f-af0d-4343-827a-513217babb7d-kube-api-access-tkrzf\") pod \"collect-profiles-29395335-nbbnv\" (UID: \"e0f1691f-af0d-4343-827a-513217babb7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395335-nbbnv" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.340435 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkrzf\" (UniqueName: \"kubernetes.io/projected/e0f1691f-af0d-4343-827a-513217babb7d-kube-api-access-tkrzf\") pod \"collect-profiles-29395335-nbbnv\" (UID: \"e0f1691f-af0d-4343-827a-513217babb7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395335-nbbnv" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.340606 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0f1691f-af0d-4343-827a-513217babb7d-config-volume\") pod \"collect-profiles-29395335-nbbnv\" (UID: \"e0f1691f-af0d-4343-827a-513217babb7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395335-nbbnv" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.340759 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0f1691f-af0d-4343-827a-513217babb7d-secret-volume\") pod \"collect-profiles-29395335-nbbnv\" (UID: \"e0f1691f-af0d-4343-827a-513217babb7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395335-nbbnv" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.342316 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0f1691f-af0d-4343-827a-513217babb7d-config-volume\") pod 
\"collect-profiles-29395335-nbbnv\" (UID: \"e0f1691f-af0d-4343-827a-513217babb7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395335-nbbnv" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.355000 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0f1691f-af0d-4343-827a-513217babb7d-secret-volume\") pod \"collect-profiles-29395335-nbbnv\" (UID: \"e0f1691f-af0d-4343-827a-513217babb7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395335-nbbnv" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.371493 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkrzf\" (UniqueName: \"kubernetes.io/projected/e0f1691f-af0d-4343-827a-513217babb7d-kube-api-access-tkrzf\") pod \"collect-profiles-29395335-nbbnv\" (UID: \"e0f1691f-af0d-4343-827a-513217babb7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395335-nbbnv" Nov 21 10:15:00 crc kubenswrapper[4972]: I1121 10:15:00.508022 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395335-nbbnv" Nov 21 10:15:01 crc kubenswrapper[4972]: I1121 10:15:01.027936 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395335-nbbnv"] Nov 21 10:15:01 crc kubenswrapper[4972]: W1121 10:15:01.043132 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0f1691f_af0d_4343_827a_513217babb7d.slice/crio-227a4fbf0c86e44b895e5354c010e9f0fef158266a3aec3b802bcd2ef1646980 WatchSource:0}: Error finding container 227a4fbf0c86e44b895e5354c010e9f0fef158266a3aec3b802bcd2ef1646980: Status 404 returned error can't find the container with id 227a4fbf0c86e44b895e5354c010e9f0fef158266a3aec3b802bcd2ef1646980 Nov 21 10:15:01 crc kubenswrapper[4972]: I1121 10:15:01.312955 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395335-nbbnv" event={"ID":"e0f1691f-af0d-4343-827a-513217babb7d","Type":"ContainerStarted","Data":"227a4fbf0c86e44b895e5354c010e9f0fef158266a3aec3b802bcd2ef1646980"} Nov 21 10:15:02 crc kubenswrapper[4972]: E1121 10:15:02.180107 4972 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0f1691f_af0d_4343_827a_513217babb7d.slice/crio-4cff9a935e8d7c70e3d4e30bacd44a6aef447885d0d0b33807c157573fab6ea3.scope\": RecentStats: unable to find data in memory cache]" Nov 21 10:15:02 crc kubenswrapper[4972]: I1121 10:15:02.323165 4972 generic.go:334] "Generic (PLEG): container finished" podID="e0f1691f-af0d-4343-827a-513217babb7d" containerID="4cff9a935e8d7c70e3d4e30bacd44a6aef447885d0d0b33807c157573fab6ea3" exitCode=0 Nov 21 10:15:02 crc kubenswrapper[4972]: I1121 10:15:02.323344 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395335-nbbnv" event={"ID":"e0f1691f-af0d-4343-827a-513217babb7d","Type":"ContainerDied","Data":"4cff9a935e8d7c70e3d4e30bacd44a6aef447885d0d0b33807c157573fab6ea3"} Nov 21 10:15:03 crc kubenswrapper[4972]: I1121 10:15:03.694007 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395335-nbbnv" Nov 21 10:15:03 crc kubenswrapper[4972]: I1121 10:15:03.793752 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkrzf\" (UniqueName: \"kubernetes.io/projected/e0f1691f-af0d-4343-827a-513217babb7d-kube-api-access-tkrzf\") pod \"e0f1691f-af0d-4343-827a-513217babb7d\" (UID: \"e0f1691f-af0d-4343-827a-513217babb7d\") " Nov 21 10:15:03 crc kubenswrapper[4972]: I1121 10:15:03.794155 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0f1691f-af0d-4343-827a-513217babb7d-config-volume\") pod \"e0f1691f-af0d-4343-827a-513217babb7d\" (UID: \"e0f1691f-af0d-4343-827a-513217babb7d\") " Nov 21 10:15:03 crc kubenswrapper[4972]: I1121 10:15:03.794216 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0f1691f-af0d-4343-827a-513217babb7d-secret-volume\") pod \"e0f1691f-af0d-4343-827a-513217babb7d\" (UID: \"e0f1691f-af0d-4343-827a-513217babb7d\") " Nov 21 10:15:03 crc kubenswrapper[4972]: I1121 10:15:03.794941 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0f1691f-af0d-4343-827a-513217babb7d-config-volume" (OuterVolumeSpecName: "config-volume") pod "e0f1691f-af0d-4343-827a-513217babb7d" (UID: "e0f1691f-af0d-4343-827a-513217babb7d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:15:03 crc kubenswrapper[4972]: I1121 10:15:03.801217 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0f1691f-af0d-4343-827a-513217babb7d-kube-api-access-tkrzf" (OuterVolumeSpecName: "kube-api-access-tkrzf") pod "e0f1691f-af0d-4343-827a-513217babb7d" (UID: "e0f1691f-af0d-4343-827a-513217babb7d"). InnerVolumeSpecName "kube-api-access-tkrzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:15:03 crc kubenswrapper[4972]: I1121 10:15:03.801541 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f1691f-af0d-4343-827a-513217babb7d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e0f1691f-af0d-4343-827a-513217babb7d" (UID: "e0f1691f-af0d-4343-827a-513217babb7d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:15:03 crc kubenswrapper[4972]: I1121 10:15:03.896798 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkrzf\" (UniqueName: \"kubernetes.io/projected/e0f1691f-af0d-4343-827a-513217babb7d-kube-api-access-tkrzf\") on node \"crc\" DevicePath \"\"" Nov 21 10:15:03 crc kubenswrapper[4972]: I1121 10:15:03.896860 4972 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0f1691f-af0d-4343-827a-513217babb7d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 10:15:03 crc kubenswrapper[4972]: I1121 10:15:03.896878 4972 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0f1691f-af0d-4343-827a-513217babb7d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 10:15:04 crc kubenswrapper[4972]: I1121 10:15:04.344964 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395335-nbbnv" event={"ID":"e0f1691f-af0d-4343-827a-513217babb7d","Type":"ContainerDied","Data":"227a4fbf0c86e44b895e5354c010e9f0fef158266a3aec3b802bcd2ef1646980"} Nov 21 10:15:04 crc kubenswrapper[4972]: I1121 10:15:04.345008 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="227a4fbf0c86e44b895e5354c010e9f0fef158266a3aec3b802bcd2ef1646980" Nov 21 10:15:04 crc kubenswrapper[4972]: I1121 10:15:04.345279 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395335-nbbnv" Nov 21 10:15:04 crc kubenswrapper[4972]: I1121 10:15:04.773122 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc"] Nov 21 10:15:04 crc kubenswrapper[4972]: I1121 10:15:04.780881 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395290-bswxc"] Nov 21 10:15:05 crc kubenswrapper[4972]: I1121 10:15:05.798015 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fc91391-3c93-4fe0-9c24-f8aad9c21fd2" path="/var/lib/kubelet/pods/5fc91391-3c93-4fe0-9c24-f8aad9c21fd2/volumes" Nov 21 10:15:24 crc kubenswrapper[4972]: I1121 10:15:24.130808 4972 scope.go:117] "RemoveContainer" containerID="c770f2b39614924c55c37a5e6f1314439f648f8b6a36680aa10924ad5d983fba" Nov 21 10:15:26 crc kubenswrapper[4972]: I1121 10:15:26.178771 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:15:26 crc kubenswrapper[4972]: I1121 10:15:26.180537 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:15:56 crc kubenswrapper[4972]: I1121 10:15:56.178681 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 21 10:15:56 crc kubenswrapper[4972]: I1121 10:15:56.179391 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:15:56 crc kubenswrapper[4972]: I1121 10:15:56.179462 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 10:15:56 crc kubenswrapper[4972]: I1121 10:15:56.180667 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"136a0fe52643dc5baace5594cd21942ff3034baa9018d09747515da442185ed0"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 10:15:56 crc kubenswrapper[4972]: I1121 10:15:56.180757 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://136a0fe52643dc5baace5594cd21942ff3034baa9018d09747515da442185ed0" gracePeriod=600 Nov 21 10:15:56 crc kubenswrapper[4972]: I1121 10:15:56.880063 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="136a0fe52643dc5baace5594cd21942ff3034baa9018d09747515da442185ed0" exitCode=0 Nov 21 10:15:56 crc kubenswrapper[4972]: I1121 10:15:56.880135 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"136a0fe52643dc5baace5594cd21942ff3034baa9018d09747515da442185ed0"} Nov 21 10:15:56 crc kubenswrapper[4972]: I1121 10:15:56.880720 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a"} Nov 21 10:15:56 crc kubenswrapper[4972]: I1121 10:15:56.880746 4972 scope.go:117] "RemoveContainer" containerID="c81a66255bd2890338f7a2b59c2571a7882306e7f2f58eed8192d329f7b819a8" Nov 21 10:16:33 crc kubenswrapper[4972]: I1121 10:16:33.062052 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wkr62"] Nov 21 10:16:33 crc kubenswrapper[4972]: E1121 10:16:33.062898 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f1691f-af0d-4343-827a-513217babb7d" containerName="collect-profiles" Nov 21 10:16:33 crc kubenswrapper[4972]: I1121 10:16:33.062912 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f1691f-af0d-4343-827a-513217babb7d" containerName="collect-profiles" Nov 21 10:16:33 crc kubenswrapper[4972]: I1121 10:16:33.063115 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f1691f-af0d-4343-827a-513217babb7d" containerName="collect-profiles" Nov 21 10:16:33 crc kubenswrapper[4972]: I1121 10:16:33.064382 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wkr62" Nov 21 10:16:33 crc kubenswrapper[4972]: I1121 10:16:33.086290 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wkr62"] Nov 21 10:16:33 crc kubenswrapper[4972]: I1121 10:16:33.166495 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99f149b7-37e9-4277-a508-08e99000340c-utilities\") pod \"community-operators-wkr62\" (UID: \"99f149b7-37e9-4277-a508-08e99000340c\") " pod="openshift-marketplace/community-operators-wkr62" Nov 21 10:16:33 crc kubenswrapper[4972]: I1121 10:16:33.166989 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99f149b7-37e9-4277-a508-08e99000340c-catalog-content\") pod \"community-operators-wkr62\" (UID: \"99f149b7-37e9-4277-a508-08e99000340c\") " pod="openshift-marketplace/community-operators-wkr62" Nov 21 10:16:33 crc kubenswrapper[4972]: I1121 10:16:33.167051 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt4lh\" (UniqueName: \"kubernetes.io/projected/99f149b7-37e9-4277-a508-08e99000340c-kube-api-access-qt4lh\") pod \"community-operators-wkr62\" (UID: \"99f149b7-37e9-4277-a508-08e99000340c\") " pod="openshift-marketplace/community-operators-wkr62" Nov 21 10:16:33 crc kubenswrapper[4972]: I1121 10:16:33.268033 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99f149b7-37e9-4277-a508-08e99000340c-utilities\") pod \"community-operators-wkr62\" (UID: \"99f149b7-37e9-4277-a508-08e99000340c\") " pod="openshift-marketplace/community-operators-wkr62" Nov 21 10:16:33 crc kubenswrapper[4972]: I1121 10:16:33.268112 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99f149b7-37e9-4277-a508-08e99000340c-catalog-content\") pod \"community-operators-wkr62\" (UID: \"99f149b7-37e9-4277-a508-08e99000340c\") " pod="openshift-marketplace/community-operators-wkr62" Nov 21 10:16:33 crc kubenswrapper[4972]: I1121 10:16:33.268151 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt4lh\" (UniqueName: \"kubernetes.io/projected/99f149b7-37e9-4277-a508-08e99000340c-kube-api-access-qt4lh\") pod \"community-operators-wkr62\" (UID: \"99f149b7-37e9-4277-a508-08e99000340c\") " pod="openshift-marketplace/community-operators-wkr62" Nov 21 10:16:33 crc kubenswrapper[4972]: I1121 10:16:33.268700 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99f149b7-37e9-4277-a508-08e99000340c-utilities\") pod \"community-operators-wkr62\" (UID: \"99f149b7-37e9-4277-a508-08e99000340c\") " pod="openshift-marketplace/community-operators-wkr62" Nov 21 10:16:33 crc kubenswrapper[4972]: I1121 10:16:33.268711 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99f149b7-37e9-4277-a508-08e99000340c-catalog-content\") pod \"community-operators-wkr62\" (UID: \"99f149b7-37e9-4277-a508-08e99000340c\") " pod="openshift-marketplace/community-operators-wkr62" Nov 21 10:16:33 crc kubenswrapper[4972]: I1121 10:16:33.298774 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qt4lh\" (UniqueName: \"kubernetes.io/projected/99f149b7-37e9-4277-a508-08e99000340c-kube-api-access-qt4lh\") pod \"community-operators-wkr62\" (UID: \"99f149b7-37e9-4277-a508-08e99000340c\") " pod="openshift-marketplace/community-operators-wkr62" Nov 21 10:16:33 crc kubenswrapper[4972]: I1121 10:16:33.393397 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wkr62" Nov 21 10:16:33 crc kubenswrapper[4972]: I1121 10:16:33.919571 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wkr62"] Nov 21 10:16:34 crc kubenswrapper[4972]: I1121 10:16:34.228476 4972 generic.go:334] "Generic (PLEG): container finished" podID="99f149b7-37e9-4277-a508-08e99000340c" containerID="d5eabb0bc2925a5005512387f243fbe585f2e2da9c5c91f10e96659ec113cade" exitCode=0 Nov 21 10:16:34 crc kubenswrapper[4972]: I1121 10:16:34.228546 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkr62" event={"ID":"99f149b7-37e9-4277-a508-08e99000340c","Type":"ContainerDied","Data":"d5eabb0bc2925a5005512387f243fbe585f2e2da9c5c91f10e96659ec113cade"} Nov 21 10:16:34 crc kubenswrapper[4972]: I1121 10:16:34.228884 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkr62" event={"ID":"99f149b7-37e9-4277-a508-08e99000340c","Type":"ContainerStarted","Data":"de600a356fe78c7d1141897fd195f4df32e2977067a14934b917d074ee2dd62d"} Nov 21 10:16:34 crc kubenswrapper[4972]: I1121 10:16:34.232287 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 10:16:35 crc kubenswrapper[4972]: I1121 10:16:35.241403 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkr62" event={"ID":"99f149b7-37e9-4277-a508-08e99000340c","Type":"ContainerStarted","Data":"93ffbc14282a167fff6120e38a0574bbb18bd4bd6879ad76d4ea7e6af12a24dd"} Nov 21 10:16:36 crc kubenswrapper[4972]: I1121 10:16:36.255705 4972 generic.go:334] "Generic (PLEG): container finished" podID="99f149b7-37e9-4277-a508-08e99000340c" containerID="93ffbc14282a167fff6120e38a0574bbb18bd4bd6879ad76d4ea7e6af12a24dd" exitCode=0 Nov 21 10:16:36 crc kubenswrapper[4972]: I1121 10:16:36.255882 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkr62" event={"ID":"99f149b7-37e9-4277-a508-08e99000340c","Type":"ContainerDied","Data":"93ffbc14282a167fff6120e38a0574bbb18bd4bd6879ad76d4ea7e6af12a24dd"} Nov 21 10:16:38 crc kubenswrapper[4972]: I1121 10:16:38.309001 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkr62" event={"ID":"99f149b7-37e9-4277-a508-08e99000340c","Type":"ContainerStarted","Data":"da7cfbfde1031b0e0010b0315719a90eed9927e55ccab763dcb33cb5c34699d1"} Nov 21 10:16:38 crc kubenswrapper[4972]: I1121 10:16:38.344736 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wkr62" podStartSLOduration=2.576900979 podStartE2EDuration="5.344706626s" podCreationTimestamp="2025-11-21 10:16:33 +0000 UTC" firstStartedPulling="2025-11-21 10:16:34.231572879 +0000 UTC m=+2139.340715387" lastFinishedPulling="2025-11-21 10:16:36.999378496 +0000 UTC m=+2142.108521034" observedRunningTime="2025-11-21 10:16:38.332823299 +0000 UTC m=+2143.441965817" watchObservedRunningTime="2025-11-21 
10:16:38.344706626 +0000 UTC m=+2143.453849164" Nov 21 10:16:43 crc kubenswrapper[4972]: I1121 10:16:43.393928 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wkr62" Nov 21 10:16:43 crc kubenswrapper[4972]: I1121 10:16:43.394650 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wkr62" Nov 21 10:16:43 crc kubenswrapper[4972]: I1121 10:16:43.475331 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wkr62" Nov 21 10:16:44 crc kubenswrapper[4972]: I1121 10:16:44.434212 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wkr62" Nov 21 10:16:44 crc kubenswrapper[4972]: I1121 10:16:44.488410 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wkr62"] Nov 21 10:16:46 crc kubenswrapper[4972]: I1121 10:16:46.378383 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wkr62" podUID="99f149b7-37e9-4277-a508-08e99000340c" containerName="registry-server" containerID="cri-o://da7cfbfde1031b0e0010b0315719a90eed9927e55ccab763dcb33cb5c34699d1" gracePeriod=2 Nov 21 10:16:47 crc kubenswrapper[4972]: I1121 10:16:47.390864 4972 generic.go:334] "Generic (PLEG): container finished" podID="99f149b7-37e9-4277-a508-08e99000340c" containerID="da7cfbfde1031b0e0010b0315719a90eed9927e55ccab763dcb33cb5c34699d1" exitCode=0 Nov 21 10:16:47 crc kubenswrapper[4972]: I1121 10:16:47.391038 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkr62" event={"ID":"99f149b7-37e9-4277-a508-08e99000340c","Type":"ContainerDied","Data":"da7cfbfde1031b0e0010b0315719a90eed9927e55ccab763dcb33cb5c34699d1"} Nov 21 10:16:47 crc kubenswrapper[4972]: I1121 10:16:47.534129 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wkr62" Nov 21 10:16:47 crc kubenswrapper[4972]: I1121 10:16:47.716764 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt4lh\" (UniqueName: \"kubernetes.io/projected/99f149b7-37e9-4277-a508-08e99000340c-kube-api-access-qt4lh\") pod \"99f149b7-37e9-4277-a508-08e99000340c\" (UID: \"99f149b7-37e9-4277-a508-08e99000340c\") " Nov 21 10:16:47 crc kubenswrapper[4972]: I1121 10:16:47.716934 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99f149b7-37e9-4277-a508-08e99000340c-utilities\") pod \"99f149b7-37e9-4277-a508-08e99000340c\" (UID: \"99f149b7-37e9-4277-a508-08e99000340c\") " Nov 21 10:16:47 crc kubenswrapper[4972]: I1121 10:16:47.717174 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99f149b7-37e9-4277-a508-08e99000340c-catalog-content\") pod \"99f149b7-37e9-4277-a508-08e99000340c\" (UID: \"99f149b7-37e9-4277-a508-08e99000340c\") " Nov 21 10:16:47 crc kubenswrapper[4972]: I1121 10:16:47.719024 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99f149b7-37e9-4277-a508-08e99000340c-utilities" (OuterVolumeSpecName: "utilities") pod "99f149b7-37e9-4277-a508-08e99000340c" (UID: "99f149b7-37e9-4277-a508-08e99000340c"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:16:47 crc kubenswrapper[4972]: I1121 10:16:47.726773 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99f149b7-37e9-4277-a508-08e99000340c-kube-api-access-qt4lh" (OuterVolumeSpecName: "kube-api-access-qt4lh") pod "99f149b7-37e9-4277-a508-08e99000340c" (UID: "99f149b7-37e9-4277-a508-08e99000340c"). InnerVolumeSpecName "kube-api-access-qt4lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:16:47 crc kubenswrapper[4972]: I1121 10:16:47.786025 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99f149b7-37e9-4277-a508-08e99000340c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99f149b7-37e9-4277-a508-08e99000340c" (UID: "99f149b7-37e9-4277-a508-08e99000340c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:16:47 crc kubenswrapper[4972]: I1121 10:16:47.820128 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99f149b7-37e9-4277-a508-08e99000340c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 10:16:47 crc kubenswrapper[4972]: I1121 10:16:47.820392 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt4lh\" (UniqueName: \"kubernetes.io/projected/99f149b7-37e9-4277-a508-08e99000340c-kube-api-access-qt4lh\") on node \"crc\" DevicePath \"\"" Nov 21 10:16:47 crc kubenswrapper[4972]: I1121 10:16:47.820477 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99f149b7-37e9-4277-a508-08e99000340c-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 10:16:48 crc kubenswrapper[4972]: I1121 10:16:48.406743 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkr62" event={"ID":"99f149b7-37e9-4277-a508-08e99000340c","Type":"ContainerDied","Data":"de600a356fe78c7d1141897fd195f4df32e2977067a14934b917d074ee2dd62d"} Nov 21 10:16:48 crc kubenswrapper[4972]: I1121 10:16:48.406853 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wkr62" Nov 21 10:16:48 crc kubenswrapper[4972]: I1121 10:16:48.406858 4972 scope.go:117] "RemoveContainer" containerID="da7cfbfde1031b0e0010b0315719a90eed9927e55ccab763dcb33cb5c34699d1" Nov 21 10:16:48 crc kubenswrapper[4972]: I1121 10:16:48.449642 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wkr62"] Nov 21 10:16:48 crc kubenswrapper[4972]: I1121 10:16:48.461768 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wkr62"] Nov 21 10:16:48 crc kubenswrapper[4972]: I1121 10:16:48.464884 4972 scope.go:117] "RemoveContainer" containerID="93ffbc14282a167fff6120e38a0574bbb18bd4bd6879ad76d4ea7e6af12a24dd" Nov 21 10:16:48 crc kubenswrapper[4972]: I1121 10:16:48.495911 4972 scope.go:117] "RemoveContainer" containerID="d5eabb0bc2925a5005512387f243fbe585f2e2da9c5c91f10e96659ec113cade" Nov 21 10:16:49 crc kubenswrapper[4972]: I1121 10:16:49.775732 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99f149b7-37e9-4277-a508-08e99000340c" path="/var/lib/kubelet/pods/99f149b7-37e9-4277-a508-08e99000340c/volumes" Nov 21 10:17:25 crc kubenswrapper[4972]: I1121 10:17:25.322507 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-snzzh"] Nov 21 10:17:25 crc kubenswrapper[4972]: E1121 10:17:25.323921 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f149b7-37e9-4277-a508-08e99000340c" containerName="registry-server" Nov 21 10:17:25 crc kubenswrapper[4972]: I1121 10:17:25.324075 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f149b7-37e9-4277-a508-08e99000340c" containerName="registry-server" Nov 21 10:17:25 crc kubenswrapper[4972]: E1121 10:17:25.324111 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f149b7-37e9-4277-a508-08e99000340c" containerName="extract-content" Nov 21 10:17:25 crc kubenswrapper[4972]: I1121 10:17:25.324122 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f149b7-37e9-4277-a508-08e99000340c" containerName="extract-content" Nov 21 10:17:25 crc kubenswrapper[4972]: E1121 10:17:25.324152 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f149b7-37e9-4277-a508-08e99000340c" containerName="extract-utilities" Nov 21 10:17:25 crc kubenswrapper[4972]: I1121 10:17:25.324165 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f149b7-37e9-4277-a508-08e99000340c" containerName="extract-utilities" Nov 21 10:17:25 crc kubenswrapper[4972]: I1121 10:17:25.324403 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f149b7-37e9-4277-a508-08e99000340c" containerName="registry-server" Nov 21 10:17:25 crc kubenswrapper[4972]: I1121 10:17:25.326930 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-snzzh" Nov 21 10:17:25 crc kubenswrapper[4972]: I1121 10:17:25.330026 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-snzzh"] Nov 21 10:17:25 crc kubenswrapper[4972]: I1121 10:17:25.410379 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184e9ef3-0854-4260-b7b2-679a4beefa29-catalog-content\") pod \"redhat-marketplace-snzzh\" (UID: \"184e9ef3-0854-4260-b7b2-679a4beefa29\") " pod="openshift-marketplace/redhat-marketplace-snzzh" Nov 21 10:17:25 crc kubenswrapper[4972]: I1121 10:17:25.410471 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184e9ef3-0854-4260-b7b2-679a4beefa29-utilities\") pod \"redhat-marketplace-snzzh\" (UID: \"184e9ef3-0854-4260-b7b2-679a4beefa29\") " pod="openshift-marketplace/redhat-marketplace-snzzh" Nov 21 10:17:25 crc kubenswrapper[4972]: I1121 10:17:25.410498 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7pb7\" (UniqueName: \"kubernetes.io/projected/184e9ef3-0854-4260-b7b2-679a4beefa29-kube-api-access-c7pb7\") pod \"redhat-marketplace-snzzh\" (UID: \"184e9ef3-0854-4260-b7b2-679a4beefa29\") " pod="openshift-marketplace/redhat-marketplace-snzzh" Nov 21 10:17:25 crc kubenswrapper[4972]: I1121 10:17:25.512019 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184e9ef3-0854-4260-b7b2-679a4beefa29-catalog-content\") pod \"redhat-marketplace-snzzh\" (UID: \"184e9ef3-0854-4260-b7b2-679a4beefa29\") " pod="openshift-marketplace/redhat-marketplace-snzzh" Nov 21 10:17:25 crc kubenswrapper[4972]: I1121 10:17:25.512096 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184e9ef3-0854-4260-b7b2-679a4beefa29-utilities\") pod \"redhat-marketplace-snzzh\" (UID: \"184e9ef3-0854-4260-b7b2-679a4beefa29\") " pod="openshift-marketplace/redhat-marketplace-snzzh" Nov 21 10:17:25 crc kubenswrapper[4972]: I1121 10:17:25.512122 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7pb7\" (UniqueName: \"kubernetes.io/projected/184e9ef3-0854-4260-b7b2-679a4beefa29-kube-api-access-c7pb7\") pod \"redhat-marketplace-snzzh\" (UID: \"184e9ef3-0854-4260-b7b2-679a4beefa29\") " pod="openshift-marketplace/redhat-marketplace-snzzh" Nov 21 10:17:25 crc kubenswrapper[4972]: I1121 10:17:25.512625 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184e9ef3-0854-4260-b7b2-679a4beefa29-catalog-content\") pod \"redhat-marketplace-snzzh\" (UID: \"184e9ef3-0854-4260-b7b2-679a4beefa29\") " pod="openshift-marketplace/redhat-marketplace-snzzh" Nov 21 10:17:25 crc kubenswrapper[4972]: I1121 10:17:25.512714 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184e9ef3-0854-4260-b7b2-679a4beefa29-utilities\") pod \"redhat-marketplace-snzzh\" (UID: \"184e9ef3-0854-4260-b7b2-679a4beefa29\") " pod="openshift-marketplace/redhat-marketplace-snzzh" Nov 21 10:17:25 crc kubenswrapper[4972]: I1121 10:17:25.537999 4972 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-c7pb7\" (UniqueName: \"kubernetes.io/projected/184e9ef3-0854-4260-b7b2-679a4beefa29-kube-api-access-c7pb7\") pod \"redhat-marketplace-snzzh\" (UID: \"184e9ef3-0854-4260-b7b2-679a4beefa29\") " pod="openshift-marketplace/redhat-marketplace-snzzh" Nov 21 10:17:25 crc kubenswrapper[4972]: I1121 10:17:25.659625 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-snzzh" Nov 21 10:17:26 crc kubenswrapper[4972]: I1121 10:17:26.112039 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-snzzh"] Nov 21 10:17:26 crc kubenswrapper[4972]: I1121 10:17:26.836625 4972 generic.go:334] "Generic (PLEG): container finished" podID="184e9ef3-0854-4260-b7b2-679a4beefa29" containerID="68e0cc8804c56585fbda8d4ed2ffd171ba115da2d6a16cba3ef6d079be3ec0d2" exitCode=0 Nov 21 10:17:26 crc kubenswrapper[4972]: I1121 10:17:26.836740 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snzzh" event={"ID":"184e9ef3-0854-4260-b7b2-679a4beefa29","Type":"ContainerDied","Data":"68e0cc8804c56585fbda8d4ed2ffd171ba115da2d6a16cba3ef6d079be3ec0d2"} Nov 21 10:17:26 crc kubenswrapper[4972]: I1121 10:17:26.836965 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snzzh" event={"ID":"184e9ef3-0854-4260-b7b2-679a4beefa29","Type":"ContainerStarted","Data":"8797e1fdd37dac0a2d3e78cad0c305a7213411c6cee0abe607bbf47f4b3a3170"} Nov 21 10:17:27 crc kubenswrapper[4972]: I1121 10:17:27.844804 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snzzh" event={"ID":"184e9ef3-0854-4260-b7b2-679a4beefa29","Type":"ContainerStarted","Data":"d43ccfb9d5def774dc0fd231c6e0d4c43976c6a380297973817dd9084f5fa9d4"} Nov 21 10:17:28 crc kubenswrapper[4972]: I1121 10:17:28.857893 4972 generic.go:334] "Generic (PLEG): container finished" podID="184e9ef3-0854-4260-b7b2-679a4beefa29" containerID="d43ccfb9d5def774dc0fd231c6e0d4c43976c6a380297973817dd9084f5fa9d4" exitCode=0 Nov 21 10:17:28 crc kubenswrapper[4972]: I1121 10:17:28.858017 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snzzh" event={"ID":"184e9ef3-0854-4260-b7b2-679a4beefa29","Type":"ContainerDied","Data":"d43ccfb9d5def774dc0fd231c6e0d4c43976c6a380297973817dd9084f5fa9d4"} Nov 21 10:17:30 crc kubenswrapper[4972]: I1121 10:17:30.875858 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snzzh" event={"ID":"184e9ef3-0854-4260-b7b2-679a4beefa29","Type":"ContainerStarted","Data":"1489b8a1378d0121cec115d4275d4d120f9c2a421aa4f7fe3ed03207dd45cc48"} Nov 21 10:17:30 crc kubenswrapper[4972]: I1121 10:17:30.908928 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-snzzh" podStartSLOduration=2.947980695 podStartE2EDuration="5.908897824s" podCreationTimestamp="2025-11-21 10:17:25 +0000 UTC" firstStartedPulling="2025-11-21 10:17:26.83863451 +0000 UTC m=+2191.947777028" lastFinishedPulling="2025-11-21 10:17:29.799551659 +0000 UTC m=+2194.908694157" observedRunningTime="2025-11-21 10:17:30.896618936 +0000 UTC m=+2196.005761494" watchObservedRunningTime="2025-11-21 10:17:30.908897824 +0000 UTC m=+2196.018040342" Nov 21 10:17:35 crc kubenswrapper[4972]: I1121 10:17:35.660901 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-snzzh" Nov 21 10:17:35 crc kubenswrapper[4972]: I1121 10:17:35.661799 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-snzzh" Nov 21 10:17:35 crc kubenswrapper[4972]: I1121 10:17:35.705213 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-snzzh" Nov 21 10:17:35 crc kubenswrapper[4972]: I1121 10:17:35.960236 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-snzzh" Nov 21 10:17:36 crc kubenswrapper[4972]: I1121 10:17:36.005096 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-snzzh"] Nov 21 10:17:37 crc kubenswrapper[4972]: I1121 10:17:37.924351 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-snzzh" podUID="184e9ef3-0854-4260-b7b2-679a4beefa29" containerName="registry-server" containerID="cri-o://1489b8a1378d0121cec115d4275d4d120f9c2a421aa4f7fe3ed03207dd45cc48" gracePeriod=2 Nov 21 10:17:38 crc kubenswrapper[4972]: I1121 10:17:38.334980 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-snzzh" Nov 21 10:17:38 crc kubenswrapper[4972]: I1121 10:17:38.496855 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184e9ef3-0854-4260-b7b2-679a4beefa29-catalog-content\") pod \"184e9ef3-0854-4260-b7b2-679a4beefa29\" (UID: \"184e9ef3-0854-4260-b7b2-679a4beefa29\") " Nov 21 10:17:38 crc kubenswrapper[4972]: I1121 10:17:38.497052 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184e9ef3-0854-4260-b7b2-679a4beefa29-utilities\") pod \"184e9ef3-0854-4260-b7b2-679a4beefa29\" (UID: \"184e9ef3-0854-4260-b7b2-679a4beefa29\") " Nov 21 10:17:38 crc kubenswrapper[4972]: I1121 10:17:38.497124 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7pb7\" (UniqueName: \"kubernetes.io/projected/184e9ef3-0854-4260-b7b2-679a4beefa29-kube-api-access-c7pb7\") pod \"184e9ef3-0854-4260-b7b2-679a4beefa29\" (UID: \"184e9ef3-0854-4260-b7b2-679a4beefa29\") " Nov 21 10:17:38 crc kubenswrapper[4972]: I1121 10:17:38.498941 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/184e9ef3-0854-4260-b7b2-679a4beefa29-utilities" (OuterVolumeSpecName: "utilities") pod "184e9ef3-0854-4260-b7b2-679a4beefa29" (UID: "184e9ef3-0854-4260-b7b2-679a4beefa29"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:17:38 crc kubenswrapper[4972]: I1121 10:17:38.504510 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/184e9ef3-0854-4260-b7b2-679a4beefa29-kube-api-access-c7pb7" (OuterVolumeSpecName: "kube-api-access-c7pb7") pod "184e9ef3-0854-4260-b7b2-679a4beefa29" (UID: "184e9ef3-0854-4260-b7b2-679a4beefa29"). InnerVolumeSpecName "kube-api-access-c7pb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:17:38 crc kubenswrapper[4972]: I1121 10:17:38.537976 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/184e9ef3-0854-4260-b7b2-679a4beefa29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "184e9ef3-0854-4260-b7b2-679a4beefa29" (UID: "184e9ef3-0854-4260-b7b2-679a4beefa29"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:17:38 crc kubenswrapper[4972]: I1121 10:17:38.599239 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184e9ef3-0854-4260-b7b2-679a4beefa29-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 10:17:38 crc kubenswrapper[4972]: I1121 10:17:38.599267 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184e9ef3-0854-4260-b7b2-679a4beefa29-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 10:17:38 crc kubenswrapper[4972]: I1121 10:17:38.599277 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7pb7\" (UniqueName: \"kubernetes.io/projected/184e9ef3-0854-4260-b7b2-679a4beefa29-kube-api-access-c7pb7\") on node \"crc\" DevicePath \"\"" Nov 21 10:17:38 crc kubenswrapper[4972]: I1121 10:17:38.935172 4972 generic.go:334] "Generic (PLEG): container finished" podID="184e9ef3-0854-4260-b7b2-679a4beefa29" containerID="1489b8a1378d0121cec115d4275d4d120f9c2a421aa4f7fe3ed03207dd45cc48" exitCode=0 Nov 21 10:17:38 crc kubenswrapper[4972]: I1121 10:17:38.935248 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snzzh" event={"ID":"184e9ef3-0854-4260-b7b2-679a4beefa29","Type":"ContainerDied","Data":"1489b8a1378d0121cec115d4275d4d120f9c2a421aa4f7fe3ed03207dd45cc48"} Nov 21 10:17:38 crc kubenswrapper[4972]: I1121 10:17:38.935310 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snzzh" event={"ID":"184e9ef3-0854-4260-b7b2-679a4beefa29","Type":"ContainerDied","Data":"8797e1fdd37dac0a2d3e78cad0c305a7213411c6cee0abe607bbf47f4b3a3170"} Nov 21 10:17:38 crc kubenswrapper[4972]: I1121 10:17:38.935356 4972 scope.go:117] "RemoveContainer" containerID="1489b8a1378d0121cec115d4275d4d120f9c2a421aa4f7fe3ed03207dd45cc48" Nov 21 10:17:38 crc kubenswrapper[4972]: I1121 10:17:38.935252 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-snzzh" Nov 21 10:17:38 crc kubenswrapper[4972]: I1121 10:17:38.960274 4972 scope.go:117] "RemoveContainer" containerID="d43ccfb9d5def774dc0fd231c6e0d4c43976c6a380297973817dd9084f5fa9d4" Nov 21 10:17:38 crc kubenswrapper[4972]: I1121 10:17:38.972215 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-snzzh"] Nov 21 10:17:38 crc kubenswrapper[4972]: I1121 10:17:38.976754 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-snzzh"] Nov 21 10:17:38 crc kubenswrapper[4972]: I1121 10:17:38.998616 4972 scope.go:117] "RemoveContainer" containerID="68e0cc8804c56585fbda8d4ed2ffd171ba115da2d6a16cba3ef6d079be3ec0d2" Nov 21 10:17:39 crc kubenswrapper[4972]: I1121 10:17:39.034377 4972 scope.go:117] "RemoveContainer" containerID="1489b8a1378d0121cec115d4275d4d120f9c2a421aa4f7fe3ed03207dd45cc48" Nov 21 10:17:39 crc kubenswrapper[4972]: E1121 10:17:39.034927 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1489b8a1378d0121cec115d4275d4d120f9c2a421aa4f7fe3ed03207dd45cc48\": container with ID starting with 1489b8a1378d0121cec115d4275d4d120f9c2a421aa4f7fe3ed03207dd45cc48 not found: ID does not exist" containerID="1489b8a1378d0121cec115d4275d4d120f9c2a421aa4f7fe3ed03207dd45cc48" Nov 21 10:17:39 crc kubenswrapper[4972]: I1121 10:17:39.034971 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1489b8a1378d0121cec115d4275d4d120f9c2a421aa4f7fe3ed03207dd45cc48"} err="failed to get container status \"1489b8a1378d0121cec115d4275d4d120f9c2a421aa4f7fe3ed03207dd45cc48\": rpc error: code = NotFound desc = could not find container \"1489b8a1378d0121cec115d4275d4d120f9c2a421aa4f7fe3ed03207dd45cc48\": container with ID starting with 1489b8a1378d0121cec115d4275d4d120f9c2a421aa4f7fe3ed03207dd45cc48 not found: ID does not exist" Nov 21 10:17:39 crc kubenswrapper[4972]: I1121 10:17:39.034998 4972 scope.go:117] "RemoveContainer" containerID="d43ccfb9d5def774dc0fd231c6e0d4c43976c6a380297973817dd9084f5fa9d4" Nov 21 10:17:39 crc kubenswrapper[4972]: E1121 10:17:39.035548 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d43ccfb9d5def774dc0fd231c6e0d4c43976c6a380297973817dd9084f5fa9d4\": container with ID starting with d43ccfb9d5def774dc0fd231c6e0d4c43976c6a380297973817dd9084f5fa9d4 not found: ID does not exist" containerID="d43ccfb9d5def774dc0fd231c6e0d4c43976c6a380297973817dd9084f5fa9d4" Nov 21 10:17:39 crc kubenswrapper[4972]: I1121 10:17:39.035635 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d43ccfb9d5def774dc0fd231c6e0d4c43976c6a380297973817dd9084f5fa9d4"} err="failed to get container status \"d43ccfb9d5def774dc0fd231c6e0d4c43976c6a380297973817dd9084f5fa9d4\": rpc error: code = NotFound desc = could not find container \"d43ccfb9d5def774dc0fd231c6e0d4c43976c6a380297973817dd9084f5fa9d4\": container with ID starting with d43ccfb9d5def774dc0fd231c6e0d4c43976c6a380297973817dd9084f5fa9d4 not found: ID does not exist" Nov 21 10:17:39 crc kubenswrapper[4972]: I1121 10:17:39.035693 4972 scope.go:117] "RemoveContainer" containerID="68e0cc8804c56585fbda8d4ed2ffd171ba115da2d6a16cba3ef6d079be3ec0d2" Nov 21 10:17:39 crc kubenswrapper[4972]: E1121 10:17:39.036331 4972 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"68e0cc8804c56585fbda8d4ed2ffd171ba115da2d6a16cba3ef6d079be3ec0d2\": container with ID starting with 68e0cc8804c56585fbda8d4ed2ffd171ba115da2d6a16cba3ef6d079be3ec0d2 not found: ID does not exist" containerID="68e0cc8804c56585fbda8d4ed2ffd171ba115da2d6a16cba3ef6d079be3ec0d2" Nov 21 10:17:39 crc kubenswrapper[4972]: I1121 10:17:39.036405 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68e0cc8804c56585fbda8d4ed2ffd171ba115da2d6a16cba3ef6d079be3ec0d2"} err="failed to get container status \"68e0cc8804c56585fbda8d4ed2ffd171ba115da2d6a16cba3ef6d079be3ec0d2\": rpc error: code = NotFound desc = could not find container \"68e0cc8804c56585fbda8d4ed2ffd171ba115da2d6a16cba3ef6d079be3ec0d2\": container with ID starting with 68e0cc8804c56585fbda8d4ed2ffd171ba115da2d6a16cba3ef6d079be3ec0d2 not found: ID does not exist" Nov 21 10:17:39 crc kubenswrapper[4972]: I1121 10:17:39.774512 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="184e9ef3-0854-4260-b7b2-679a4beefa29" path="/var/lib/kubelet/pods/184e9ef3-0854-4260-b7b2-679a4beefa29/volumes" Nov 21 10:17:56 crc kubenswrapper[4972]: I1121 10:17:56.179488 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:17:56 crc kubenswrapper[4972]: I1121 10:17:56.180259 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:18:26 crc kubenswrapper[4972]: I1121 10:18:26.178981 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:18:26 crc kubenswrapper[4972]: I1121 10:18:26.179737 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:18:27 crc kubenswrapper[4972]: I1121 10:18:27.556778 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hztxk"] Nov 21 10:18:27 crc kubenswrapper[4972]: E1121 10:18:27.557121 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="184e9ef3-0854-4260-b7b2-679a4beefa29" containerName="registry-server" Nov 21 10:18:27 crc kubenswrapper[4972]: I1121 10:18:27.557136 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="184e9ef3-0854-4260-b7b2-679a4beefa29" containerName="registry-server" Nov 21 10:18:27 crc kubenswrapper[4972]: E1121 10:18:27.557172 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="184e9ef3-0854-4260-b7b2-679a4beefa29" containerName="extract-utilities" Nov 21 10:18:27 crc kubenswrapper[4972]: I1121 10:18:27.557181 4972 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="184e9ef3-0854-4260-b7b2-679a4beefa29" containerName="extract-utilities" Nov 21 10:18:27 crc kubenswrapper[4972]: E1121 10:18:27.557198 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="184e9ef3-0854-4260-b7b2-679a4beefa29" containerName="extract-content" Nov 21 10:18:27 crc kubenswrapper[4972]: I1121 10:18:27.557206 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="184e9ef3-0854-4260-b7b2-679a4beefa29" containerName="extract-content" Nov 21 10:18:27 crc kubenswrapper[4972]: I1121 10:18:27.557379 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="184e9ef3-0854-4260-b7b2-679a4beefa29" containerName="registry-server" Nov 21 10:18:27 crc kubenswrapper[4972]: I1121 10:18:27.558548 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hztxk" Nov 21 10:18:27 crc kubenswrapper[4972]: I1121 10:18:27.574053 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hztxk"] Nov 21 10:18:27 crc kubenswrapper[4972]: I1121 10:18:27.657587 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp7rc\" (UniqueName: \"kubernetes.io/projected/1fdf5d10-bbac-4bdf-86c4-6acef0f5e301-kube-api-access-pp7rc\") pod \"certified-operators-hztxk\" (UID: \"1fdf5d10-bbac-4bdf-86c4-6acef0f5e301\") " pod="openshift-marketplace/certified-operators-hztxk" Nov 21 10:18:27 crc kubenswrapper[4972]: I1121 10:18:27.658051 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fdf5d10-bbac-4bdf-86c4-6acef0f5e301-utilities\") pod \"certified-operators-hztxk\" (UID: \"1fdf5d10-bbac-4bdf-86c4-6acef0f5e301\") " pod="openshift-marketplace/certified-operators-hztxk" Nov 21 10:18:27 crc kubenswrapper[4972]: I1121 10:18:27.658088 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fdf5d10-bbac-4bdf-86c4-6acef0f5e301-catalog-content\") pod \"certified-operators-hztxk\" (UID: \"1fdf5d10-bbac-4bdf-86c4-6acef0f5e301\") " pod="openshift-marketplace/certified-operators-hztxk" Nov 21 10:18:27 crc kubenswrapper[4972]: I1121 10:18:27.760085 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp7rc\" (UniqueName: \"kubernetes.io/projected/1fdf5d10-bbac-4bdf-86c4-6acef0f5e301-kube-api-access-pp7rc\") pod \"certified-operators-hztxk\" (UID: \"1fdf5d10-bbac-4bdf-86c4-6acef0f5e301\") " pod="openshift-marketplace/certified-operators-hztxk" Nov 21 10:18:27 crc kubenswrapper[4972]: I1121 10:18:27.760184 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fdf5d10-bbac-4bdf-86c4-6acef0f5e301-utilities\") pod \"certified-operators-hztxk\" (UID: \"1fdf5d10-bbac-4bdf-86c4-6acef0f5e301\") " pod="openshift-marketplace/certified-operators-hztxk" Nov 21 10:18:27 crc kubenswrapper[4972]: I1121 10:18:27.760241 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fdf5d10-bbac-4bdf-86c4-6acef0f5e301-catalog-content\") pod \"certified-operators-hztxk\" (UID: \"1fdf5d10-bbac-4bdf-86c4-6acef0f5e301\") " pod="openshift-marketplace/certified-operators-hztxk" Nov 21 10:18:27 crc kubenswrapper[4972]: I1121 
10:18:27.760878 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fdf5d10-bbac-4bdf-86c4-6acef0f5e301-catalog-content\") pod \"certified-operators-hztxk\" (UID: \"1fdf5d10-bbac-4bdf-86c4-6acef0f5e301\") " pod="openshift-marketplace/certified-operators-hztxk" Nov 21 10:18:27 crc kubenswrapper[4972]: I1121 10:18:27.760816 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fdf5d10-bbac-4bdf-86c4-6acef0f5e301-utilities\") pod \"certified-operators-hztxk\" (UID: \"1fdf5d10-bbac-4bdf-86c4-6acef0f5e301\") " pod="openshift-marketplace/certified-operators-hztxk" Nov 21 10:18:27 crc kubenswrapper[4972]: I1121 10:18:27.783900 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp7rc\" (UniqueName: \"kubernetes.io/projected/1fdf5d10-bbac-4bdf-86c4-6acef0f5e301-kube-api-access-pp7rc\") pod \"certified-operators-hztxk\" (UID: \"1fdf5d10-bbac-4bdf-86c4-6acef0f5e301\") " pod="openshift-marketplace/certified-operators-hztxk" Nov 21 10:18:27 crc kubenswrapper[4972]: I1121 10:18:27.878143 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hztxk" Nov 21 10:18:28 crc kubenswrapper[4972]: I1121 10:18:28.342906 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hztxk"] Nov 21 10:18:29 crc kubenswrapper[4972]: I1121 10:18:29.370899 4972 generic.go:334] "Generic (PLEG): container finished" podID="1fdf5d10-bbac-4bdf-86c4-6acef0f5e301" containerID="5870e274995d43809fcc4097fb02181c0a145243057334dbf97d5da3f782f5b8" exitCode=0 Nov 21 10:18:29 crc kubenswrapper[4972]: I1121 10:18:29.370962 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hztxk" event={"ID":"1fdf5d10-bbac-4bdf-86c4-6acef0f5e301","Type":"ContainerDied","Data":"5870e274995d43809fcc4097fb02181c0a145243057334dbf97d5da3f782f5b8"} Nov 21 10:18:29 crc kubenswrapper[4972]: I1121 10:18:29.371001 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hztxk" event={"ID":"1fdf5d10-bbac-4bdf-86c4-6acef0f5e301","Type":"ContainerStarted","Data":"c1369bf19415844541e01a42fc01d0b9a6ee74b4ffc3c6c70d4bef950f77b876"} Nov 21 10:18:30 crc kubenswrapper[4972]: I1121 10:18:30.384438 4972 generic.go:334] "Generic (PLEG): container finished" podID="1fdf5d10-bbac-4bdf-86c4-6acef0f5e301" containerID="49fb1846f7f86ebacffaaac8c8f85516445340581557e5533a0d87317a4c8d5b" exitCode=0 Nov 21 10:18:30 crc kubenswrapper[4972]: I1121 10:18:30.384517 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hztxk" event={"ID":"1fdf5d10-bbac-4bdf-86c4-6acef0f5e301","Type":"ContainerDied","Data":"49fb1846f7f86ebacffaaac8c8f85516445340581557e5533a0d87317a4c8d5b"} Nov 21 10:18:31 crc kubenswrapper[4972]: I1121 10:18:31.397123 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hztxk" event={"ID":"1fdf5d10-bbac-4bdf-86c4-6acef0f5e301","Type":"ContainerStarted","Data":"78c7381b88325d2362e0eaa16d881af1659c1682f5ee78011b593679487afa54"} Nov 21 10:18:31 crc kubenswrapper[4972]: I1121 10:18:31.420348 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hztxk" podStartSLOduration=2.946988548 podStartE2EDuration="4.420325372s" 
podCreationTimestamp="2025-11-21 10:18:27 +0000 UTC" firstStartedPulling="2025-11-21 10:18:29.374130196 +0000 UTC m=+2254.483272734" lastFinishedPulling="2025-11-21 10:18:30.84746702 +0000 UTC m=+2255.956609558" observedRunningTime="2025-11-21 10:18:31.417961329 +0000 UTC m=+2256.527103867" watchObservedRunningTime="2025-11-21 10:18:31.420325372 +0000 UTC m=+2256.529467910" Nov 21 10:18:37 crc kubenswrapper[4972]: I1121 10:18:37.879229 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hztxk" Nov 21 10:18:37 crc kubenswrapper[4972]: I1121 10:18:37.879988 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hztxk" Nov 21 10:18:37 crc kubenswrapper[4972]: I1121 10:18:37.951513 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hztxk" Nov 21 10:18:38 crc kubenswrapper[4972]: I1121 10:18:38.522359 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hztxk" Nov 21 10:18:38 crc kubenswrapper[4972]: I1121 10:18:38.589819 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hztxk"] Nov 21 10:18:40 crc kubenswrapper[4972]: I1121 10:18:40.477490 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hztxk" podUID="1fdf5d10-bbac-4bdf-86c4-6acef0f5e301" containerName="registry-server" containerID="cri-o://78c7381b88325d2362e0eaa16d881af1659c1682f5ee78011b593679487afa54" gracePeriod=2 Nov 21 10:18:41 crc kubenswrapper[4972]: I1121 10:18:41.491629 4972 generic.go:334] "Generic (PLEG): container finished" podID="1fdf5d10-bbac-4bdf-86c4-6acef0f5e301" containerID="78c7381b88325d2362e0eaa16d881af1659c1682f5ee78011b593679487afa54" exitCode=0 Nov 21 10:18:41 crc kubenswrapper[4972]: I1121 10:18:41.491695 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hztxk" event={"ID":"1fdf5d10-bbac-4bdf-86c4-6acef0f5e301","Type":"ContainerDied","Data":"78c7381b88325d2362e0eaa16d881af1659c1682f5ee78011b593679487afa54"} Nov 21 10:18:42 crc kubenswrapper[4972]: I1121 10:18:42.135246 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hztxk" Nov 21 10:18:42 crc kubenswrapper[4972]: I1121 10:18:42.200595 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fdf5d10-bbac-4bdf-86c4-6acef0f5e301-catalog-content\") pod \"1fdf5d10-bbac-4bdf-86c4-6acef0f5e301\" (UID: \"1fdf5d10-bbac-4bdf-86c4-6acef0f5e301\") " Nov 21 10:18:42 crc kubenswrapper[4972]: I1121 10:18:42.200697 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fdf5d10-bbac-4bdf-86c4-6acef0f5e301-utilities\") pod \"1fdf5d10-bbac-4bdf-86c4-6acef0f5e301\" (UID: \"1fdf5d10-bbac-4bdf-86c4-6acef0f5e301\") " Nov 21 10:18:42 crc kubenswrapper[4972]: I1121 10:18:42.202521 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fdf5d10-bbac-4bdf-86c4-6acef0f5e301-utilities" (OuterVolumeSpecName: "utilities") pod "1fdf5d10-bbac-4bdf-86c4-6acef0f5e301" (UID: "1fdf5d10-bbac-4bdf-86c4-6acef0f5e301"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:18:42 crc kubenswrapper[4972]: I1121 10:18:42.267867 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fdf5d10-bbac-4bdf-86c4-6acef0f5e301-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1fdf5d10-bbac-4bdf-86c4-6acef0f5e301" (UID: "1fdf5d10-bbac-4bdf-86c4-6acef0f5e301"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:18:42 crc kubenswrapper[4972]: I1121 10:18:42.302107 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp7rc\" (UniqueName: \"kubernetes.io/projected/1fdf5d10-bbac-4bdf-86c4-6acef0f5e301-kube-api-access-pp7rc\") pod \"1fdf5d10-bbac-4bdf-86c4-6acef0f5e301\" (UID: \"1fdf5d10-bbac-4bdf-86c4-6acef0f5e301\") " Nov 21 10:18:42 crc kubenswrapper[4972]: I1121 10:18:42.302604 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fdf5d10-bbac-4bdf-86c4-6acef0f5e301-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 10:18:42 crc kubenswrapper[4972]: I1121 10:18:42.302631 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fdf5d10-bbac-4bdf-86c4-6acef0f5e301-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 10:18:42 crc kubenswrapper[4972]: I1121 10:18:42.308276 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fdf5d10-bbac-4bdf-86c4-6acef0f5e301-kube-api-access-pp7rc" (OuterVolumeSpecName: "kube-api-access-pp7rc") pod "1fdf5d10-bbac-4bdf-86c4-6acef0f5e301" (UID: "1fdf5d10-bbac-4bdf-86c4-6acef0f5e301"). InnerVolumeSpecName "kube-api-access-pp7rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:18:42 crc kubenswrapper[4972]: I1121 10:18:42.404867 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp7rc\" (UniqueName: \"kubernetes.io/projected/1fdf5d10-bbac-4bdf-86c4-6acef0f5e301-kube-api-access-pp7rc\") on node \"crc\" DevicePath \"\"" Nov 21 10:18:42 crc kubenswrapper[4972]: I1121 10:18:42.501828 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hztxk" event={"ID":"1fdf5d10-bbac-4bdf-86c4-6acef0f5e301","Type":"ContainerDied","Data":"c1369bf19415844541e01a42fc01d0b9a6ee74b4ffc3c6c70d4bef950f77b876"} Nov 21 10:18:42 crc kubenswrapper[4972]: I1121 10:18:42.501929 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hztxk" Nov 21 10:18:42 crc kubenswrapper[4972]: I1121 10:18:42.501985 4972 scope.go:117] "RemoveContainer" containerID="78c7381b88325d2362e0eaa16d881af1659c1682f5ee78011b593679487afa54" Nov 21 10:18:42 crc kubenswrapper[4972]: I1121 10:18:42.544045 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hztxk"] Nov 21 10:18:42 crc kubenswrapper[4972]: I1121 10:18:42.547012 4972 scope.go:117] "RemoveContainer" containerID="49fb1846f7f86ebacffaaac8c8f85516445340581557e5533a0d87317a4c8d5b" Nov 21 10:18:42 crc kubenswrapper[4972]: I1121 10:18:42.551830 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hztxk"] Nov 21 10:18:42 crc kubenswrapper[4972]: I1121 10:18:42.573694 4972 scope.go:117] "RemoveContainer" containerID="5870e274995d43809fcc4097fb02181c0a145243057334dbf97d5da3f782f5b8" Nov 21 10:18:43 crc kubenswrapper[4972]: I1121 10:18:43.774489 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fdf5d10-bbac-4bdf-86c4-6acef0f5e301" path="/var/lib/kubelet/pods/1fdf5d10-bbac-4bdf-86c4-6acef0f5e301/volumes" Nov 21 10:18:56 crc kubenswrapper[4972]: I1121 10:18:56.179134 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:18:56 crc kubenswrapper[4972]: I1121 10:18:56.179741 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:18:56 crc kubenswrapper[4972]: I1121 10:18:56.179794 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 10:18:56 crc kubenswrapper[4972]: I1121 10:18:56.180539 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 10:18:56 crc kubenswrapper[4972]: I1121 10:18:56.180609 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" gracePeriod=600 Nov 21 10:18:56 crc kubenswrapper[4972]: E1121 10:18:56.309582 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:18:56 crc kubenswrapper[4972]: I1121 10:18:56.627355 4972 
generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" exitCode=0 Nov 21 10:18:56 crc kubenswrapper[4972]: I1121 10:18:56.627417 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a"} Nov 21 10:18:56 crc kubenswrapper[4972]: I1121 10:18:56.627933 4972 scope.go:117] "RemoveContainer" containerID="136a0fe52643dc5baace5594cd21942ff3034baa9018d09747515da442185ed0" Nov 21 10:18:56 crc kubenswrapper[4972]: I1121 10:18:56.629100 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:18:56 crc kubenswrapper[4972]: E1121 10:18:56.629484 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:19:08 crc kubenswrapper[4972]: I1121 10:19:08.759185 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:19:08 crc kubenswrapper[4972]: E1121 10:19:08.760620 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:19:21 crc kubenswrapper[4972]: I1121 10:19:21.760485 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:19:21 crc kubenswrapper[4972]: E1121 10:19:21.761921 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:19:23 crc kubenswrapper[4972]: I1121 10:19:23.352499 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s7bgv"] Nov 21 10:19:23 crc kubenswrapper[4972]: E1121 10:19:23.353329 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fdf5d10-bbac-4bdf-86c4-6acef0f5e301" containerName="registry-server" Nov 21 10:19:23 crc kubenswrapper[4972]: I1121 10:19:23.353352 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fdf5d10-bbac-4bdf-86c4-6acef0f5e301" containerName="registry-server" Nov 21 10:19:23 crc kubenswrapper[4972]: E1121 10:19:23.353375 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fdf5d10-bbac-4bdf-86c4-6acef0f5e301" containerName="extract-content" Nov 21 10:19:23 crc kubenswrapper[4972]: I1121 10:19:23.353386 4972 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1fdf5d10-bbac-4bdf-86c4-6acef0f5e301" containerName="extract-content" Nov 21 10:19:23 crc kubenswrapper[4972]: E1121 10:19:23.353419 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fdf5d10-bbac-4bdf-86c4-6acef0f5e301" containerName="extract-utilities" Nov 21 10:19:23 crc kubenswrapper[4972]: I1121 10:19:23.353431 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fdf5d10-bbac-4bdf-86c4-6acef0f5e301" containerName="extract-utilities" Nov 21 10:19:23 crc kubenswrapper[4972]: I1121 10:19:23.353634 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fdf5d10-bbac-4bdf-86c4-6acef0f5e301" containerName="registry-server" Nov 21 10:19:23 crc kubenswrapper[4972]: I1121 10:19:23.355164 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7bgv" Nov 21 10:19:23 crc kubenswrapper[4972]: I1121 10:19:23.373730 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7bgv"] Nov 21 10:19:23 crc kubenswrapper[4972]: I1121 10:19:23.398766 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vpsg\" (UniqueName: \"kubernetes.io/projected/eaa4494c-12e4-4f9f-9b91-ca14280d235e-kube-api-access-9vpsg\") pod \"redhat-operators-s7bgv\" (UID: \"eaa4494c-12e4-4f9f-9b91-ca14280d235e\") " pod="openshift-marketplace/redhat-operators-s7bgv" Nov 21 10:19:23 crc kubenswrapper[4972]: I1121 10:19:23.398819 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa4494c-12e4-4f9f-9b91-ca14280d235e-catalog-content\") pod \"redhat-operators-s7bgv\" (UID: \"eaa4494c-12e4-4f9f-9b91-ca14280d235e\") " pod="openshift-marketplace/redhat-operators-s7bgv" Nov 21 10:19:23 crc kubenswrapper[4972]: I1121 10:19:23.398879 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa4494c-12e4-4f9f-9b91-ca14280d235e-utilities\") pod \"redhat-operators-s7bgv\" (UID: \"eaa4494c-12e4-4f9f-9b91-ca14280d235e\") " pod="openshift-marketplace/redhat-operators-s7bgv" Nov 21 10:19:23 crc kubenswrapper[4972]: I1121 10:19:23.500083 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vpsg\" (UniqueName: \"kubernetes.io/projected/eaa4494c-12e4-4f9f-9b91-ca14280d235e-kube-api-access-9vpsg\") pod \"redhat-operators-s7bgv\" (UID: \"eaa4494c-12e4-4f9f-9b91-ca14280d235e\") " pod="openshift-marketplace/redhat-operators-s7bgv" Nov 21 10:19:23 crc kubenswrapper[4972]: I1121 10:19:23.500151 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa4494c-12e4-4f9f-9b91-ca14280d235e-catalog-content\") pod \"redhat-operators-s7bgv\" (UID: \"eaa4494c-12e4-4f9f-9b91-ca14280d235e\") " pod="openshift-marketplace/redhat-operators-s7bgv" Nov 21 10:19:23 crc kubenswrapper[4972]: I1121 10:19:23.500224 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa4494c-12e4-4f9f-9b91-ca14280d235e-utilities\") pod \"redhat-operators-s7bgv\" (UID: \"eaa4494c-12e4-4f9f-9b91-ca14280d235e\") " pod="openshift-marketplace/redhat-operators-s7bgv" Nov 21 10:19:23 crc kubenswrapper[4972]: I1121 10:19:23.500823 
4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa4494c-12e4-4f9f-9b91-ca14280d235e-utilities\") pod \"redhat-operators-s7bgv\" (UID: \"eaa4494c-12e4-4f9f-9b91-ca14280d235e\") " pod="openshift-marketplace/redhat-operators-s7bgv" Nov 21 10:19:23 crc kubenswrapper[4972]: I1121 10:19:23.500823 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa4494c-12e4-4f9f-9b91-ca14280d235e-catalog-content\") pod \"redhat-operators-s7bgv\" (UID: \"eaa4494c-12e4-4f9f-9b91-ca14280d235e\") " pod="openshift-marketplace/redhat-operators-s7bgv" Nov 21 10:19:23 crc kubenswrapper[4972]: I1121 10:19:23.519664 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vpsg\" (UniqueName: \"kubernetes.io/projected/eaa4494c-12e4-4f9f-9b91-ca14280d235e-kube-api-access-9vpsg\") pod \"redhat-operators-s7bgv\" (UID: \"eaa4494c-12e4-4f9f-9b91-ca14280d235e\") " pod="openshift-marketplace/redhat-operators-s7bgv" Nov 21 10:19:23 crc kubenswrapper[4972]: I1121 10:19:23.684930 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7bgv" Nov 21 10:19:24 crc kubenswrapper[4972]: I1121 10:19:24.141440 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7bgv"] Nov 21 10:19:24 crc kubenswrapper[4972]: I1121 10:19:24.926478 4972 generic.go:334] "Generic (PLEG): container finished" podID="eaa4494c-12e4-4f9f-9b91-ca14280d235e" containerID="85a34e4dafabca97ebdb1f702f3035420bdf4a99442d363b4e8f73cf4c45cc45" exitCode=0 Nov 21 10:19:24 crc kubenswrapper[4972]: I1121 10:19:24.926558 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7bgv" event={"ID":"eaa4494c-12e4-4f9f-9b91-ca14280d235e","Type":"ContainerDied","Data":"85a34e4dafabca97ebdb1f702f3035420bdf4a99442d363b4e8f73cf4c45cc45"} Nov 21 10:19:24 crc kubenswrapper[4972]: I1121 10:19:24.926819 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7bgv" event={"ID":"eaa4494c-12e4-4f9f-9b91-ca14280d235e","Type":"ContainerStarted","Data":"6f3ba2093ed594e14e44ba28ddac0c7fe2218ace9841baf02e4fd6d3a04af561"} Nov 21 10:19:25 crc kubenswrapper[4972]: I1121 10:19:25.942712 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7bgv" event={"ID":"eaa4494c-12e4-4f9f-9b91-ca14280d235e","Type":"ContainerStarted","Data":"524878008a6b71d731ba665c55e6cc38efafea84c74dc66ed0034473eddbffd9"} Nov 21 10:19:26 crc kubenswrapper[4972]: I1121 10:19:26.955597 4972 generic.go:334] "Generic (PLEG): container finished" podID="eaa4494c-12e4-4f9f-9b91-ca14280d235e" containerID="524878008a6b71d731ba665c55e6cc38efafea84c74dc66ed0034473eddbffd9" exitCode=0 Nov 21 10:19:26 crc kubenswrapper[4972]: I1121 10:19:26.955671 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7bgv" event={"ID":"eaa4494c-12e4-4f9f-9b91-ca14280d235e","Type":"ContainerDied","Data":"524878008a6b71d731ba665c55e6cc38efafea84c74dc66ed0034473eddbffd9"} Nov 21 10:19:27 crc kubenswrapper[4972]: I1121 10:19:27.966656 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7bgv" 
event={"ID":"eaa4494c-12e4-4f9f-9b91-ca14280d235e","Type":"ContainerStarted","Data":"11a0496ba3742db0ec53ca631739528bbfded3333e2632ec2ca1a6b85e98d3bc"} Nov 21 10:19:27 crc kubenswrapper[4972]: I1121 10:19:27.992105 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s7bgv" podStartSLOduration=2.534348127 podStartE2EDuration="4.992066182s" podCreationTimestamp="2025-11-21 10:19:23 +0000 UTC" firstStartedPulling="2025-11-21 10:19:24.92831629 +0000 UTC m=+2310.037458818" lastFinishedPulling="2025-11-21 10:19:27.386034365 +0000 UTC m=+2312.495176873" observedRunningTime="2025-11-21 10:19:27.987626913 +0000 UTC m=+2313.096769471" watchObservedRunningTime="2025-11-21 10:19:27.992066182 +0000 UTC m=+2313.101208720" Nov 21 10:19:33 crc kubenswrapper[4972]: I1121 10:19:33.685967 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s7bgv" Nov 21 10:19:33 crc kubenswrapper[4972]: I1121 10:19:33.688877 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s7bgv" Nov 21 10:19:33 crc kubenswrapper[4972]: I1121 10:19:33.759723 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:19:33 crc kubenswrapper[4972]: E1121 10:19:33.760070 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:19:34 crc kubenswrapper[4972]: I1121 10:19:34.738234 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s7bgv" podUID="eaa4494c-12e4-4f9f-9b91-ca14280d235e" containerName="registry-server" probeResult="failure" output=< Nov 21 10:19:34 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 10:19:34 crc kubenswrapper[4972]: > Nov 21 10:19:43 crc kubenswrapper[4972]: I1121 10:19:43.742387 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s7bgv" Nov 21 10:19:43 crc kubenswrapper[4972]: I1121 10:19:43.799402 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s7bgv" Nov 21 10:19:43 crc kubenswrapper[4972]: I1121 10:19:43.982639 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s7bgv"] Nov 21 10:19:45 crc kubenswrapper[4972]: I1121 10:19:45.119964 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s7bgv" podUID="eaa4494c-12e4-4f9f-9b91-ca14280d235e" containerName="registry-server" containerID="cri-o://11a0496ba3742db0ec53ca631739528bbfded3333e2632ec2ca1a6b85e98d3bc" gracePeriod=2 Nov 21 10:19:45 crc kubenswrapper[4972]: I1121 10:19:45.764689 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:19:45 crc kubenswrapper[4972]: E1121 10:19:45.765535 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.037634 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7bgv" Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.130461 4972 generic.go:334] "Generic (PLEG): container finished" podID="eaa4494c-12e4-4f9f-9b91-ca14280d235e" containerID="11a0496ba3742db0ec53ca631739528bbfded3333e2632ec2ca1a6b85e98d3bc" exitCode=0 Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.130528 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7bgv" event={"ID":"eaa4494c-12e4-4f9f-9b91-ca14280d235e","Type":"ContainerDied","Data":"11a0496ba3742db0ec53ca631739528bbfded3333e2632ec2ca1a6b85e98d3bc"} Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.130559 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7bgv" event={"ID":"eaa4494c-12e4-4f9f-9b91-ca14280d235e","Type":"ContainerDied","Data":"6f3ba2093ed594e14e44ba28ddac0c7fe2218ace9841baf02e4fd6d3a04af561"} Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.130567 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7bgv" Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.130586 4972 scope.go:117] "RemoveContainer" containerID="11a0496ba3742db0ec53ca631739528bbfded3333e2632ec2ca1a6b85e98d3bc" Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.153120 4972 scope.go:117] "RemoveContainer" containerID="524878008a6b71d731ba665c55e6cc38efafea84c74dc66ed0034473eddbffd9" Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.172841 4972 scope.go:117] "RemoveContainer" containerID="85a34e4dafabca97ebdb1f702f3035420bdf4a99442d363b4e8f73cf4c45cc45" Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.198723 4972 scope.go:117] "RemoveContainer" containerID="11a0496ba3742db0ec53ca631739528bbfded3333e2632ec2ca1a6b85e98d3bc" Nov 21 10:19:46 crc kubenswrapper[4972]: E1121 10:19:46.199254 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11a0496ba3742db0ec53ca631739528bbfded3333e2632ec2ca1a6b85e98d3bc\": container with ID starting with 11a0496ba3742db0ec53ca631739528bbfded3333e2632ec2ca1a6b85e98d3bc not found: ID does not exist" containerID="11a0496ba3742db0ec53ca631739528bbfded3333e2632ec2ca1a6b85e98d3bc" Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.199319 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11a0496ba3742db0ec53ca631739528bbfded3333e2632ec2ca1a6b85e98d3bc"} err="failed to get container status \"11a0496ba3742db0ec53ca631739528bbfded3333e2632ec2ca1a6b85e98d3bc\": rpc error: code = NotFound desc = could not find container \"11a0496ba3742db0ec53ca631739528bbfded3333e2632ec2ca1a6b85e98d3bc\": container with ID starting with 11a0496ba3742db0ec53ca631739528bbfded3333e2632ec2ca1a6b85e98d3bc not found: ID does not exist" Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.199359 4972 scope.go:117] "RemoveContainer" containerID="524878008a6b71d731ba665c55e6cc38efafea84c74dc66ed0034473eddbffd9" Nov 21 10:19:46 crc kubenswrapper[4972]: 
E1121 10:19:46.199751 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"524878008a6b71d731ba665c55e6cc38efafea84c74dc66ed0034473eddbffd9\": container with ID starting with 524878008a6b71d731ba665c55e6cc38efafea84c74dc66ed0034473eddbffd9 not found: ID does not exist" containerID="524878008a6b71d731ba665c55e6cc38efafea84c74dc66ed0034473eddbffd9" Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.199789 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"524878008a6b71d731ba665c55e6cc38efafea84c74dc66ed0034473eddbffd9"} err="failed to get container status \"524878008a6b71d731ba665c55e6cc38efafea84c74dc66ed0034473eddbffd9\": rpc error: code = NotFound desc = could not find container \"524878008a6b71d731ba665c55e6cc38efafea84c74dc66ed0034473eddbffd9\": container with ID starting with 524878008a6b71d731ba665c55e6cc38efafea84c74dc66ed0034473eddbffd9 not found: ID does not exist" Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.199812 4972 scope.go:117] "RemoveContainer" containerID="85a34e4dafabca97ebdb1f702f3035420bdf4a99442d363b4e8f73cf4c45cc45" Nov 21 10:19:46 crc kubenswrapper[4972]: E1121 10:19:46.200142 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85a34e4dafabca97ebdb1f702f3035420bdf4a99442d363b4e8f73cf4c45cc45\": container with ID starting with 85a34e4dafabca97ebdb1f702f3035420bdf4a99442d363b4e8f73cf4c45cc45 not found: ID does not exist" containerID="85a34e4dafabca97ebdb1f702f3035420bdf4a99442d363b4e8f73cf4c45cc45" Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.200172 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85a34e4dafabca97ebdb1f702f3035420bdf4a99442d363b4e8f73cf4c45cc45"} err="failed to get container status \"85a34e4dafabca97ebdb1f702f3035420bdf4a99442d363b4e8f73cf4c45cc45\": rpc error: code = NotFound desc = could not find container \"85a34e4dafabca97ebdb1f702f3035420bdf4a99442d363b4e8f73cf4c45cc45\": container with ID starting with 85a34e4dafabca97ebdb1f702f3035420bdf4a99442d363b4e8f73cf4c45cc45 not found: ID does not exist" Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.209210 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa4494c-12e4-4f9f-9b91-ca14280d235e-catalog-content\") pod \"eaa4494c-12e4-4f9f-9b91-ca14280d235e\" (UID: \"eaa4494c-12e4-4f9f-9b91-ca14280d235e\") " Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.209283 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa4494c-12e4-4f9f-9b91-ca14280d235e-utilities\") pod \"eaa4494c-12e4-4f9f-9b91-ca14280d235e\" (UID: \"eaa4494c-12e4-4f9f-9b91-ca14280d235e\") " Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.209344 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vpsg\" (UniqueName: \"kubernetes.io/projected/eaa4494c-12e4-4f9f-9b91-ca14280d235e-kube-api-access-9vpsg\") pod \"eaa4494c-12e4-4f9f-9b91-ca14280d235e\" (UID: \"eaa4494c-12e4-4f9f-9b91-ca14280d235e\") " Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.211021 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaa4494c-12e4-4f9f-9b91-ca14280d235e-utilities" 
(OuterVolumeSpecName: "utilities") pod "eaa4494c-12e4-4f9f-9b91-ca14280d235e" (UID: "eaa4494c-12e4-4f9f-9b91-ca14280d235e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.216561 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa4494c-12e4-4f9f-9b91-ca14280d235e-kube-api-access-9vpsg" (OuterVolumeSpecName: "kube-api-access-9vpsg") pod "eaa4494c-12e4-4f9f-9b91-ca14280d235e" (UID: "eaa4494c-12e4-4f9f-9b91-ca14280d235e"). InnerVolumeSpecName "kube-api-access-9vpsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.307531 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaa4494c-12e4-4f9f-9b91-ca14280d235e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eaa4494c-12e4-4f9f-9b91-ca14280d235e" (UID: "eaa4494c-12e4-4f9f-9b91-ca14280d235e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.311441 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vpsg\" (UniqueName: \"kubernetes.io/projected/eaa4494c-12e4-4f9f-9b91-ca14280d235e-kube-api-access-9vpsg\") on node \"crc\" DevicePath \"\"" Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.311478 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa4494c-12e4-4f9f-9b91-ca14280d235e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.311493 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa4494c-12e4-4f9f-9b91-ca14280d235e-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.490678 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s7bgv"] Nov 21 10:19:46 crc kubenswrapper[4972]: I1121 10:19:46.496392 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s7bgv"] Nov 21 10:19:47 crc kubenswrapper[4972]: I1121 10:19:47.773291 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaa4494c-12e4-4f9f-9b91-ca14280d235e" path="/var/lib/kubelet/pods/eaa4494c-12e4-4f9f-9b91-ca14280d235e/volumes" Nov 21 10:19:59 crc kubenswrapper[4972]: I1121 10:19:59.760476 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:19:59 crc kubenswrapper[4972]: E1121 10:19:59.761894 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:20:10 crc kubenswrapper[4972]: I1121 10:20:10.760208 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:20:10 crc kubenswrapper[4972]: E1121 10:20:10.761326 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:20:22 crc kubenswrapper[4972]: I1121 10:20:22.024512 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:20:22 crc kubenswrapper[4972]: E1121 10:20:22.025800 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:20:34 crc kubenswrapper[4972]: I1121 10:20:34.760439 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:20:34 crc kubenswrapper[4972]: E1121 10:20:34.761666 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:20:47 crc kubenswrapper[4972]: I1121 10:20:47.759255 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:20:47 crc kubenswrapper[4972]: E1121 10:20:47.760038 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:20:58 crc kubenswrapper[4972]: I1121 10:20:58.759677 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:20:58 crc kubenswrapper[4972]: E1121 10:20:58.760436 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:21:12 crc kubenswrapper[4972]: I1121 10:21:12.760106 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:21:12 crc kubenswrapper[4972]: E1121 10:21:12.760980 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:21:26 crc kubenswrapper[4972]: I1121 10:21:26.760540 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:21:26 crc kubenswrapper[4972]: E1121 10:21:26.761507 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:21:40 crc kubenswrapper[4972]: I1121 10:21:40.759399 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:21:40 crc kubenswrapper[4972]: E1121 10:21:40.761235 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:21:52 crc kubenswrapper[4972]: I1121 10:21:52.759947 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:21:52 crc kubenswrapper[4972]: E1121 10:21:52.760648 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:22:05 crc kubenswrapper[4972]: I1121 10:22:05.764871 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:22:05 crc kubenswrapper[4972]: E1121 10:22:05.765819 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:22:16 crc kubenswrapper[4972]: I1121 10:22:16.759398 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:22:16 crc kubenswrapper[4972]: E1121 10:22:16.763126 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" 
podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:22:28 crc kubenswrapper[4972]: I1121 10:22:28.760747 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:22:28 crc kubenswrapper[4972]: E1121 10:22:28.762017 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:22:42 crc kubenswrapper[4972]: I1121 10:22:42.760155 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:22:42 crc kubenswrapper[4972]: E1121 10:22:42.761382 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:22:54 crc kubenswrapper[4972]: I1121 10:22:54.759575 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:22:54 crc kubenswrapper[4972]: E1121 10:22:54.760708 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:23:06 crc kubenswrapper[4972]: I1121 10:23:06.760165 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:23:06 crc kubenswrapper[4972]: E1121 10:23:06.762414 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:23:19 crc kubenswrapper[4972]: I1121 10:23:19.760032 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:23:19 crc kubenswrapper[4972]: E1121 10:23:19.760973 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:23:34 crc kubenswrapper[4972]: I1121 10:23:34.760420 4972 scope.go:117] "RemoveContainer" 
containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:23:34 crc kubenswrapper[4972]: E1121 10:23:34.761516 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:23:46 crc kubenswrapper[4972]: I1121 10:23:46.759592 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:23:46 crc kubenswrapper[4972]: E1121 10:23:46.760690 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:23:59 crc kubenswrapper[4972]: I1121 10:23:59.760124 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:24:00 crc kubenswrapper[4972]: I1121 10:24:00.295977 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"24d92b3bf92e9fc4d14b9d9ef3388e6944ce616d9fa1664affe8230a14779b65"} Nov 21 10:26:26 crc kubenswrapper[4972]: I1121 10:26:26.179468 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:26:26 crc kubenswrapper[4972]: I1121 10:26:26.180289 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:26:56 crc kubenswrapper[4972]: I1121 10:26:56.179077 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:26:56 crc kubenswrapper[4972]: I1121 10:26:56.180676 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:27:03 crc kubenswrapper[4972]: I1121 10:27:03.654609 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-52gq8"] Nov 21 10:27:03 crc kubenswrapper[4972]: E1121 10:27:03.655684 4972 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="eaa4494c-12e4-4f9f-9b91-ca14280d235e" containerName="extract-content" Nov 21 10:27:03 crc kubenswrapper[4972]: I1121 10:27:03.655701 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa4494c-12e4-4f9f-9b91-ca14280d235e" containerName="extract-content" Nov 21 10:27:03 crc kubenswrapper[4972]: E1121 10:27:03.655718 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa4494c-12e4-4f9f-9b91-ca14280d235e" containerName="extract-utilities" Nov 21 10:27:03 crc kubenswrapper[4972]: I1121 10:27:03.655726 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa4494c-12e4-4f9f-9b91-ca14280d235e" containerName="extract-utilities" Nov 21 10:27:03 crc kubenswrapper[4972]: E1121 10:27:03.655761 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa4494c-12e4-4f9f-9b91-ca14280d235e" containerName="registry-server" Nov 21 10:27:03 crc kubenswrapper[4972]: I1121 10:27:03.655770 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa4494c-12e4-4f9f-9b91-ca14280d235e" containerName="registry-server" Nov 21 10:27:03 crc kubenswrapper[4972]: I1121 10:27:03.656343 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa4494c-12e4-4f9f-9b91-ca14280d235e" containerName="registry-server" Nov 21 10:27:03 crc kubenswrapper[4972]: I1121 10:27:03.657950 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52gq8" Nov 21 10:27:03 crc kubenswrapper[4972]: I1121 10:27:03.678973 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52gq8"] Nov 21 10:27:03 crc kubenswrapper[4972]: I1121 10:27:03.748255 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e8ea8d-f5e9-4573-899f-3701354f4850-catalog-content\") pod \"community-operators-52gq8\" (UID: \"34e8ea8d-f5e9-4573-899f-3701354f4850\") " pod="openshift-marketplace/community-operators-52gq8" Nov 21 10:27:03 crc kubenswrapper[4972]: I1121 10:27:03.748581 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e8ea8d-f5e9-4573-899f-3701354f4850-utilities\") pod \"community-operators-52gq8\" (UID: \"34e8ea8d-f5e9-4573-899f-3701354f4850\") " pod="openshift-marketplace/community-operators-52gq8" Nov 21 10:27:03 crc kubenswrapper[4972]: I1121 10:27:03.748629 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng9cq\" (UniqueName: \"kubernetes.io/projected/34e8ea8d-f5e9-4573-899f-3701354f4850-kube-api-access-ng9cq\") pod \"community-operators-52gq8\" (UID: \"34e8ea8d-f5e9-4573-899f-3701354f4850\") " pod="openshift-marketplace/community-operators-52gq8" Nov 21 10:27:03 crc kubenswrapper[4972]: I1121 10:27:03.850478 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e8ea8d-f5e9-4573-899f-3701354f4850-catalog-content\") pod \"community-operators-52gq8\" (UID: \"34e8ea8d-f5e9-4573-899f-3701354f4850\") " pod="openshift-marketplace/community-operators-52gq8" Nov 21 10:27:03 crc kubenswrapper[4972]: I1121 10:27:03.850562 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e8ea8d-f5e9-4573-899f-3701354f4850-utilities\") pod 
\"community-operators-52gq8\" (UID: \"34e8ea8d-f5e9-4573-899f-3701354f4850\") " pod="openshift-marketplace/community-operators-52gq8" Nov 21 10:27:03 crc kubenswrapper[4972]: I1121 10:27:03.850643 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng9cq\" (UniqueName: \"kubernetes.io/projected/34e8ea8d-f5e9-4573-899f-3701354f4850-kube-api-access-ng9cq\") pod \"community-operators-52gq8\" (UID: \"34e8ea8d-f5e9-4573-899f-3701354f4850\") " pod="openshift-marketplace/community-operators-52gq8" Nov 21 10:27:03 crc kubenswrapper[4972]: I1121 10:27:03.851468 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e8ea8d-f5e9-4573-899f-3701354f4850-catalog-content\") pod \"community-operators-52gq8\" (UID: \"34e8ea8d-f5e9-4573-899f-3701354f4850\") " pod="openshift-marketplace/community-operators-52gq8" Nov 21 10:27:03 crc kubenswrapper[4972]: I1121 10:27:03.851502 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e8ea8d-f5e9-4573-899f-3701354f4850-utilities\") pod \"community-operators-52gq8\" (UID: \"34e8ea8d-f5e9-4573-899f-3701354f4850\") " pod="openshift-marketplace/community-operators-52gq8" Nov 21 10:27:03 crc kubenswrapper[4972]: I1121 10:27:03.876896 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng9cq\" (UniqueName: \"kubernetes.io/projected/34e8ea8d-f5e9-4573-899f-3701354f4850-kube-api-access-ng9cq\") pod \"community-operators-52gq8\" (UID: \"34e8ea8d-f5e9-4573-899f-3701354f4850\") " pod="openshift-marketplace/community-operators-52gq8" Nov 21 10:27:04 crc kubenswrapper[4972]: I1121 10:27:04.003275 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-52gq8" Nov 21 10:27:04 crc kubenswrapper[4972]: I1121 10:27:04.293573 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52gq8"] Nov 21 10:27:05 crc kubenswrapper[4972]: I1121 10:27:05.102124 4972 generic.go:334] "Generic (PLEG): container finished" podID="34e8ea8d-f5e9-4573-899f-3701354f4850" containerID="f24d4b514277037513bb68c25455a75a451f50b5015ef27f4c661587404ba661" exitCode=0 Nov 21 10:27:05 crc kubenswrapper[4972]: I1121 10:27:05.102189 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52gq8" event={"ID":"34e8ea8d-f5e9-4573-899f-3701354f4850","Type":"ContainerDied","Data":"f24d4b514277037513bb68c25455a75a451f50b5015ef27f4c661587404ba661"} Nov 21 10:27:05 crc kubenswrapper[4972]: I1121 10:27:05.102226 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52gq8" event={"ID":"34e8ea8d-f5e9-4573-899f-3701354f4850","Type":"ContainerStarted","Data":"34242a4bed56e58e73c86d8dd2728366f64491a5123c6de626edfc51573d6a7a"} Nov 21 10:27:05 crc kubenswrapper[4972]: I1121 10:27:05.105254 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 10:27:07 crc kubenswrapper[4972]: I1121 10:27:07.117761 4972 generic.go:334] "Generic (PLEG): container finished" podID="34e8ea8d-f5e9-4573-899f-3701354f4850" containerID="96f9a25b5c765b738a8e7e1c5bd6eea067baca6460a1976086c0fe990a435b05" exitCode=0 Nov 21 10:27:07 crc kubenswrapper[4972]: I1121 10:27:07.117952 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52gq8" event={"ID":"34e8ea8d-f5e9-4573-899f-3701354f4850","Type":"ContainerDied","Data":"96f9a25b5c765b738a8e7e1c5bd6eea067baca6460a1976086c0fe990a435b05"} Nov 21 10:27:08 crc kubenswrapper[4972]: I1121 10:27:08.128610 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52gq8" event={"ID":"34e8ea8d-f5e9-4573-899f-3701354f4850","Type":"ContainerStarted","Data":"dbd99995e62aa5dd28f1d6614070f7c578eace0bb0ff1c55b620e46d44c8e52a"} Nov 21 10:27:08 crc kubenswrapper[4972]: I1121 10:27:08.145751 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-52gq8" podStartSLOduration=2.504030272 podStartE2EDuration="5.145729136s" podCreationTimestamp="2025-11-21 10:27:03 +0000 UTC" firstStartedPulling="2025-11-21 10:27:05.104714545 +0000 UTC m=+2770.213857083" lastFinishedPulling="2025-11-21 10:27:07.746413439 +0000 UTC m=+2772.855555947" observedRunningTime="2025-11-21 10:27:08.143774273 +0000 UTC m=+2773.252916811" watchObservedRunningTime="2025-11-21 10:27:08.145729136 +0000 UTC m=+2773.254871644" Nov 21 10:27:14 crc kubenswrapper[4972]: I1121 10:27:14.003557 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-52gq8" Nov 21 10:27:14 crc kubenswrapper[4972]: I1121 10:27:14.004130 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-52gq8" Nov 21 10:27:14 crc kubenswrapper[4972]: I1121 10:27:14.047866 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-52gq8" Nov 21 10:27:14 crc kubenswrapper[4972]: I1121 10:27:14.234263 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-52gq8" Nov 21 10:27:14 crc kubenswrapper[4972]: I1121 10:27:14.318121 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52gq8"] Nov 21 10:27:16 crc kubenswrapper[4972]: I1121 10:27:16.205064 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-52gq8" podUID="34e8ea8d-f5e9-4573-899f-3701354f4850" containerName="registry-server" containerID="cri-o://dbd99995e62aa5dd28f1d6614070f7c578eace0bb0ff1c55b620e46d44c8e52a" gracePeriod=2 Nov 21 10:27:16 crc kubenswrapper[4972]: I1121 10:27:16.597810 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52gq8" Nov 21 10:27:16 crc kubenswrapper[4972]: I1121 10:27:16.650364 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e8ea8d-f5e9-4573-899f-3701354f4850-utilities\") pod \"34e8ea8d-f5e9-4573-899f-3701354f4850\" (UID: \"34e8ea8d-f5e9-4573-899f-3701354f4850\") " Nov 21 10:27:16 crc kubenswrapper[4972]: I1121 10:27:16.650467 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e8ea8d-f5e9-4573-899f-3701354f4850-catalog-content\") pod \"34e8ea8d-f5e9-4573-899f-3701354f4850\" (UID: \"34e8ea8d-f5e9-4573-899f-3701354f4850\") " Nov 21 10:27:16 crc kubenswrapper[4972]: I1121 10:27:16.650505 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng9cq\" (UniqueName: \"kubernetes.io/projected/34e8ea8d-f5e9-4573-899f-3701354f4850-kube-api-access-ng9cq\") pod \"34e8ea8d-f5e9-4573-899f-3701354f4850\" (UID: \"34e8ea8d-f5e9-4573-899f-3701354f4850\") " Nov 21 10:27:16 crc kubenswrapper[4972]: I1121 10:27:16.651079 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34e8ea8d-f5e9-4573-899f-3701354f4850-utilities" (OuterVolumeSpecName: "utilities") pod "34e8ea8d-f5e9-4573-899f-3701354f4850" (UID: "34e8ea8d-f5e9-4573-899f-3701354f4850"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:27:16 crc kubenswrapper[4972]: I1121 10:27:16.655895 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e8ea8d-f5e9-4573-899f-3701354f4850-kube-api-access-ng9cq" (OuterVolumeSpecName: "kube-api-access-ng9cq") pod "34e8ea8d-f5e9-4573-899f-3701354f4850" (UID: "34e8ea8d-f5e9-4573-899f-3701354f4850"). InnerVolumeSpecName "kube-api-access-ng9cq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:27:16 crc kubenswrapper[4972]: I1121 10:27:16.709708 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34e8ea8d-f5e9-4573-899f-3701354f4850-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34e8ea8d-f5e9-4573-899f-3701354f4850" (UID: "34e8ea8d-f5e9-4573-899f-3701354f4850"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:27:16 crc kubenswrapper[4972]: I1121 10:27:16.752067 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e8ea8d-f5e9-4573-899f-3701354f4850-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 10:27:16 crc kubenswrapper[4972]: I1121 10:27:16.752105 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e8ea8d-f5e9-4573-899f-3701354f4850-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 10:27:16 crc kubenswrapper[4972]: I1121 10:27:16.752124 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng9cq\" (UniqueName: \"kubernetes.io/projected/34e8ea8d-f5e9-4573-899f-3701354f4850-kube-api-access-ng9cq\") on node \"crc\" DevicePath \"\"" Nov 21 10:27:17 crc kubenswrapper[4972]: I1121 10:27:17.219282 4972 generic.go:334] "Generic (PLEG): container finished" podID="34e8ea8d-f5e9-4573-899f-3701354f4850" containerID="dbd99995e62aa5dd28f1d6614070f7c578eace0bb0ff1c55b620e46d44c8e52a" exitCode=0 Nov 21 10:27:17 crc kubenswrapper[4972]: I1121 10:27:17.219334 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52gq8" event={"ID":"34e8ea8d-f5e9-4573-899f-3701354f4850","Type":"ContainerDied","Data":"dbd99995e62aa5dd28f1d6614070f7c578eace0bb0ff1c55b620e46d44c8e52a"} Nov 21 10:27:17 crc kubenswrapper[4972]: I1121 10:27:17.219369 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52gq8" event={"ID":"34e8ea8d-f5e9-4573-899f-3701354f4850","Type":"ContainerDied","Data":"34242a4bed56e58e73c86d8dd2728366f64491a5123c6de626edfc51573d6a7a"} Nov 21 10:27:17 crc kubenswrapper[4972]: I1121 10:27:17.219371 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-52gq8" Nov 21 10:27:17 crc kubenswrapper[4972]: I1121 10:27:17.219391 4972 scope.go:117] "RemoveContainer" containerID="dbd99995e62aa5dd28f1d6614070f7c578eace0bb0ff1c55b620e46d44c8e52a" Nov 21 10:27:17 crc kubenswrapper[4972]: I1121 10:27:17.237016 4972 scope.go:117] "RemoveContainer" containerID="96f9a25b5c765b738a8e7e1c5bd6eea067baca6460a1976086c0fe990a435b05" Nov 21 10:27:17 crc kubenswrapper[4972]: I1121 10:27:17.251331 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52gq8"] Nov 21 10:27:17 crc kubenswrapper[4972]: I1121 10:27:17.256450 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-52gq8"] Nov 21 10:27:17 crc kubenswrapper[4972]: I1121 10:27:17.268919 4972 scope.go:117] "RemoveContainer" containerID="f24d4b514277037513bb68c25455a75a451f50b5015ef27f4c661587404ba661" Nov 21 10:27:17 crc kubenswrapper[4972]: I1121 10:27:17.315590 4972 scope.go:117] "RemoveContainer" containerID="dbd99995e62aa5dd28f1d6614070f7c578eace0bb0ff1c55b620e46d44c8e52a" Nov 21 10:27:17 crc kubenswrapper[4972]: E1121 10:27:17.315933 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbd99995e62aa5dd28f1d6614070f7c578eace0bb0ff1c55b620e46d44c8e52a\": container with ID starting with dbd99995e62aa5dd28f1d6614070f7c578eace0bb0ff1c55b620e46d44c8e52a not found: ID does not exist" containerID="dbd99995e62aa5dd28f1d6614070f7c578eace0bb0ff1c55b620e46d44c8e52a" Nov 21 10:27:17 crc kubenswrapper[4972]: I1121 10:27:17.316114 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd99995e62aa5dd28f1d6614070f7c578eace0bb0ff1c55b620e46d44c8e52a"} err="failed to get container status \"dbd99995e62aa5dd28f1d6614070f7c578eace0bb0ff1c55b620e46d44c8e52a\": rpc error: code = NotFound desc = could not find container \"dbd99995e62aa5dd28f1d6614070f7c578eace0bb0ff1c55b620e46d44c8e52a\": container with ID starting with dbd99995e62aa5dd28f1d6614070f7c578eace0bb0ff1c55b620e46d44c8e52a not found: ID does not exist" Nov 21 10:27:17 crc kubenswrapper[4972]: I1121 10:27:17.316138 4972 scope.go:117] "RemoveContainer" containerID="96f9a25b5c765b738a8e7e1c5bd6eea067baca6460a1976086c0fe990a435b05" Nov 21 10:27:17 crc kubenswrapper[4972]: E1121 10:27:17.316352 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96f9a25b5c765b738a8e7e1c5bd6eea067baca6460a1976086c0fe990a435b05\": container with ID starting with 96f9a25b5c765b738a8e7e1c5bd6eea067baca6460a1976086c0fe990a435b05 not found: ID does not exist" containerID="96f9a25b5c765b738a8e7e1c5bd6eea067baca6460a1976086c0fe990a435b05" Nov 21 10:27:17 crc kubenswrapper[4972]: I1121 10:27:17.316384 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f9a25b5c765b738a8e7e1c5bd6eea067baca6460a1976086c0fe990a435b05"} err="failed to get container status \"96f9a25b5c765b738a8e7e1c5bd6eea067baca6460a1976086c0fe990a435b05\": rpc error: code = NotFound desc = could not find container \"96f9a25b5c765b738a8e7e1c5bd6eea067baca6460a1976086c0fe990a435b05\": container with ID starting with 96f9a25b5c765b738a8e7e1c5bd6eea067baca6460a1976086c0fe990a435b05 not found: ID does not exist" Nov 21 10:27:17 crc kubenswrapper[4972]: I1121 10:27:17.316400 4972 scope.go:117] "RemoveContainer" 
containerID="f24d4b514277037513bb68c25455a75a451f50b5015ef27f4c661587404ba661" Nov 21 10:27:17 crc kubenswrapper[4972]: E1121 10:27:17.316575 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f24d4b514277037513bb68c25455a75a451f50b5015ef27f4c661587404ba661\": container with ID starting with f24d4b514277037513bb68c25455a75a451f50b5015ef27f4c661587404ba661 not found: ID does not exist" containerID="f24d4b514277037513bb68c25455a75a451f50b5015ef27f4c661587404ba661" Nov 21 10:27:17 crc kubenswrapper[4972]: I1121 10:27:17.316594 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f24d4b514277037513bb68c25455a75a451f50b5015ef27f4c661587404ba661"} err="failed to get container status \"f24d4b514277037513bb68c25455a75a451f50b5015ef27f4c661587404ba661\": rpc error: code = NotFound desc = could not find container \"f24d4b514277037513bb68c25455a75a451f50b5015ef27f4c661587404ba661\": container with ID starting with f24d4b514277037513bb68c25455a75a451f50b5015ef27f4c661587404ba661 not found: ID does not exist" Nov 21 10:27:17 crc kubenswrapper[4972]: I1121 10:27:17.769135 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34e8ea8d-f5e9-4573-899f-3701354f4850" path="/var/lib/kubelet/pods/34e8ea8d-f5e9-4573-899f-3701354f4850/volumes" Nov 21 10:27:26 crc kubenswrapper[4972]: I1121 10:27:26.178705 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:27:26 crc kubenswrapper[4972]: I1121 10:27:26.179143 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:27:26 crc kubenswrapper[4972]: I1121 10:27:26.179199 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 10:27:26 crc kubenswrapper[4972]: I1121 10:27:26.179900 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"24d92b3bf92e9fc4d14b9d9ef3388e6944ce616d9fa1664affe8230a14779b65"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 10:27:26 crc kubenswrapper[4972]: I1121 10:27:26.179953 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://24d92b3bf92e9fc4d14b9d9ef3388e6944ce616d9fa1664affe8230a14779b65" gracePeriod=600 Nov 21 10:27:27 crc kubenswrapper[4972]: I1121 10:27:27.320072 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="24d92b3bf92e9fc4d14b9d9ef3388e6944ce616d9fa1664affe8230a14779b65" exitCode=0 Nov 21 10:27:27 crc kubenswrapper[4972]: I1121 10:27:27.320394 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"24d92b3bf92e9fc4d14b9d9ef3388e6944ce616d9fa1664affe8230a14779b65"} Nov 21 10:27:27 crc kubenswrapper[4972]: I1121 10:27:27.320901 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3"} Nov 21 10:27:27 crc kubenswrapper[4972]: I1121 10:27:27.320955 4972 scope.go:117] "RemoveContainer" containerID="944c9fc9154a530c4a8122bd76fedde349e416d94dafc59053a2883754f56a4a" Nov 21 10:28:45 crc kubenswrapper[4972]: I1121 10:28:45.644234 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zg42f"] Nov 21 10:28:45 crc kubenswrapper[4972]: E1121 10:28:45.645197 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e8ea8d-f5e9-4573-899f-3701354f4850" containerName="registry-server" Nov 21 10:28:45 crc kubenswrapper[4972]: I1121 10:28:45.645213 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e8ea8d-f5e9-4573-899f-3701354f4850" containerName="registry-server" Nov 21 10:28:45 crc kubenswrapper[4972]: E1121 10:28:45.645228 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e8ea8d-f5e9-4573-899f-3701354f4850" containerName="extract-content" Nov 21 10:28:45 crc kubenswrapper[4972]: I1121 10:28:45.645235 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e8ea8d-f5e9-4573-899f-3701354f4850" containerName="extract-content" Nov 21 10:28:45 crc kubenswrapper[4972]: E1121 10:28:45.645244 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e8ea8d-f5e9-4573-899f-3701354f4850" containerName="extract-utilities" Nov 21 10:28:45 crc kubenswrapper[4972]: I1121 10:28:45.645252 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e8ea8d-f5e9-4573-899f-3701354f4850" containerName="extract-utilities" Nov 21 10:28:45 crc kubenswrapper[4972]: I1121 10:28:45.645445 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="34e8ea8d-f5e9-4573-899f-3701354f4850" containerName="registry-server" Nov 21 10:28:45 crc kubenswrapper[4972]: I1121 10:28:45.646646 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zg42f" Nov 21 10:28:45 crc kubenswrapper[4972]: I1121 10:28:45.656063 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zg42f"] Nov 21 10:28:45 crc kubenswrapper[4972]: I1121 10:28:45.761680 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w697f\" (UniqueName: \"kubernetes.io/projected/5ee828b8-760c-424f-bb1f-2242ab12e86a-kube-api-access-w697f\") pod \"redhat-marketplace-zg42f\" (UID: \"5ee828b8-760c-424f-bb1f-2242ab12e86a\") " pod="openshift-marketplace/redhat-marketplace-zg42f" Nov 21 10:28:45 crc kubenswrapper[4972]: I1121 10:28:45.761853 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee828b8-760c-424f-bb1f-2242ab12e86a-catalog-content\") pod \"redhat-marketplace-zg42f\" (UID: \"5ee828b8-760c-424f-bb1f-2242ab12e86a\") " pod="openshift-marketplace/redhat-marketplace-zg42f" Nov 21 10:28:45 crc kubenswrapper[4972]: I1121 10:28:45.762019 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee828b8-760c-424f-bb1f-2242ab12e86a-utilities\") pod \"redhat-marketplace-zg42f\" (UID: \"5ee828b8-760c-424f-bb1f-2242ab12e86a\") " pod="openshift-marketplace/redhat-marketplace-zg42f" Nov 21 10:28:45 crc kubenswrapper[4972]: I1121 10:28:45.863471 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w697f\" (UniqueName: \"kubernetes.io/projected/5ee828b8-760c-424f-bb1f-2242ab12e86a-kube-api-access-w697f\") pod \"redhat-marketplace-zg42f\" (UID: \"5ee828b8-760c-424f-bb1f-2242ab12e86a\") " pod="openshift-marketplace/redhat-marketplace-zg42f" Nov 21 10:28:45 crc kubenswrapper[4972]: I1121 10:28:45.863545 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee828b8-760c-424f-bb1f-2242ab12e86a-catalog-content\") pod \"redhat-marketplace-zg42f\" (UID: \"5ee828b8-760c-424f-bb1f-2242ab12e86a\") " pod="openshift-marketplace/redhat-marketplace-zg42f" Nov 21 10:28:45 crc kubenswrapper[4972]: I1121 10:28:45.863645 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee828b8-760c-424f-bb1f-2242ab12e86a-utilities\") pod \"redhat-marketplace-zg42f\" (UID: \"5ee828b8-760c-424f-bb1f-2242ab12e86a\") " pod="openshift-marketplace/redhat-marketplace-zg42f" Nov 21 10:28:45 crc kubenswrapper[4972]: I1121 10:28:45.864083 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee828b8-760c-424f-bb1f-2242ab12e86a-catalog-content\") pod \"redhat-marketplace-zg42f\" (UID: \"5ee828b8-760c-424f-bb1f-2242ab12e86a\") " pod="openshift-marketplace/redhat-marketplace-zg42f" Nov 21 10:28:45 crc kubenswrapper[4972]: I1121 10:28:45.864235 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee828b8-760c-424f-bb1f-2242ab12e86a-utilities\") pod \"redhat-marketplace-zg42f\" (UID: \"5ee828b8-760c-424f-bb1f-2242ab12e86a\") " pod="openshift-marketplace/redhat-marketplace-zg42f" Nov 21 10:28:45 crc kubenswrapper[4972]: I1121 10:28:45.892336 4972 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-w697f\" (UniqueName: \"kubernetes.io/projected/5ee828b8-760c-424f-bb1f-2242ab12e86a-kube-api-access-w697f\") pod \"redhat-marketplace-zg42f\" (UID: \"5ee828b8-760c-424f-bb1f-2242ab12e86a\") " pod="openshift-marketplace/redhat-marketplace-zg42f" Nov 21 10:28:45 crc kubenswrapper[4972]: I1121 10:28:45.969572 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zg42f" Nov 21 10:28:46 crc kubenswrapper[4972]: I1121 10:28:46.380402 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zg42f"] Nov 21 10:28:47 crc kubenswrapper[4972]: I1121 10:28:47.097028 4972 generic.go:334] "Generic (PLEG): container finished" podID="5ee828b8-760c-424f-bb1f-2242ab12e86a" containerID="99be482e2add2c6ddcefadc163303efe5149361ae289217f878d3cbd9521f193" exitCode=0 Nov 21 10:28:47 crc kubenswrapper[4972]: I1121 10:28:47.097111 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zg42f" event={"ID":"5ee828b8-760c-424f-bb1f-2242ab12e86a","Type":"ContainerDied","Data":"99be482e2add2c6ddcefadc163303efe5149361ae289217f878d3cbd9521f193"} Nov 21 10:28:47 crc kubenswrapper[4972]: I1121 10:28:47.097425 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zg42f" event={"ID":"5ee828b8-760c-424f-bb1f-2242ab12e86a","Type":"ContainerStarted","Data":"3f3948c1f05d830a4f1215d09b542a7734e9a79b3aeb3472b39f5189b32b6b81"} Nov 21 10:28:48 crc kubenswrapper[4972]: I1121 10:28:48.108347 4972 generic.go:334] "Generic (PLEG): container finished" podID="5ee828b8-760c-424f-bb1f-2242ab12e86a" containerID="6d6d8bf5128bb8d9840f3f3fc7998a37af33be460d64e0731634b7d51b0092e2" exitCode=0 Nov 21 10:28:48 crc kubenswrapper[4972]: I1121 10:28:48.108527 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zg42f" event={"ID":"5ee828b8-760c-424f-bb1f-2242ab12e86a","Type":"ContainerDied","Data":"6d6d8bf5128bb8d9840f3f3fc7998a37af33be460d64e0731634b7d51b0092e2"} Nov 21 10:28:49 crc kubenswrapper[4972]: I1121 10:28:49.122125 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zg42f" event={"ID":"5ee828b8-760c-424f-bb1f-2242ab12e86a","Type":"ContainerStarted","Data":"2745125d2d157a4851b2b2392d846e063643ac3afead0d9e1df286edf5b87f45"} Nov 21 10:28:49 crc kubenswrapper[4972]: I1121 10:28:49.153207 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zg42f" podStartSLOduration=2.761351043 podStartE2EDuration="4.153186153s" podCreationTimestamp="2025-11-21 10:28:45 +0000 UTC" firstStartedPulling="2025-11-21 10:28:47.099067245 +0000 UTC m=+2872.208209743" lastFinishedPulling="2025-11-21 10:28:48.490902315 +0000 UTC m=+2873.600044853" observedRunningTime="2025-11-21 10:28:49.150293356 +0000 UTC m=+2874.259435874" watchObservedRunningTime="2025-11-21 10:28:49.153186153 +0000 UTC m=+2874.262328671" Nov 21 10:28:55 crc kubenswrapper[4972]: I1121 10:28:55.971140 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zg42f" Nov 21 10:28:55 crc kubenswrapper[4972]: I1121 10:28:55.973167 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zg42f" Nov 21 10:28:56 crc kubenswrapper[4972]: I1121 10:28:56.033474 4972 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zg42f" Nov 21 10:28:56 crc kubenswrapper[4972]: I1121 10:28:56.239470 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zg42f" Nov 21 10:28:56 crc kubenswrapper[4972]: I1121 10:28:56.287389 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zg42f"] Nov 21 10:28:58 crc kubenswrapper[4972]: I1121 10:28:58.210008 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zg42f" podUID="5ee828b8-760c-424f-bb1f-2242ab12e86a" containerName="registry-server" containerID="cri-o://2745125d2d157a4851b2b2392d846e063643ac3afead0d9e1df286edf5b87f45" gracePeriod=2 Nov 21 10:28:58 crc kubenswrapper[4972]: I1121 10:28:58.588559 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zg42f" Nov 21 10:28:58 crc kubenswrapper[4972]: I1121 10:28:58.754914 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee828b8-760c-424f-bb1f-2242ab12e86a-utilities\") pod \"5ee828b8-760c-424f-bb1f-2242ab12e86a\" (UID: \"5ee828b8-760c-424f-bb1f-2242ab12e86a\") " Nov 21 10:28:58 crc kubenswrapper[4972]: I1121 10:28:58.755006 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee828b8-760c-424f-bb1f-2242ab12e86a-catalog-content\") pod \"5ee828b8-760c-424f-bb1f-2242ab12e86a\" (UID: \"5ee828b8-760c-424f-bb1f-2242ab12e86a\") " Nov 21 10:28:58 crc kubenswrapper[4972]: I1121 10:28:58.755229 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w697f\" (UniqueName: \"kubernetes.io/projected/5ee828b8-760c-424f-bb1f-2242ab12e86a-kube-api-access-w697f\") pod \"5ee828b8-760c-424f-bb1f-2242ab12e86a\" (UID: \"5ee828b8-760c-424f-bb1f-2242ab12e86a\") " Nov 21 10:28:58 crc kubenswrapper[4972]: I1121 10:28:58.755979 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ee828b8-760c-424f-bb1f-2242ab12e86a-utilities" (OuterVolumeSpecName: "utilities") pod "5ee828b8-760c-424f-bb1f-2242ab12e86a" (UID: "5ee828b8-760c-424f-bb1f-2242ab12e86a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:28:58 crc kubenswrapper[4972]: I1121 10:28:58.765496 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ee828b8-760c-424f-bb1f-2242ab12e86a-kube-api-access-w697f" (OuterVolumeSpecName: "kube-api-access-w697f") pod "5ee828b8-760c-424f-bb1f-2242ab12e86a" (UID: "5ee828b8-760c-424f-bb1f-2242ab12e86a"). InnerVolumeSpecName "kube-api-access-w697f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:28:58 crc kubenswrapper[4972]: I1121 10:28:58.792683 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ee828b8-760c-424f-bb1f-2242ab12e86a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ee828b8-760c-424f-bb1f-2242ab12e86a" (UID: "5ee828b8-760c-424f-bb1f-2242ab12e86a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:28:58 crc kubenswrapper[4972]: I1121 10:28:58.856969 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w697f\" (UniqueName: \"kubernetes.io/projected/5ee828b8-760c-424f-bb1f-2242ab12e86a-kube-api-access-w697f\") on node \"crc\" DevicePath \"\"" Nov 21 10:28:58 crc kubenswrapper[4972]: I1121 10:28:58.857034 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee828b8-760c-424f-bb1f-2242ab12e86a-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 10:28:58 crc kubenswrapper[4972]: I1121 10:28:58.857052 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee828b8-760c-424f-bb1f-2242ab12e86a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 10:28:59 crc kubenswrapper[4972]: I1121 10:28:59.220027 4972 generic.go:334] "Generic (PLEG): container finished" podID="5ee828b8-760c-424f-bb1f-2242ab12e86a" containerID="2745125d2d157a4851b2b2392d846e063643ac3afead0d9e1df286edf5b87f45" exitCode=0 Nov 21 10:28:59 crc kubenswrapper[4972]: I1121 10:28:59.220089 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zg42f" Nov 21 10:28:59 crc kubenswrapper[4972]: I1121 10:28:59.220112 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zg42f" event={"ID":"5ee828b8-760c-424f-bb1f-2242ab12e86a","Type":"ContainerDied","Data":"2745125d2d157a4851b2b2392d846e063643ac3afead0d9e1df286edf5b87f45"} Nov 21 10:28:59 crc kubenswrapper[4972]: I1121 10:28:59.220165 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zg42f" event={"ID":"5ee828b8-760c-424f-bb1f-2242ab12e86a","Type":"ContainerDied","Data":"3f3948c1f05d830a4f1215d09b542a7734e9a79b3aeb3472b39f5189b32b6b81"} Nov 21 10:28:59 crc kubenswrapper[4972]: I1121 10:28:59.220193 4972 scope.go:117] "RemoveContainer" containerID="2745125d2d157a4851b2b2392d846e063643ac3afead0d9e1df286edf5b87f45" Nov 21 10:28:59 crc kubenswrapper[4972]: I1121 10:28:59.248738 4972 scope.go:117] "RemoveContainer" containerID="6d6d8bf5128bb8d9840f3f3fc7998a37af33be460d64e0731634b7d51b0092e2" Nov 21 10:28:59 crc kubenswrapper[4972]: I1121 10:28:59.261985 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zg42f"] Nov 21 10:28:59 crc kubenswrapper[4972]: I1121 10:28:59.281026 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zg42f"] Nov 21 10:28:59 crc kubenswrapper[4972]: I1121 10:28:59.296745 4972 scope.go:117] "RemoveContainer" containerID="99be482e2add2c6ddcefadc163303efe5149361ae289217f878d3cbd9521f193" Nov 21 10:28:59 crc kubenswrapper[4972]: I1121 10:28:59.309562 4972 scope.go:117] "RemoveContainer" containerID="2745125d2d157a4851b2b2392d846e063643ac3afead0d9e1df286edf5b87f45" Nov 21 10:28:59 crc kubenswrapper[4972]: E1121 10:28:59.309890 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2745125d2d157a4851b2b2392d846e063643ac3afead0d9e1df286edf5b87f45\": container with ID starting with 2745125d2d157a4851b2b2392d846e063643ac3afead0d9e1df286edf5b87f45 not found: ID does not exist" containerID="2745125d2d157a4851b2b2392d846e063643ac3afead0d9e1df286edf5b87f45" Nov 21 10:28:59 crc kubenswrapper[4972]: I1121 10:28:59.309941 4972 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2745125d2d157a4851b2b2392d846e063643ac3afead0d9e1df286edf5b87f45"} err="failed to get container status \"2745125d2d157a4851b2b2392d846e063643ac3afead0d9e1df286edf5b87f45\": rpc error: code = NotFound desc = could not find container \"2745125d2d157a4851b2b2392d846e063643ac3afead0d9e1df286edf5b87f45\": container with ID starting with 2745125d2d157a4851b2b2392d846e063643ac3afead0d9e1df286edf5b87f45 not found: ID does not exist" Nov 21 10:28:59 crc kubenswrapper[4972]: I1121 10:28:59.309969 4972 scope.go:117] "RemoveContainer" containerID="6d6d8bf5128bb8d9840f3f3fc7998a37af33be460d64e0731634b7d51b0092e2" Nov 21 10:28:59 crc kubenswrapper[4972]: E1121 10:28:59.310284 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d6d8bf5128bb8d9840f3f3fc7998a37af33be460d64e0731634b7d51b0092e2\": container with ID starting with 6d6d8bf5128bb8d9840f3f3fc7998a37af33be460d64e0731634b7d51b0092e2 not found: ID does not exist" containerID="6d6d8bf5128bb8d9840f3f3fc7998a37af33be460d64e0731634b7d51b0092e2" Nov 21 10:28:59 crc kubenswrapper[4972]: I1121 10:28:59.310305 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d6d8bf5128bb8d9840f3f3fc7998a37af33be460d64e0731634b7d51b0092e2"} err="failed to get container status \"6d6d8bf5128bb8d9840f3f3fc7998a37af33be460d64e0731634b7d51b0092e2\": rpc error: code = NotFound desc = could not find container \"6d6d8bf5128bb8d9840f3f3fc7998a37af33be460d64e0731634b7d51b0092e2\": container with ID starting with 6d6d8bf5128bb8d9840f3f3fc7998a37af33be460d64e0731634b7d51b0092e2 not found: ID does not exist" Nov 21 10:28:59 crc kubenswrapper[4972]: I1121 10:28:59.310322 4972 scope.go:117] "RemoveContainer" containerID="99be482e2add2c6ddcefadc163303efe5149361ae289217f878d3cbd9521f193" Nov 21 10:28:59 crc kubenswrapper[4972]: E1121 10:28:59.310618 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99be482e2add2c6ddcefadc163303efe5149361ae289217f878d3cbd9521f193\": container with ID starting with 99be482e2add2c6ddcefadc163303efe5149361ae289217f878d3cbd9521f193 not found: ID does not exist" containerID="99be482e2add2c6ddcefadc163303efe5149361ae289217f878d3cbd9521f193" Nov 21 10:28:59 crc kubenswrapper[4972]: I1121 10:28:59.310651 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99be482e2add2c6ddcefadc163303efe5149361ae289217f878d3cbd9521f193"} err="failed to get container status \"99be482e2add2c6ddcefadc163303efe5149361ae289217f878d3cbd9521f193\": rpc error: code = NotFound desc = could not find container \"99be482e2add2c6ddcefadc163303efe5149361ae289217f878d3cbd9521f193\": container with ID starting with 99be482e2add2c6ddcefadc163303efe5149361ae289217f878d3cbd9521f193 not found: ID does not exist" Nov 21 10:28:59 crc kubenswrapper[4972]: I1121 10:28:59.770874 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ee828b8-760c-424f-bb1f-2242ab12e86a" path="/var/lib/kubelet/pods/5ee828b8-760c-424f-bb1f-2242ab12e86a/volumes" Nov 21 10:29:26 crc kubenswrapper[4972]: I1121 10:29:26.180484 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:29:26 crc kubenswrapper[4972]: I1121 10:29:26.181233 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:29:37 crc kubenswrapper[4972]: I1121 10:29:37.485044 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-284zp"] Nov 21 10:29:37 crc kubenswrapper[4972]: E1121 10:29:37.485925 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ee828b8-760c-424f-bb1f-2242ab12e86a" containerName="registry-server" Nov 21 10:29:37 crc kubenswrapper[4972]: I1121 10:29:37.485940 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee828b8-760c-424f-bb1f-2242ab12e86a" containerName="registry-server" Nov 21 10:29:37 crc kubenswrapper[4972]: E1121 10:29:37.485956 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ee828b8-760c-424f-bb1f-2242ab12e86a" containerName="extract-content" Nov 21 10:29:37 crc kubenswrapper[4972]: I1121 10:29:37.485993 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee828b8-760c-424f-bb1f-2242ab12e86a" containerName="extract-content" Nov 21 10:29:37 crc kubenswrapper[4972]: E1121 10:29:37.486027 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ee828b8-760c-424f-bb1f-2242ab12e86a" containerName="extract-utilities" Nov 21 10:29:37 crc kubenswrapper[4972]: I1121 10:29:37.486034 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee828b8-760c-424f-bb1f-2242ab12e86a" containerName="extract-utilities" Nov 21 10:29:37 crc kubenswrapper[4972]: I1121 10:29:37.486183 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ee828b8-760c-424f-bb1f-2242ab12e86a" containerName="registry-server" Nov 21 10:29:37 crc kubenswrapper[4972]: I1121 10:29:37.487214 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-284zp" Nov 21 10:29:37 crc kubenswrapper[4972]: I1121 10:29:37.495228 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-284zp"] Nov 21 10:29:37 crc kubenswrapper[4972]: I1121 10:29:37.639098 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7lxb\" (UniqueName: \"kubernetes.io/projected/79521035-8282-43fd-9325-623a9e8d0a5e-kube-api-access-g7lxb\") pod \"certified-operators-284zp\" (UID: \"79521035-8282-43fd-9325-623a9e8d0a5e\") " pod="openshift-marketplace/certified-operators-284zp" Nov 21 10:29:37 crc kubenswrapper[4972]: I1121 10:29:37.639216 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79521035-8282-43fd-9325-623a9e8d0a5e-utilities\") pod \"certified-operators-284zp\" (UID: \"79521035-8282-43fd-9325-623a9e8d0a5e\") " pod="openshift-marketplace/certified-operators-284zp" Nov 21 10:29:37 crc kubenswrapper[4972]: I1121 10:29:37.639288 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79521035-8282-43fd-9325-623a9e8d0a5e-catalog-content\") pod \"certified-operators-284zp\" (UID: \"79521035-8282-43fd-9325-623a9e8d0a5e\") " pod="openshift-marketplace/certified-operators-284zp" Nov 21 10:29:37 crc kubenswrapper[4972]: I1121 10:29:37.741022 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7lxb\" (UniqueName: \"kubernetes.io/projected/79521035-8282-43fd-9325-623a9e8d0a5e-kube-api-access-g7lxb\") pod \"certified-operators-284zp\" (UID: \"79521035-8282-43fd-9325-623a9e8d0a5e\") " pod="openshift-marketplace/certified-operators-284zp" Nov 21 10:29:37 crc kubenswrapper[4972]: I1121 10:29:37.741073 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79521035-8282-43fd-9325-623a9e8d0a5e-utilities\") pod \"certified-operators-284zp\" (UID: \"79521035-8282-43fd-9325-623a9e8d0a5e\") " pod="openshift-marketplace/certified-operators-284zp" Nov 21 10:29:37 crc kubenswrapper[4972]: I1121 10:29:37.741109 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79521035-8282-43fd-9325-623a9e8d0a5e-catalog-content\") pod \"certified-operators-284zp\" (UID: \"79521035-8282-43fd-9325-623a9e8d0a5e\") " pod="openshift-marketplace/certified-operators-284zp" Nov 21 10:29:37 crc kubenswrapper[4972]: I1121 10:29:37.741619 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79521035-8282-43fd-9325-623a9e8d0a5e-catalog-content\") pod \"certified-operators-284zp\" (UID: \"79521035-8282-43fd-9325-623a9e8d0a5e\") " pod="openshift-marketplace/certified-operators-284zp" Nov 21 10:29:37 crc kubenswrapper[4972]: I1121 10:29:37.741619 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79521035-8282-43fd-9325-623a9e8d0a5e-utilities\") pod \"certified-operators-284zp\" (UID: \"79521035-8282-43fd-9325-623a9e8d0a5e\") " pod="openshift-marketplace/certified-operators-284zp" Nov 21 10:29:37 crc kubenswrapper[4972]: I1121 10:29:37.759323 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-g7lxb\" (UniqueName: \"kubernetes.io/projected/79521035-8282-43fd-9325-623a9e8d0a5e-kube-api-access-g7lxb\") pod \"certified-operators-284zp\" (UID: \"79521035-8282-43fd-9325-623a9e8d0a5e\") " pod="openshift-marketplace/certified-operators-284zp" Nov 21 10:29:37 crc kubenswrapper[4972]: I1121 10:29:37.816657 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-284zp" Nov 21 10:29:38 crc kubenswrapper[4972]: I1121 10:29:38.282794 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-284zp"] Nov 21 10:29:38 crc kubenswrapper[4972]: I1121 10:29:38.540443 4972 generic.go:334] "Generic (PLEG): container finished" podID="79521035-8282-43fd-9325-623a9e8d0a5e" containerID="7f5f2b3300da5cd220c95247f7e50e11a86b4081b3eb63630fbbfb3cd2d740c4" exitCode=0 Nov 21 10:29:38 crc kubenswrapper[4972]: I1121 10:29:38.540483 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-284zp" event={"ID":"79521035-8282-43fd-9325-623a9e8d0a5e","Type":"ContainerDied","Data":"7f5f2b3300da5cd220c95247f7e50e11a86b4081b3eb63630fbbfb3cd2d740c4"} Nov 21 10:29:38 crc kubenswrapper[4972]: I1121 10:29:38.540509 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-284zp" event={"ID":"79521035-8282-43fd-9325-623a9e8d0a5e","Type":"ContainerStarted","Data":"3d73a6957ba45ec6079ff3245aceb334b6567f30297c152fa87bef972be1ab22"} Nov 21 10:29:42 crc kubenswrapper[4972]: I1121 10:29:42.488688 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fpppw"] Nov 21 10:29:42 crc kubenswrapper[4972]: I1121 10:29:42.491360 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fpppw" Nov 21 10:29:42 crc kubenswrapper[4972]: I1121 10:29:42.502275 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fpppw"] Nov 21 10:29:42 crc kubenswrapper[4972]: I1121 10:29:42.608742 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fd1cf9c-b97c-4739-a4d2-d289289ce97b-catalog-content\") pod \"redhat-operators-fpppw\" (UID: \"2fd1cf9c-b97c-4739-a4d2-d289289ce97b\") " pod="openshift-marketplace/redhat-operators-fpppw" Nov 21 10:29:42 crc kubenswrapper[4972]: I1121 10:29:42.608786 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcg2b\" (UniqueName: \"kubernetes.io/projected/2fd1cf9c-b97c-4739-a4d2-d289289ce97b-kube-api-access-lcg2b\") pod \"redhat-operators-fpppw\" (UID: \"2fd1cf9c-b97c-4739-a4d2-d289289ce97b\") " pod="openshift-marketplace/redhat-operators-fpppw" Nov 21 10:29:42 crc kubenswrapper[4972]: I1121 10:29:42.608878 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fd1cf9c-b97c-4739-a4d2-d289289ce97b-utilities\") pod \"redhat-operators-fpppw\" (UID: \"2fd1cf9c-b97c-4739-a4d2-d289289ce97b\") " pod="openshift-marketplace/redhat-operators-fpppw" Nov 21 10:29:42 crc kubenswrapper[4972]: I1121 10:29:42.709922 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fd1cf9c-b97c-4739-a4d2-d289289ce97b-catalog-content\") pod \"redhat-operators-fpppw\" (UID: \"2fd1cf9c-b97c-4739-a4d2-d289289ce97b\") " pod="openshift-marketplace/redhat-operators-fpppw" Nov 21 10:29:42 crc kubenswrapper[4972]: I1121 10:29:42.709967 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcg2b\" (UniqueName: \"kubernetes.io/projected/2fd1cf9c-b97c-4739-a4d2-d289289ce97b-kube-api-access-lcg2b\") pod \"redhat-operators-fpppw\" (UID: \"2fd1cf9c-b97c-4739-a4d2-d289289ce97b\") " pod="openshift-marketplace/redhat-operators-fpppw" Nov 21 10:29:42 crc kubenswrapper[4972]: I1121 10:29:42.710036 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fd1cf9c-b97c-4739-a4d2-d289289ce97b-utilities\") pod \"redhat-operators-fpppw\" (UID: \"2fd1cf9c-b97c-4739-a4d2-d289289ce97b\") " pod="openshift-marketplace/redhat-operators-fpppw" Nov 21 10:29:42 crc kubenswrapper[4972]: I1121 10:29:42.710554 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fd1cf9c-b97c-4739-a4d2-d289289ce97b-utilities\") pod \"redhat-operators-fpppw\" (UID: \"2fd1cf9c-b97c-4739-a4d2-d289289ce97b\") " pod="openshift-marketplace/redhat-operators-fpppw" Nov 21 10:29:42 crc kubenswrapper[4972]: I1121 10:29:42.710862 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fd1cf9c-b97c-4739-a4d2-d289289ce97b-catalog-content\") pod \"redhat-operators-fpppw\" (UID: \"2fd1cf9c-b97c-4739-a4d2-d289289ce97b\") " pod="openshift-marketplace/redhat-operators-fpppw" Nov 21 10:29:42 crc kubenswrapper[4972]: I1121 10:29:42.739401 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lcg2b\" (UniqueName: \"kubernetes.io/projected/2fd1cf9c-b97c-4739-a4d2-d289289ce97b-kube-api-access-lcg2b\") pod \"redhat-operators-fpppw\" (UID: \"2fd1cf9c-b97c-4739-a4d2-d289289ce97b\") " pod="openshift-marketplace/redhat-operators-fpppw" Nov 21 10:29:42 crc kubenswrapper[4972]: I1121 10:29:42.818443 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fpppw" Nov 21 10:29:43 crc kubenswrapper[4972]: W1121 10:29:43.246067 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fd1cf9c_b97c_4739_a4d2_d289289ce97b.slice/crio-248e6c53001b2ae849c62ef8ae23394296bc6f56e9eef3fc01010c4a8d0b10d2 WatchSource:0}: Error finding container 248e6c53001b2ae849c62ef8ae23394296bc6f56e9eef3fc01010c4a8d0b10d2: Status 404 returned error can't find the container with id 248e6c53001b2ae849c62ef8ae23394296bc6f56e9eef3fc01010c4a8d0b10d2 Nov 21 10:29:43 crc kubenswrapper[4972]: I1121 10:29:43.255344 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fpppw"] Nov 21 10:29:43 crc kubenswrapper[4972]: I1121 10:29:43.584946 4972 generic.go:334] "Generic (PLEG): container finished" podID="2fd1cf9c-b97c-4739-a4d2-d289289ce97b" containerID="5f4270bac2e918e897296b19ca82b34be4818b1638a75592342485e54fa5af04" exitCode=0 Nov 21 10:29:43 crc kubenswrapper[4972]: I1121 10:29:43.585030 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fpppw" event={"ID":"2fd1cf9c-b97c-4739-a4d2-d289289ce97b","Type":"ContainerDied","Data":"5f4270bac2e918e897296b19ca82b34be4818b1638a75592342485e54fa5af04"} Nov 21 10:29:43 crc kubenswrapper[4972]: I1121 10:29:43.585061 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fpppw" event={"ID":"2fd1cf9c-b97c-4739-a4d2-d289289ce97b","Type":"ContainerStarted","Data":"248e6c53001b2ae849c62ef8ae23394296bc6f56e9eef3fc01010c4a8d0b10d2"} Nov 21 10:29:43 crc kubenswrapper[4972]: I1121 10:29:43.586918 4972 generic.go:334] "Generic (PLEG): container finished" podID="79521035-8282-43fd-9325-623a9e8d0a5e" containerID="12dc4deb7e035eea57def52a4af5e22d3e51a152f28dc97a9207ce763c37b8db" exitCode=0 Nov 21 10:29:43 crc kubenswrapper[4972]: I1121 10:29:43.586944 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-284zp" event={"ID":"79521035-8282-43fd-9325-623a9e8d0a5e","Type":"ContainerDied","Data":"12dc4deb7e035eea57def52a4af5e22d3e51a152f28dc97a9207ce763c37b8db"} Nov 21 10:29:44 crc kubenswrapper[4972]: I1121 10:29:44.596131 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fpppw" event={"ID":"2fd1cf9c-b97c-4739-a4d2-d289289ce97b","Type":"ContainerStarted","Data":"4aded472205a3caf6d11ec4ee0fe8b074f1db5c45f4dd14000721ca562ada830"} Nov 21 10:29:44 crc kubenswrapper[4972]: I1121 10:29:44.599530 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-284zp" event={"ID":"79521035-8282-43fd-9325-623a9e8d0a5e","Type":"ContainerStarted","Data":"5e9b497987f3cb9303199fcb669f2a1fce4cf59cd24d9b9e42e2e8a17bc28dde"} Nov 21 10:29:44 crc kubenswrapper[4972]: I1121 10:29:44.633729 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-284zp" podStartSLOduration=2.156121213 podStartE2EDuration="7.633709831s" 
podCreationTimestamp="2025-11-21 10:29:37 +0000 UTC" firstStartedPulling="2025-11-21 10:29:38.544658968 +0000 UTC m=+2923.653801466" lastFinishedPulling="2025-11-21 10:29:44.022247566 +0000 UTC m=+2929.131390084" observedRunningTime="2025-11-21 10:29:44.629091658 +0000 UTC m=+2929.738234196" watchObservedRunningTime="2025-11-21 10:29:44.633709831 +0000 UTC m=+2929.742852329" Nov 21 10:29:45 crc kubenswrapper[4972]: I1121 10:29:45.608148 4972 generic.go:334] "Generic (PLEG): container finished" podID="2fd1cf9c-b97c-4739-a4d2-d289289ce97b" containerID="4aded472205a3caf6d11ec4ee0fe8b074f1db5c45f4dd14000721ca562ada830" exitCode=0 Nov 21 10:29:45 crc kubenswrapper[4972]: I1121 10:29:45.608235 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fpppw" event={"ID":"2fd1cf9c-b97c-4739-a4d2-d289289ce97b","Type":"ContainerDied","Data":"4aded472205a3caf6d11ec4ee0fe8b074f1db5c45f4dd14000721ca562ada830"} Nov 21 10:29:46 crc kubenswrapper[4972]: I1121 10:29:46.618447 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fpppw" event={"ID":"2fd1cf9c-b97c-4739-a4d2-d289289ce97b","Type":"ContainerStarted","Data":"29b8ba20649a54470d9ac97804e5bdebad3e90fad13d21ffe6dc8b7f2945bb94"} Nov 21 10:29:46 crc kubenswrapper[4972]: I1121 10:29:46.643810 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fpppw" podStartSLOduration=2.247646875 podStartE2EDuration="4.643786123s" podCreationTimestamp="2025-11-21 10:29:42 +0000 UTC" firstStartedPulling="2025-11-21 10:29:43.586597998 +0000 UTC m=+2928.695740496" lastFinishedPulling="2025-11-21 10:29:45.982737246 +0000 UTC m=+2931.091879744" observedRunningTime="2025-11-21 10:29:46.636307313 +0000 UTC m=+2931.745449821" watchObservedRunningTime="2025-11-21 10:29:46.643786123 +0000 UTC m=+2931.752928641" Nov 21 10:29:47 crc kubenswrapper[4972]: I1121 10:29:47.817708 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-284zp" Nov 21 10:29:47 crc kubenswrapper[4972]: I1121 10:29:47.818050 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-284zp" Nov 21 10:29:47 crc kubenswrapper[4972]: I1121 10:29:47.878043 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-284zp" Nov 21 10:29:52 crc kubenswrapper[4972]: I1121 10:29:52.819471 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fpppw" Nov 21 10:29:52 crc kubenswrapper[4972]: I1121 10:29:52.819732 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fpppw" Nov 21 10:29:52 crc kubenswrapper[4972]: I1121 10:29:52.897466 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fpppw" Nov 21 10:29:53 crc kubenswrapper[4972]: I1121 10:29:53.721542 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fpppw" Nov 21 10:29:53 crc kubenswrapper[4972]: I1121 10:29:53.876477 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fpppw"] Nov 21 10:29:55 crc kubenswrapper[4972]: I1121 10:29:55.688652 4972 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-fpppw" podUID="2fd1cf9c-b97c-4739-a4d2-d289289ce97b" containerName="registry-server" containerID="cri-o://29b8ba20649a54470d9ac97804e5bdebad3e90fad13d21ffe6dc8b7f2945bb94" gracePeriod=2 Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.179941 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.180369 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.195585 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fpppw" Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.329366 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fd1cf9c-b97c-4739-a4d2-d289289ce97b-utilities\") pod \"2fd1cf9c-b97c-4739-a4d2-d289289ce97b\" (UID: \"2fd1cf9c-b97c-4739-a4d2-d289289ce97b\") " Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.329463 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fd1cf9c-b97c-4739-a4d2-d289289ce97b-catalog-content\") pod \"2fd1cf9c-b97c-4739-a4d2-d289289ce97b\" (UID: \"2fd1cf9c-b97c-4739-a4d2-d289289ce97b\") " Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.329540 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcg2b\" (UniqueName: \"kubernetes.io/projected/2fd1cf9c-b97c-4739-a4d2-d289289ce97b-kube-api-access-lcg2b\") pod \"2fd1cf9c-b97c-4739-a4d2-d289289ce97b\" (UID: \"2fd1cf9c-b97c-4739-a4d2-d289289ce97b\") " Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.330605 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fd1cf9c-b97c-4739-a4d2-d289289ce97b-utilities" (OuterVolumeSpecName: "utilities") pod "2fd1cf9c-b97c-4739-a4d2-d289289ce97b" (UID: "2fd1cf9c-b97c-4739-a4d2-d289289ce97b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.334038 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fd1cf9c-b97c-4739-a4d2-d289289ce97b-kube-api-access-lcg2b" (OuterVolumeSpecName: "kube-api-access-lcg2b") pod "2fd1cf9c-b97c-4739-a4d2-d289289ce97b" (UID: "2fd1cf9c-b97c-4739-a4d2-d289289ce97b"). InnerVolumeSpecName "kube-api-access-lcg2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.430859 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcg2b\" (UniqueName: \"kubernetes.io/projected/2fd1cf9c-b97c-4739-a4d2-d289289ce97b-kube-api-access-lcg2b\") on node \"crc\" DevicePath \"\"" Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.430899 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fd1cf9c-b97c-4739-a4d2-d289289ce97b-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.454078 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fd1cf9c-b97c-4739-a4d2-d289289ce97b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fd1cf9c-b97c-4739-a4d2-d289289ce97b" (UID: "2fd1cf9c-b97c-4739-a4d2-d289289ce97b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.532436 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fd1cf9c-b97c-4739-a4d2-d289289ce97b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.703168 4972 generic.go:334] "Generic (PLEG): container finished" podID="2fd1cf9c-b97c-4739-a4d2-d289289ce97b" containerID="29b8ba20649a54470d9ac97804e5bdebad3e90fad13d21ffe6dc8b7f2945bb94" exitCode=0 Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.703240 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fpppw" Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.703233 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fpppw" event={"ID":"2fd1cf9c-b97c-4739-a4d2-d289289ce97b","Type":"ContainerDied","Data":"29b8ba20649a54470d9ac97804e5bdebad3e90fad13d21ffe6dc8b7f2945bb94"} Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.703399 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fpppw" event={"ID":"2fd1cf9c-b97c-4739-a4d2-d289289ce97b","Type":"ContainerDied","Data":"248e6c53001b2ae849c62ef8ae23394296bc6f56e9eef3fc01010c4a8d0b10d2"} Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.703432 4972 scope.go:117] "RemoveContainer" containerID="29b8ba20649a54470d9ac97804e5bdebad3e90fad13d21ffe6dc8b7f2945bb94" Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.746367 4972 scope.go:117] "RemoveContainer" containerID="4aded472205a3caf6d11ec4ee0fe8b074f1db5c45f4dd14000721ca562ada830" Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.749193 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fpppw"] Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.760176 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fpppw"] Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.773940 4972 scope.go:117] "RemoveContainer" containerID="5f4270bac2e918e897296b19ca82b34be4818b1638a75592342485e54fa5af04" Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.796333 4972 scope.go:117] "RemoveContainer" containerID="29b8ba20649a54470d9ac97804e5bdebad3e90fad13d21ffe6dc8b7f2945bb94" Nov 21 10:29:56 crc kubenswrapper[4972]: E1121 10:29:56.797088 4972 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29b8ba20649a54470d9ac97804e5bdebad3e90fad13d21ffe6dc8b7f2945bb94\": container with ID starting with 29b8ba20649a54470d9ac97804e5bdebad3e90fad13d21ffe6dc8b7f2945bb94 not found: ID does not exist" containerID="29b8ba20649a54470d9ac97804e5bdebad3e90fad13d21ffe6dc8b7f2945bb94" Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.797149 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29b8ba20649a54470d9ac97804e5bdebad3e90fad13d21ffe6dc8b7f2945bb94"} err="failed to get container status \"29b8ba20649a54470d9ac97804e5bdebad3e90fad13d21ffe6dc8b7f2945bb94\": rpc error: code = NotFound desc = could not find container \"29b8ba20649a54470d9ac97804e5bdebad3e90fad13d21ffe6dc8b7f2945bb94\": container with ID starting with 29b8ba20649a54470d9ac97804e5bdebad3e90fad13d21ffe6dc8b7f2945bb94 not found: ID does not exist" Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.797184 4972 scope.go:117] "RemoveContainer" containerID="4aded472205a3caf6d11ec4ee0fe8b074f1db5c45f4dd14000721ca562ada830" Nov 21 10:29:56 crc kubenswrapper[4972]: E1121 10:29:56.798026 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aded472205a3caf6d11ec4ee0fe8b074f1db5c45f4dd14000721ca562ada830\": container with ID starting with 4aded472205a3caf6d11ec4ee0fe8b074f1db5c45f4dd14000721ca562ada830 not found: ID does not exist" containerID="4aded472205a3caf6d11ec4ee0fe8b074f1db5c45f4dd14000721ca562ada830" Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.798083 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aded472205a3caf6d11ec4ee0fe8b074f1db5c45f4dd14000721ca562ada830"} err="failed to get container status \"4aded472205a3caf6d11ec4ee0fe8b074f1db5c45f4dd14000721ca562ada830\": rpc error: code = NotFound desc = could not find container \"4aded472205a3caf6d11ec4ee0fe8b074f1db5c45f4dd14000721ca562ada830\": container with ID starting with 4aded472205a3caf6d11ec4ee0fe8b074f1db5c45f4dd14000721ca562ada830 not found: ID does not exist" Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.798120 4972 scope.go:117] "RemoveContainer" containerID="5f4270bac2e918e897296b19ca82b34be4818b1638a75592342485e54fa5af04" Nov 21 10:29:56 crc kubenswrapper[4972]: E1121 10:29:56.798621 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f4270bac2e918e897296b19ca82b34be4818b1638a75592342485e54fa5af04\": container with ID starting with 5f4270bac2e918e897296b19ca82b34be4818b1638a75592342485e54fa5af04 not found: ID does not exist" containerID="5f4270bac2e918e897296b19ca82b34be4818b1638a75592342485e54fa5af04" Nov 21 10:29:56 crc kubenswrapper[4972]: I1121 10:29:56.798652 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f4270bac2e918e897296b19ca82b34be4818b1638a75592342485e54fa5af04"} err="failed to get container status \"5f4270bac2e918e897296b19ca82b34be4818b1638a75592342485e54fa5af04\": rpc error: code = NotFound desc = could not find container \"5f4270bac2e918e897296b19ca82b34be4818b1638a75592342485e54fa5af04\": container with ID starting with 5f4270bac2e918e897296b19ca82b34be4818b1638a75592342485e54fa5af04 not found: ID does not exist" Nov 21 10:29:57 crc kubenswrapper[4972]: I1121 10:29:57.774220 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="2fd1cf9c-b97c-4739-a4d2-d289289ce97b" path="/var/lib/kubelet/pods/2fd1cf9c-b97c-4739-a4d2-d289289ce97b/volumes" Nov 21 10:29:57 crc kubenswrapper[4972]: I1121 10:29:57.878660 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-284zp" Nov 21 10:29:59 crc kubenswrapper[4972]: I1121 10:29:59.123252 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-284zp"] Nov 21 10:29:59 crc kubenswrapper[4972]: I1121 10:29:59.276118 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-npssb"] Nov 21 10:29:59 crc kubenswrapper[4972]: I1121 10:29:59.276400 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-npssb" podUID="ad421b56-53fa-4ad2-9233-eb8e2d72be57" containerName="registry-server" containerID="cri-o://83c67644502ce19d54478a37737d508376785bcebc1aec6a5cd2f67119daca33" gracePeriod=2 Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.189083 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395350-m99lw"] Nov 21 10:30:00 crc kubenswrapper[4972]: E1121 10:30:00.190748 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fd1cf9c-b97c-4739-a4d2-d289289ce97b" containerName="extract-content" Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.190959 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fd1cf9c-b97c-4739-a4d2-d289289ce97b" containerName="extract-content" Nov 21 10:30:00 crc kubenswrapper[4972]: E1121 10:30:00.191085 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fd1cf9c-b97c-4739-a4d2-d289289ce97b" containerName="registry-server" Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.191208 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fd1cf9c-b97c-4739-a4d2-d289289ce97b" containerName="registry-server" Nov 21 10:30:00 crc kubenswrapper[4972]: E1121 10:30:00.191378 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fd1cf9c-b97c-4739-a4d2-d289289ce97b" containerName="extract-utilities" Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.191483 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fd1cf9c-b97c-4739-a4d2-d289289ce97b" containerName="extract-utilities" Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.191889 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fd1cf9c-b97c-4739-a4d2-d289289ce97b" containerName="registry-server" Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.192736 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395350-m99lw" Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.195346 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.195653 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.203529 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395350-m99lw"] Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.286616 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc98d\" (UniqueName: \"kubernetes.io/projected/dba89cec-c76c-4040-8da1-81f2a55f0332-kube-api-access-kc98d\") pod \"collect-profiles-29395350-m99lw\" (UID: \"dba89cec-c76c-4040-8da1-81f2a55f0332\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395350-m99lw" Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.286941 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dba89cec-c76c-4040-8da1-81f2a55f0332-config-volume\") pod \"collect-profiles-29395350-m99lw\" (UID: \"dba89cec-c76c-4040-8da1-81f2a55f0332\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395350-m99lw" Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.287051 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dba89cec-c76c-4040-8da1-81f2a55f0332-secret-volume\") pod \"collect-profiles-29395350-m99lw\" (UID: \"dba89cec-c76c-4040-8da1-81f2a55f0332\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395350-m99lw" Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.388211 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc98d\" (UniqueName: \"kubernetes.io/projected/dba89cec-c76c-4040-8da1-81f2a55f0332-kube-api-access-kc98d\") pod \"collect-profiles-29395350-m99lw\" (UID: \"dba89cec-c76c-4040-8da1-81f2a55f0332\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395350-m99lw" Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.388259 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dba89cec-c76c-4040-8da1-81f2a55f0332-config-volume\") pod \"collect-profiles-29395350-m99lw\" (UID: \"dba89cec-c76c-4040-8da1-81f2a55f0332\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395350-m99lw" Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.388289 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dba89cec-c76c-4040-8da1-81f2a55f0332-secret-volume\") pod \"collect-profiles-29395350-m99lw\" (UID: \"dba89cec-c76c-4040-8da1-81f2a55f0332\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395350-m99lw" Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.390360 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dba89cec-c76c-4040-8da1-81f2a55f0332-config-volume\") pod 
\"collect-profiles-29395350-m99lw\" (UID: \"dba89cec-c76c-4040-8da1-81f2a55f0332\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395350-m99lw" Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.396029 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dba89cec-c76c-4040-8da1-81f2a55f0332-secret-volume\") pod \"collect-profiles-29395350-m99lw\" (UID: \"dba89cec-c76c-4040-8da1-81f2a55f0332\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395350-m99lw" Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.419229 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc98d\" (UniqueName: \"kubernetes.io/projected/dba89cec-c76c-4040-8da1-81f2a55f0332-kube-api-access-kc98d\") pod \"collect-profiles-29395350-m99lw\" (UID: \"dba89cec-c76c-4040-8da1-81f2a55f0332\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395350-m99lw" Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.519489 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395350-m99lw" Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.734140 4972 generic.go:334] "Generic (PLEG): container finished" podID="ad421b56-53fa-4ad2-9233-eb8e2d72be57" containerID="83c67644502ce19d54478a37737d508376785bcebc1aec6a5cd2f67119daca33" exitCode=0 Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.734198 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npssb" event={"ID":"ad421b56-53fa-4ad2-9233-eb8e2d72be57","Type":"ContainerDied","Data":"83c67644502ce19d54478a37737d508376785bcebc1aec6a5cd2f67119daca33"} Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.842485 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-npssb" Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.999350 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad421b56-53fa-4ad2-9233-eb8e2d72be57-utilities\") pod \"ad421b56-53fa-4ad2-9233-eb8e2d72be57\" (UID: \"ad421b56-53fa-4ad2-9233-eb8e2d72be57\") " Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.999423 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad421b56-53fa-4ad2-9233-eb8e2d72be57-catalog-content\") pod \"ad421b56-53fa-4ad2-9233-eb8e2d72be57\" (UID: \"ad421b56-53fa-4ad2-9233-eb8e2d72be57\") " Nov 21 10:30:00 crc kubenswrapper[4972]: I1121 10:30:00.999469 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ndtx\" (UniqueName: \"kubernetes.io/projected/ad421b56-53fa-4ad2-9233-eb8e2d72be57-kube-api-access-5ndtx\") pod \"ad421b56-53fa-4ad2-9233-eb8e2d72be57\" (UID: \"ad421b56-53fa-4ad2-9233-eb8e2d72be57\") " Nov 21 10:30:01 crc kubenswrapper[4972]: I1121 10:30:01.000101 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad421b56-53fa-4ad2-9233-eb8e2d72be57-utilities" (OuterVolumeSpecName: "utilities") pod "ad421b56-53fa-4ad2-9233-eb8e2d72be57" (UID: "ad421b56-53fa-4ad2-9233-eb8e2d72be57"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:30:01 crc kubenswrapper[4972]: I1121 10:30:01.004484 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad421b56-53fa-4ad2-9233-eb8e2d72be57-kube-api-access-5ndtx" (OuterVolumeSpecName: "kube-api-access-5ndtx") pod "ad421b56-53fa-4ad2-9233-eb8e2d72be57" (UID: "ad421b56-53fa-4ad2-9233-eb8e2d72be57"). InnerVolumeSpecName "kube-api-access-5ndtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:30:01 crc kubenswrapper[4972]: I1121 10:30:01.040266 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad421b56-53fa-4ad2-9233-eb8e2d72be57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad421b56-53fa-4ad2-9233-eb8e2d72be57" (UID: "ad421b56-53fa-4ad2-9233-eb8e2d72be57"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:30:01 crc kubenswrapper[4972]: I1121 10:30:01.101255 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ndtx\" (UniqueName: \"kubernetes.io/projected/ad421b56-53fa-4ad2-9233-eb8e2d72be57-kube-api-access-5ndtx\") on node \"crc\" DevicePath \"\"" Nov 21 10:30:01 crc kubenswrapper[4972]: I1121 10:30:01.101300 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad421b56-53fa-4ad2-9233-eb8e2d72be57-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 10:30:01 crc kubenswrapper[4972]: I1121 10:30:01.101314 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad421b56-53fa-4ad2-9233-eb8e2d72be57-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 10:30:01 crc kubenswrapper[4972]: I1121 10:30:01.104280 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395350-m99lw"] Nov 21 10:30:01 crc kubenswrapper[4972]: W1121 10:30:01.108200 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddba89cec_c76c_4040_8da1_81f2a55f0332.slice/crio-0699fbee5c03e0b06f46a27ba8781fcedd827a8dbac073e1f99a284fb2d51323 WatchSource:0}: Error finding container 0699fbee5c03e0b06f46a27ba8781fcedd827a8dbac073e1f99a284fb2d51323: Status 404 returned error can't find the container with id 0699fbee5c03e0b06f46a27ba8781fcedd827a8dbac073e1f99a284fb2d51323 Nov 21 10:30:01 crc kubenswrapper[4972]: I1121 10:30:01.745778 4972 generic.go:334] "Generic (PLEG): container finished" podID="dba89cec-c76c-4040-8da1-81f2a55f0332" containerID="64a2fa7533a43efce4a82578aae37efa3366ee75d91fab451c5dbdfc68b0033c" exitCode=0 Nov 21 10:30:01 crc kubenswrapper[4972]: I1121 10:30:01.745906 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395350-m99lw" event={"ID":"dba89cec-c76c-4040-8da1-81f2a55f0332","Type":"ContainerDied","Data":"64a2fa7533a43efce4a82578aae37efa3366ee75d91fab451c5dbdfc68b0033c"} Nov 21 10:30:01 crc kubenswrapper[4972]: I1121 10:30:01.746364 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395350-m99lw" event={"ID":"dba89cec-c76c-4040-8da1-81f2a55f0332","Type":"ContainerStarted","Data":"0699fbee5c03e0b06f46a27ba8781fcedd827a8dbac073e1f99a284fb2d51323"} Nov 21 10:30:01 crc kubenswrapper[4972]: I1121 10:30:01.749566 4972 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-npssb" event={"ID":"ad421b56-53fa-4ad2-9233-eb8e2d72be57","Type":"ContainerDied","Data":"69f71dc141ff7a0bd6e131b72d0fca0d4a435bb7950a121639584d685d1e7c47"} Nov 21 10:30:01 crc kubenswrapper[4972]: I1121 10:30:01.749622 4972 scope.go:117] "RemoveContainer" containerID="83c67644502ce19d54478a37737d508376785bcebc1aec6a5cd2f67119daca33" Nov 21 10:30:01 crc kubenswrapper[4972]: I1121 10:30:01.749667 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-npssb" Nov 21 10:30:01 crc kubenswrapper[4972]: I1121 10:30:01.778650 4972 scope.go:117] "RemoveContainer" containerID="5ca26177fe1524348c54e8be338d8198dc6dd2f1d3f688f8d5d326c4d0ad0085" Nov 21 10:30:01 crc kubenswrapper[4972]: I1121 10:30:01.793350 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-npssb"] Nov 21 10:30:01 crc kubenswrapper[4972]: I1121 10:30:01.798318 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-npssb"] Nov 21 10:30:01 crc kubenswrapper[4972]: I1121 10:30:01.824403 4972 scope.go:117] "RemoveContainer" containerID="27ab6a653d3711c9a9fa1d4befc39f2511aac3649025dd5b8b0c856d4587774f" Nov 21 10:30:03 crc kubenswrapper[4972]: I1121 10:30:03.040177 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395350-m99lw" Nov 21 10:30:03 crc kubenswrapper[4972]: I1121 10:30:03.132155 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc98d\" (UniqueName: \"kubernetes.io/projected/dba89cec-c76c-4040-8da1-81f2a55f0332-kube-api-access-kc98d\") pod \"dba89cec-c76c-4040-8da1-81f2a55f0332\" (UID: \"dba89cec-c76c-4040-8da1-81f2a55f0332\") " Nov 21 10:30:03 crc kubenswrapper[4972]: I1121 10:30:03.132336 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dba89cec-c76c-4040-8da1-81f2a55f0332-secret-volume\") pod \"dba89cec-c76c-4040-8da1-81f2a55f0332\" (UID: \"dba89cec-c76c-4040-8da1-81f2a55f0332\") " Nov 21 10:30:03 crc kubenswrapper[4972]: I1121 10:30:03.132375 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dba89cec-c76c-4040-8da1-81f2a55f0332-config-volume\") pod \"dba89cec-c76c-4040-8da1-81f2a55f0332\" (UID: \"dba89cec-c76c-4040-8da1-81f2a55f0332\") " Nov 21 10:30:03 crc kubenswrapper[4972]: I1121 10:30:03.133142 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dba89cec-c76c-4040-8da1-81f2a55f0332-config-volume" (OuterVolumeSpecName: "config-volume") pod "dba89cec-c76c-4040-8da1-81f2a55f0332" (UID: "dba89cec-c76c-4040-8da1-81f2a55f0332"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:30:03 crc kubenswrapper[4972]: I1121 10:30:03.137589 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dba89cec-c76c-4040-8da1-81f2a55f0332-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dba89cec-c76c-4040-8da1-81f2a55f0332" (UID: "dba89cec-c76c-4040-8da1-81f2a55f0332"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:30:03 crc kubenswrapper[4972]: I1121 10:30:03.137958 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dba89cec-c76c-4040-8da1-81f2a55f0332-kube-api-access-kc98d" (OuterVolumeSpecName: "kube-api-access-kc98d") pod "dba89cec-c76c-4040-8da1-81f2a55f0332" (UID: "dba89cec-c76c-4040-8da1-81f2a55f0332"). InnerVolumeSpecName "kube-api-access-kc98d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:30:03 crc kubenswrapper[4972]: I1121 10:30:03.233982 4972 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dba89cec-c76c-4040-8da1-81f2a55f0332-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 10:30:03 crc kubenswrapper[4972]: I1121 10:30:03.234041 4972 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dba89cec-c76c-4040-8da1-81f2a55f0332-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 10:30:03 crc kubenswrapper[4972]: I1121 10:30:03.234058 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc98d\" (UniqueName: \"kubernetes.io/projected/dba89cec-c76c-4040-8da1-81f2a55f0332-kube-api-access-kc98d\") on node \"crc\" DevicePath \"\"" Nov 21 10:30:03 crc kubenswrapper[4972]: I1121 10:30:03.771033 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395350-m99lw" Nov 21 10:30:03 crc kubenswrapper[4972]: I1121 10:30:03.776813 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad421b56-53fa-4ad2-9233-eb8e2d72be57" path="/var/lib/kubelet/pods/ad421b56-53fa-4ad2-9233-eb8e2d72be57/volumes" Nov 21 10:30:03 crc kubenswrapper[4972]: I1121 10:30:03.778955 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395350-m99lw" event={"ID":"dba89cec-c76c-4040-8da1-81f2a55f0332","Type":"ContainerDied","Data":"0699fbee5c03e0b06f46a27ba8781fcedd827a8dbac073e1f99a284fb2d51323"} Nov 21 10:30:03 crc kubenswrapper[4972]: I1121 10:30:03.779014 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0699fbee5c03e0b06f46a27ba8781fcedd827a8dbac073e1f99a284fb2d51323" Nov 21 10:30:04 crc kubenswrapper[4972]: I1121 10:30:04.135018 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5"] Nov 21 10:30:04 crc kubenswrapper[4972]: I1121 10:30:04.142950 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395305-8qkq5"] Nov 21 10:30:05 crc kubenswrapper[4972]: I1121 10:30:05.772290 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46165fda-5884-4b43-b8fd-533eec95f753" path="/var/lib/kubelet/pods/46165fda-5884-4b43-b8fd-533eec95f753/volumes" Nov 21 10:30:24 crc kubenswrapper[4972]: I1121 10:30:24.548513 4972 scope.go:117] "RemoveContainer" containerID="3fd33b2e5d75e40284819912341bede747a8a0b6687db9b6f0b96da3ad485d06" Nov 21 10:30:26 crc kubenswrapper[4972]: I1121 10:30:26.179105 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:30:26 crc 
kubenswrapper[4972]: I1121 10:30:26.179504 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:30:26 crc kubenswrapper[4972]: I1121 10:30:26.179573 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 10:30:26 crc kubenswrapper[4972]: I1121 10:30:26.180411 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 10:30:26 crc kubenswrapper[4972]: I1121 10:30:26.180511 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" gracePeriod=600 Nov 21 10:30:26 crc kubenswrapper[4972]: E1121 10:30:26.302140 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:30:27 crc kubenswrapper[4972]: I1121 10:30:27.291130 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" exitCode=0 Nov 21 10:30:27 crc kubenswrapper[4972]: I1121 10:30:27.291197 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3"} Nov 21 10:30:27 crc kubenswrapper[4972]: I1121 10:30:27.291238 4972 scope.go:117] "RemoveContainer" containerID="24d92b3bf92e9fc4d14b9d9ef3388e6944ce616d9fa1664affe8230a14779b65" Nov 21 10:30:27 crc kubenswrapper[4972]: I1121 10:30:27.291908 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:30:27 crc kubenswrapper[4972]: E1121 10:30:27.292481 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:30:39 crc kubenswrapper[4972]: I1121 10:30:39.760283 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:30:39 crc 
kubenswrapper[4972]: E1121 10:30:39.761424 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:30:53 crc kubenswrapper[4972]: I1121 10:30:53.760101 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:30:53 crc kubenswrapper[4972]: E1121 10:30:53.761133 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:31:06 crc kubenswrapper[4972]: I1121 10:31:06.759801 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:31:06 crc kubenswrapper[4972]: E1121 10:31:06.760734 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:31:18 crc kubenswrapper[4972]: I1121 10:31:18.760322 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:31:18 crc kubenswrapper[4972]: E1121 10:31:18.762069 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:31:32 crc kubenswrapper[4972]: I1121 10:31:32.759273 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:31:32 crc kubenswrapper[4972]: E1121 10:31:32.760479 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:31:46 crc kubenswrapper[4972]: I1121 10:31:46.763064 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:31:46 crc kubenswrapper[4972]: E1121 10:31:46.764405 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:31:59 crc kubenswrapper[4972]: I1121 10:31:59.760116 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:31:59 crc kubenswrapper[4972]: E1121 10:31:59.761368 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:32:14 crc kubenswrapper[4972]: I1121 10:32:14.765218 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:32:14 crc kubenswrapper[4972]: E1121 10:32:14.766603 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:32:25 crc kubenswrapper[4972]: I1121 10:32:25.773380 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:32:25 crc kubenswrapper[4972]: E1121 10:32:25.774672 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:32:37 crc kubenswrapper[4972]: I1121 10:32:37.760193 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:32:37 crc kubenswrapper[4972]: E1121 10:32:37.780369 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:32:49 crc kubenswrapper[4972]: I1121 10:32:49.759627 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:32:49 crc kubenswrapper[4972]: E1121 10:32:49.760479 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:33:00 crc kubenswrapper[4972]: I1121 10:33:00.758964 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:33:00 crc kubenswrapper[4972]: E1121 10:33:00.759933 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:33:14 crc kubenswrapper[4972]: I1121 10:33:14.759618 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:33:14 crc kubenswrapper[4972]: E1121 10:33:14.762007 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:33:25 crc kubenswrapper[4972]: I1121 10:33:25.768210 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:33:25 crc kubenswrapper[4972]: E1121 10:33:25.769015 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:33:40 crc kubenswrapper[4972]: I1121 10:33:40.759632 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:33:40 crc kubenswrapper[4972]: E1121 10:33:40.762189 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:33:53 crc kubenswrapper[4972]: I1121 10:33:53.762475 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:33:53 crc kubenswrapper[4972]: E1121 10:33:53.763780 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:34:05 crc kubenswrapper[4972]: I1121 10:34:05.776986 4972 
scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:34:05 crc kubenswrapper[4972]: E1121 10:34:05.778123 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:34:19 crc kubenswrapper[4972]: I1121 10:34:19.759642 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:34:19 crc kubenswrapper[4972]: E1121 10:34:19.762793 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:34:33 crc kubenswrapper[4972]: I1121 10:34:33.760583 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:34:33 crc kubenswrapper[4972]: E1121 10:34:33.762148 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:34:47 crc kubenswrapper[4972]: I1121 10:34:47.759859 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:34:47 crc kubenswrapper[4972]: E1121 10:34:47.760713 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:34:58 crc kubenswrapper[4972]: I1121 10:34:58.759362 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:34:58 crc kubenswrapper[4972]: E1121 10:34:58.760531 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:35:10 crc kubenswrapper[4972]: I1121 10:35:10.760444 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:35:10 crc kubenswrapper[4972]: E1121 10:35:10.761346 4972 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:35:22 crc kubenswrapper[4972]: I1121 10:35:22.760113 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:35:22 crc kubenswrapper[4972]: E1121 10:35:22.760904 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:35:34 crc kubenswrapper[4972]: I1121 10:35:34.760506 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:35:35 crc kubenswrapper[4972]: I1121 10:35:35.813276 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"225c6fb8621cb392ec0e598de3ee96cb68c85779dcaa25a6d6bc0a8f25492454"} Nov 21 10:37:56 crc kubenswrapper[4972]: I1121 10:37:56.179050 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:37:56 crc kubenswrapper[4972]: I1121 10:37:56.179792 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:38:26 crc kubenswrapper[4972]: I1121 10:38:26.179709 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:38:26 crc kubenswrapper[4972]: I1121 10:38:26.180685 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:38:56 crc kubenswrapper[4972]: I1121 10:38:56.178991 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:38:56 crc kubenswrapper[4972]: I1121 10:38:56.180011 4972 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:38:56 crc kubenswrapper[4972]: I1121 10:38:56.180092 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 10:38:56 crc kubenswrapper[4972]: I1121 10:38:56.180920 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"225c6fb8621cb392ec0e598de3ee96cb68c85779dcaa25a6d6bc0a8f25492454"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 10:38:56 crc kubenswrapper[4972]: I1121 10:38:56.181007 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://225c6fb8621cb392ec0e598de3ee96cb68c85779dcaa25a6d6bc0a8f25492454" gracePeriod=600 Nov 21 10:38:56 crc kubenswrapper[4972]: I1121 10:38:56.828296 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="225c6fb8621cb392ec0e598de3ee96cb68c85779dcaa25a6d6bc0a8f25492454" exitCode=0 Nov 21 10:38:56 crc kubenswrapper[4972]: I1121 10:38:56.828380 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"225c6fb8621cb392ec0e598de3ee96cb68c85779dcaa25a6d6bc0a8f25492454"} Nov 21 10:38:56 crc kubenswrapper[4972]: I1121 10:38:56.828711 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc"} Nov 21 10:38:56 crc kubenswrapper[4972]: I1121 10:38:56.828769 4972 scope.go:117] "RemoveContainer" containerID="5bc2420af75d9674af983db0d64dc7c36a5928b9cf578480dde028d1a11e78e3" Nov 21 10:40:56 crc kubenswrapper[4972]: I1121 10:40:56.179095 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:40:56 crc kubenswrapper[4972]: I1121 10:40:56.181033 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:41:26 crc kubenswrapper[4972]: I1121 10:41:26.179452 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 
10:41:26 crc kubenswrapper[4972]: I1121 10:41:26.180135 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:41:56 crc kubenswrapper[4972]: I1121 10:41:56.179221 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:41:56 crc kubenswrapper[4972]: I1121 10:41:56.180063 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:41:56 crc kubenswrapper[4972]: I1121 10:41:56.180138 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 10:41:56 crc kubenswrapper[4972]: I1121 10:41:56.180957 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 10:41:56 crc kubenswrapper[4972]: I1121 10:41:56.181042 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" gracePeriod=600 Nov 21 10:41:56 crc kubenswrapper[4972]: E1121 10:41:56.319420 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:41:56 crc kubenswrapper[4972]: I1121 10:41:56.610955 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" exitCode=0 Nov 21 10:41:56 crc kubenswrapper[4972]: I1121 10:41:56.611020 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc"} Nov 21 10:41:56 crc kubenswrapper[4972]: I1121 10:41:56.611068 4972 scope.go:117] "RemoveContainer" containerID="225c6fb8621cb392ec0e598de3ee96cb68c85779dcaa25a6d6bc0a8f25492454" Nov 21 10:41:56 crc kubenswrapper[4972]: I1121 10:41:56.611905 4972 scope.go:117] "RemoveContainer" 
containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:41:56 crc kubenswrapper[4972]: E1121 10:41:56.612389 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:42:06 crc kubenswrapper[4972]: I1121 10:42:06.759997 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:42:06 crc kubenswrapper[4972]: E1121 10:42:06.761165 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:42:21 crc kubenswrapper[4972]: I1121 10:42:21.760142 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:42:21 crc kubenswrapper[4972]: E1121 10:42:21.761986 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:42:36 crc kubenswrapper[4972]: I1121 10:42:36.759409 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:42:36 crc kubenswrapper[4972]: E1121 10:42:36.760431 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:42:48 crc kubenswrapper[4972]: I1121 10:42:48.759950 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:42:48 crc kubenswrapper[4972]: E1121 10:42:48.761175 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:42:59 crc kubenswrapper[4972]: I1121 10:42:59.759363 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:42:59 crc kubenswrapper[4972]: E1121 10:42:59.760182 4972 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:43:14 crc kubenswrapper[4972]: I1121 10:43:14.759352 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:43:14 crc kubenswrapper[4972]: E1121 10:43:14.760273 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:43:27 crc kubenswrapper[4972]: I1121 10:43:27.759856 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:43:27 crc kubenswrapper[4972]: E1121 10:43:27.760631 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:43:42 crc kubenswrapper[4972]: I1121 10:43:42.759267 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:43:42 crc kubenswrapper[4972]: E1121 10:43:42.759950 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:43:57 crc kubenswrapper[4972]: I1121 10:43:57.760243 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:43:57 crc kubenswrapper[4972]: E1121 10:43:57.761423 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:44:12 crc kubenswrapper[4972]: I1121 10:44:12.761274 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:44:12 crc kubenswrapper[4972]: E1121 10:44:12.762766 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:44:26 crc kubenswrapper[4972]: I1121 10:44:26.759885 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:44:26 crc kubenswrapper[4972]: E1121 10:44:26.761685 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:44:36 crc kubenswrapper[4972]: I1121 10:44:36.858221 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gp7s5"] Nov 21 10:44:36 crc kubenswrapper[4972]: E1121 10:44:36.859294 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad421b56-53fa-4ad2-9233-eb8e2d72be57" containerName="extract-content" Nov 21 10:44:36 crc kubenswrapper[4972]: I1121 10:44:36.859314 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad421b56-53fa-4ad2-9233-eb8e2d72be57" containerName="extract-content" Nov 21 10:44:36 crc kubenswrapper[4972]: E1121 10:44:36.859340 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad421b56-53fa-4ad2-9233-eb8e2d72be57" containerName="extract-utilities" Nov 21 10:44:36 crc kubenswrapper[4972]: I1121 10:44:36.859352 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad421b56-53fa-4ad2-9233-eb8e2d72be57" containerName="extract-utilities" Nov 21 10:44:36 crc kubenswrapper[4972]: E1121 10:44:36.859384 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dba89cec-c76c-4040-8da1-81f2a55f0332" containerName="collect-profiles" Nov 21 10:44:36 crc kubenswrapper[4972]: I1121 10:44:36.859397 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="dba89cec-c76c-4040-8da1-81f2a55f0332" containerName="collect-profiles" Nov 21 10:44:36 crc kubenswrapper[4972]: E1121 10:44:36.859416 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad421b56-53fa-4ad2-9233-eb8e2d72be57" containerName="registry-server" Nov 21 10:44:36 crc kubenswrapper[4972]: I1121 10:44:36.859428 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad421b56-53fa-4ad2-9233-eb8e2d72be57" containerName="registry-server" Nov 21 10:44:36 crc kubenswrapper[4972]: I1121 10:44:36.859672 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="dba89cec-c76c-4040-8da1-81f2a55f0332" containerName="collect-profiles" Nov 21 10:44:36 crc kubenswrapper[4972]: I1121 10:44:36.859700 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad421b56-53fa-4ad2-9233-eb8e2d72be57" containerName="registry-server" Nov 21 10:44:36 crc kubenswrapper[4972]: I1121 10:44:36.861418 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gp7s5" Nov 21 10:44:36 crc kubenswrapper[4972]: I1121 10:44:36.890528 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gp7s5"] Nov 21 10:44:36 crc kubenswrapper[4972]: I1121 10:44:36.958753 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38ace700-905e-467b-b2c0-42b2f62a15b4-utilities\") pod \"certified-operators-gp7s5\" (UID: \"38ace700-905e-467b-b2c0-42b2f62a15b4\") " pod="openshift-marketplace/certified-operators-gp7s5" Nov 21 10:44:36 crc kubenswrapper[4972]: I1121 10:44:36.958847 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38ace700-905e-467b-b2c0-42b2f62a15b4-catalog-content\") pod \"certified-operators-gp7s5\" (UID: \"38ace700-905e-467b-b2c0-42b2f62a15b4\") " pod="openshift-marketplace/certified-operators-gp7s5" Nov 21 10:44:36 crc kubenswrapper[4972]: I1121 10:44:36.958893 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vq7h\" (UniqueName: \"kubernetes.io/projected/38ace700-905e-467b-b2c0-42b2f62a15b4-kube-api-access-6vq7h\") pod \"certified-operators-gp7s5\" (UID: \"38ace700-905e-467b-b2c0-42b2f62a15b4\") " pod="openshift-marketplace/certified-operators-gp7s5" Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.060385 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38ace700-905e-467b-b2c0-42b2f62a15b4-utilities\") pod \"certified-operators-gp7s5\" (UID: \"38ace700-905e-467b-b2c0-42b2f62a15b4\") " pod="openshift-marketplace/certified-operators-gp7s5" Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.060450 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38ace700-905e-467b-b2c0-42b2f62a15b4-catalog-content\") pod \"certified-operators-gp7s5\" (UID: \"38ace700-905e-467b-b2c0-42b2f62a15b4\") " pod="openshift-marketplace/certified-operators-gp7s5" Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.060491 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vq7h\" (UniqueName: \"kubernetes.io/projected/38ace700-905e-467b-b2c0-42b2f62a15b4-kube-api-access-6vq7h\") pod \"certified-operators-gp7s5\" (UID: \"38ace700-905e-467b-b2c0-42b2f62a15b4\") " pod="openshift-marketplace/certified-operators-gp7s5" Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.061301 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38ace700-905e-467b-b2c0-42b2f62a15b4-utilities\") pod \"certified-operators-gp7s5\" (UID: \"38ace700-905e-467b-b2c0-42b2f62a15b4\") " pod="openshift-marketplace/certified-operators-gp7s5" Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.061328 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38ace700-905e-467b-b2c0-42b2f62a15b4-catalog-content\") pod \"certified-operators-gp7s5\" (UID: \"38ace700-905e-467b-b2c0-42b2f62a15b4\") " pod="openshift-marketplace/certified-operators-gp7s5" Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.098686 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6vq7h\" (UniqueName: \"kubernetes.io/projected/38ace700-905e-467b-b2c0-42b2f62a15b4-kube-api-access-6vq7h\") pod \"certified-operators-gp7s5\" (UID: \"38ace700-905e-467b-b2c0-42b2f62a15b4\") " pod="openshift-marketplace/certified-operators-gp7s5" Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.192951 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gp7s5" Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.446130 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fhnp6"] Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.466900 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fhnp6" Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.480070 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fhnp6"] Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.568863 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/319a09ca-a083-4445-b93d-033057a1949e-catalog-content\") pod \"community-operators-fhnp6\" (UID: \"319a09ca-a083-4445-b93d-033057a1949e\") " pod="openshift-marketplace/community-operators-fhnp6" Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.568942 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnc2g\" (UniqueName: \"kubernetes.io/projected/319a09ca-a083-4445-b93d-033057a1949e-kube-api-access-fnc2g\") pod \"community-operators-fhnp6\" (UID: \"319a09ca-a083-4445-b93d-033057a1949e\") " pod="openshift-marketplace/community-operators-fhnp6" Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.568978 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/319a09ca-a083-4445-b93d-033057a1949e-utilities\") pod \"community-operators-fhnp6\" (UID: \"319a09ca-a083-4445-b93d-033057a1949e\") " pod="openshift-marketplace/community-operators-fhnp6" Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.670055 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/319a09ca-a083-4445-b93d-033057a1949e-catalog-content\") pod \"community-operators-fhnp6\" (UID: \"319a09ca-a083-4445-b93d-033057a1949e\") " pod="openshift-marketplace/community-operators-fhnp6" Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.670138 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnc2g\" (UniqueName: \"kubernetes.io/projected/319a09ca-a083-4445-b93d-033057a1949e-kube-api-access-fnc2g\") pod \"community-operators-fhnp6\" (UID: \"319a09ca-a083-4445-b93d-033057a1949e\") " pod="openshift-marketplace/community-operators-fhnp6" Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.670170 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/319a09ca-a083-4445-b93d-033057a1949e-utilities\") pod \"community-operators-fhnp6\" (UID: \"319a09ca-a083-4445-b93d-033057a1949e\") " pod="openshift-marketplace/community-operators-fhnp6" Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.670764 4972 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/319a09ca-a083-4445-b93d-033057a1949e-catalog-content\") pod \"community-operators-fhnp6\" (UID: \"319a09ca-a083-4445-b93d-033057a1949e\") " pod="openshift-marketplace/community-operators-fhnp6" Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.670770 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/319a09ca-a083-4445-b93d-033057a1949e-utilities\") pod \"community-operators-fhnp6\" (UID: \"319a09ca-a083-4445-b93d-033057a1949e\") " pod="openshift-marketplace/community-operators-fhnp6" Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.688653 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnc2g\" (UniqueName: \"kubernetes.io/projected/319a09ca-a083-4445-b93d-033057a1949e-kube-api-access-fnc2g\") pod \"community-operators-fhnp6\" (UID: \"319a09ca-a083-4445-b93d-033057a1949e\") " pod="openshift-marketplace/community-operators-fhnp6" Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.741266 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gp7s5"] Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.762083 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:44:37 crc kubenswrapper[4972]: E1121 10:44:37.762262 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:44:37 crc kubenswrapper[4972]: I1121 10:44:37.862138 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fhnp6" Nov 21 10:44:38 crc kubenswrapper[4972]: I1121 10:44:38.235498 4972 generic.go:334] "Generic (PLEG): container finished" podID="38ace700-905e-467b-b2c0-42b2f62a15b4" containerID="8d0adfd17c2f1d5a6d889696c4955bbe4a754abff084dd80747f797d8d0cb980" exitCode=0 Nov 21 10:44:38 crc kubenswrapper[4972]: I1121 10:44:38.235589 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp7s5" event={"ID":"38ace700-905e-467b-b2c0-42b2f62a15b4","Type":"ContainerDied","Data":"8d0adfd17c2f1d5a6d889696c4955bbe4a754abff084dd80747f797d8d0cb980"} Nov 21 10:44:38 crc kubenswrapper[4972]: I1121 10:44:38.235763 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp7s5" event={"ID":"38ace700-905e-467b-b2c0-42b2f62a15b4","Type":"ContainerStarted","Data":"051314517b2c468e0dfc8a0f207be3332902fb3d2c32cf278fb5cd86b57fbdcd"} Nov 21 10:44:38 crc kubenswrapper[4972]: I1121 10:44:38.238988 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 10:44:38 crc kubenswrapper[4972]: I1121 10:44:38.398815 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fhnp6"] Nov 21 10:44:39 crc kubenswrapper[4972]: I1121 10:44:39.246017 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-57ckb"] Nov 21 10:44:39 crc kubenswrapper[4972]: I1121 10:44:39.249702 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp7s5" event={"ID":"38ace700-905e-467b-b2c0-42b2f62a15b4","Type":"ContainerStarted","Data":"0fb21a3907c0d613222ddae676f9b235393108543ee00b479dbc87da3427ef83"} Nov 21 10:44:39 crc kubenswrapper[4972]: I1121 10:44:39.249880 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-57ckb" Nov 21 10:44:39 crc kubenswrapper[4972]: I1121 10:44:39.255705 4972 generic.go:334] "Generic (PLEG): container finished" podID="319a09ca-a083-4445-b93d-033057a1949e" containerID="e73b2362159569cf46930ad4f71f521d96abcdaaabc3f46a03e0a76e2370a021" exitCode=0 Nov 21 10:44:39 crc kubenswrapper[4972]: I1121 10:44:39.255765 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhnp6" event={"ID":"319a09ca-a083-4445-b93d-033057a1949e","Type":"ContainerDied","Data":"e73b2362159569cf46930ad4f71f521d96abcdaaabc3f46a03e0a76e2370a021"} Nov 21 10:44:39 crc kubenswrapper[4972]: I1121 10:44:39.255801 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhnp6" event={"ID":"319a09ca-a083-4445-b93d-033057a1949e","Type":"ContainerStarted","Data":"34d7bbfe2a62a5502a6fe5b9589d7fa8687add3de38901dddec613245d32c32c"} Nov 21 10:44:39 crc kubenswrapper[4972]: I1121 10:44:39.265596 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-57ckb"] Nov 21 10:44:39 crc kubenswrapper[4972]: I1121 10:44:39.400255 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffsg4\" (UniqueName: \"kubernetes.io/projected/7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4-kube-api-access-ffsg4\") pod \"redhat-operators-57ckb\" (UID: \"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4\") " pod="openshift-marketplace/redhat-operators-57ckb" Nov 21 10:44:39 crc kubenswrapper[4972]: I1121 10:44:39.400336 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4-utilities\") pod \"redhat-operators-57ckb\" (UID: \"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4\") " pod="openshift-marketplace/redhat-operators-57ckb" Nov 21 10:44:39 crc kubenswrapper[4972]: I1121 10:44:39.400506 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4-catalog-content\") pod \"redhat-operators-57ckb\" (UID: \"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4\") " pod="openshift-marketplace/redhat-operators-57ckb" Nov 21 10:44:39 crc kubenswrapper[4972]: I1121 10:44:39.502355 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffsg4\" (UniqueName: \"kubernetes.io/projected/7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4-kube-api-access-ffsg4\") pod \"redhat-operators-57ckb\" (UID: \"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4\") " pod="openshift-marketplace/redhat-operators-57ckb" Nov 21 10:44:39 crc kubenswrapper[4972]: I1121 10:44:39.502688 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4-utilities\") pod \"redhat-operators-57ckb\" (UID: \"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4\") " pod="openshift-marketplace/redhat-operators-57ckb" Nov 21 10:44:39 crc kubenswrapper[4972]: I1121 10:44:39.502719 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4-catalog-content\") pod \"redhat-operators-57ckb\" (UID: \"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4\") " pod="openshift-marketplace/redhat-operators-57ckb" Nov 
21 10:44:39 crc kubenswrapper[4972]: I1121 10:44:39.503218 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4-catalog-content\") pod \"redhat-operators-57ckb\" (UID: \"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4\") " pod="openshift-marketplace/redhat-operators-57ckb" Nov 21 10:44:39 crc kubenswrapper[4972]: I1121 10:44:39.503365 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4-utilities\") pod \"redhat-operators-57ckb\" (UID: \"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4\") " pod="openshift-marketplace/redhat-operators-57ckb" Nov 21 10:44:39 crc kubenswrapper[4972]: I1121 10:44:39.524666 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffsg4\" (UniqueName: \"kubernetes.io/projected/7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4-kube-api-access-ffsg4\") pod \"redhat-operators-57ckb\" (UID: \"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4\") " pod="openshift-marketplace/redhat-operators-57ckb" Nov 21 10:44:39 crc kubenswrapper[4972]: I1121 10:44:39.580132 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-57ckb" Nov 21 10:44:39 crc kubenswrapper[4972]: I1121 10:44:39.827662 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-57ckb"] Nov 21 10:44:39 crc kubenswrapper[4972]: I1121 10:44:39.847372 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bdc5f"] Nov 21 10:44:39 crc kubenswrapper[4972]: I1121 10:44:39.848908 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bdc5f" Nov 21 10:44:39 crc kubenswrapper[4972]: I1121 10:44:39.861548 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bdc5f"] Nov 21 10:44:40 crc kubenswrapper[4972]: I1121 10:44:40.012550 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51-catalog-content\") pod \"redhat-marketplace-bdc5f\" (UID: \"ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51\") " pod="openshift-marketplace/redhat-marketplace-bdc5f" Nov 21 10:44:40 crc kubenswrapper[4972]: I1121 10:44:40.012592 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51-utilities\") pod \"redhat-marketplace-bdc5f\" (UID: \"ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51\") " pod="openshift-marketplace/redhat-marketplace-bdc5f" Nov 21 10:44:40 crc kubenswrapper[4972]: I1121 10:44:40.012740 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr5fw\" (UniqueName: \"kubernetes.io/projected/ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51-kube-api-access-nr5fw\") pod \"redhat-marketplace-bdc5f\" (UID: \"ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51\") " pod="openshift-marketplace/redhat-marketplace-bdc5f" Nov 21 10:44:40 crc kubenswrapper[4972]: I1121 10:44:40.113668 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr5fw\" (UniqueName: \"kubernetes.io/projected/ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51-kube-api-access-nr5fw\") pod \"redhat-marketplace-bdc5f\" (UID: \"ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51\") " pod="openshift-marketplace/redhat-marketplace-bdc5f" Nov 21 10:44:40 crc kubenswrapper[4972]: I1121 10:44:40.114129 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51-catalog-content\") pod \"redhat-marketplace-bdc5f\" (UID: \"ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51\") " pod="openshift-marketplace/redhat-marketplace-bdc5f" Nov 21 10:44:40 crc kubenswrapper[4972]: I1121 10:44:40.114159 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51-utilities\") pod \"redhat-marketplace-bdc5f\" (UID: \"ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51\") " pod="openshift-marketplace/redhat-marketplace-bdc5f" Nov 21 10:44:40 crc kubenswrapper[4972]: I1121 10:44:40.115002 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51-utilities\") pod \"redhat-marketplace-bdc5f\" (UID: \"ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51\") " pod="openshift-marketplace/redhat-marketplace-bdc5f" Nov 21 10:44:40 crc kubenswrapper[4972]: I1121 10:44:40.115040 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51-catalog-content\") pod \"redhat-marketplace-bdc5f\" (UID: \"ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51\") " pod="openshift-marketplace/redhat-marketplace-bdc5f" Nov 21 10:44:40 crc kubenswrapper[4972]: I1121 10:44:40.134956 4972 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nr5fw\" (UniqueName: \"kubernetes.io/projected/ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51-kube-api-access-nr5fw\") pod \"redhat-marketplace-bdc5f\" (UID: \"ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51\") " pod="openshift-marketplace/redhat-marketplace-bdc5f" Nov 21 10:44:40 crc kubenswrapper[4972]: I1121 10:44:40.234328 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bdc5f" Nov 21 10:44:40 crc kubenswrapper[4972]: I1121 10:44:40.262870 4972 generic.go:334] "Generic (PLEG): container finished" podID="7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4" containerID="049f1a384bb1d3ba8a25775234a93dc2b1c29187283a71b21a5fe46d4b41f2c2" exitCode=0 Nov 21 10:44:40 crc kubenswrapper[4972]: I1121 10:44:40.262955 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57ckb" event={"ID":"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4","Type":"ContainerDied","Data":"049f1a384bb1d3ba8a25775234a93dc2b1c29187283a71b21a5fe46d4b41f2c2"} Nov 21 10:44:40 crc kubenswrapper[4972]: I1121 10:44:40.262986 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57ckb" event={"ID":"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4","Type":"ContainerStarted","Data":"52e904e0383f19b78d180923c4cbc1d51b7ce98f5ec716a597f6218fd1b7c756"} Nov 21 10:44:40 crc kubenswrapper[4972]: I1121 10:44:40.266690 4972 generic.go:334] "Generic (PLEG): container finished" podID="38ace700-905e-467b-b2c0-42b2f62a15b4" containerID="0fb21a3907c0d613222ddae676f9b235393108543ee00b479dbc87da3427ef83" exitCode=0 Nov 21 10:44:40 crc kubenswrapper[4972]: I1121 10:44:40.266730 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp7s5" event={"ID":"38ace700-905e-467b-b2c0-42b2f62a15b4","Type":"ContainerDied","Data":"0fb21a3907c0d613222ddae676f9b235393108543ee00b479dbc87da3427ef83"} Nov 21 10:44:40 crc kubenswrapper[4972]: I1121 10:44:40.677096 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bdc5f"] Nov 21 10:44:41 crc kubenswrapper[4972]: I1121 10:44:41.281479 4972 generic.go:334] "Generic (PLEG): container finished" podID="ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51" containerID="89c2d447c1e77b17ac9ae53ddc67a664001c31bd604d887c6209a9ac949d8082" exitCode=0 Nov 21 10:44:41 crc kubenswrapper[4972]: I1121 10:44:41.281557 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bdc5f" event={"ID":"ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51","Type":"ContainerDied","Data":"89c2d447c1e77b17ac9ae53ddc67a664001c31bd604d887c6209a9ac949d8082"} Nov 21 10:44:41 crc kubenswrapper[4972]: I1121 10:44:41.281817 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bdc5f" event={"ID":"ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51","Type":"ContainerStarted","Data":"73bff018b238125969d26d23b227190d6aa113003640fb563a553b646ec76c9e"} Nov 21 10:44:41 crc kubenswrapper[4972]: I1121 10:44:41.287700 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp7s5" event={"ID":"38ace700-905e-467b-b2c0-42b2f62a15b4","Type":"ContainerStarted","Data":"4cd6f8739419aaae8f54075c83393ae5f576f0f34e4761d0c1b4c58337343d56"} Nov 21 10:44:41 crc kubenswrapper[4972]: I1121 10:44:41.289424 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57ckb" 
event={"ID":"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4","Type":"ContainerStarted","Data":"a63a05a11d9e2fe393c49d972cd15ec5844a8e86cfde2d03da9070229e5a50b4"} Nov 21 10:44:41 crc kubenswrapper[4972]: I1121 10:44:41.321162 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gp7s5" podStartSLOduration=2.87441485 podStartE2EDuration="5.321138922s" podCreationTimestamp="2025-11-21 10:44:36 +0000 UTC" firstStartedPulling="2025-11-21 10:44:38.238726726 +0000 UTC m=+3823.347869234" lastFinishedPulling="2025-11-21 10:44:40.685450808 +0000 UTC m=+3825.794593306" observedRunningTime="2025-11-21 10:44:41.319239422 +0000 UTC m=+3826.428381920" watchObservedRunningTime="2025-11-21 10:44:41.321138922 +0000 UTC m=+3826.430281410" Nov 21 10:44:42 crc kubenswrapper[4972]: I1121 10:44:42.298498 4972 generic.go:334] "Generic (PLEG): container finished" podID="7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4" containerID="a63a05a11d9e2fe393c49d972cd15ec5844a8e86cfde2d03da9070229e5a50b4" exitCode=0 Nov 21 10:44:42 crc kubenswrapper[4972]: I1121 10:44:42.298953 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57ckb" event={"ID":"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4","Type":"ContainerDied","Data":"a63a05a11d9e2fe393c49d972cd15ec5844a8e86cfde2d03da9070229e5a50b4"} Nov 21 10:44:42 crc kubenswrapper[4972]: I1121 10:44:42.301869 4972 generic.go:334] "Generic (PLEG): container finished" podID="ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51" containerID="24e6f4b9332b928e08790944126488f3787ce2d9696ecbc57c64de3104c51764" exitCode=0 Nov 21 10:44:42 crc kubenswrapper[4972]: I1121 10:44:42.303410 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bdc5f" event={"ID":"ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51","Type":"ContainerDied","Data":"24e6f4b9332b928e08790944126488f3787ce2d9696ecbc57c64de3104c51764"} Nov 21 10:44:44 crc kubenswrapper[4972]: I1121 10:44:44.325296 4972 generic.go:334] "Generic (PLEG): container finished" podID="319a09ca-a083-4445-b93d-033057a1949e" containerID="9e666e3098e7a6f5a7b8cd7306af434ea9439a752cbe520649baa9451d67fab7" exitCode=0 Nov 21 10:44:44 crc kubenswrapper[4972]: I1121 10:44:44.325421 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhnp6" event={"ID":"319a09ca-a083-4445-b93d-033057a1949e","Type":"ContainerDied","Data":"9e666e3098e7a6f5a7b8cd7306af434ea9439a752cbe520649baa9451d67fab7"} Nov 21 10:44:44 crc kubenswrapper[4972]: I1121 10:44:44.336569 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57ckb" event={"ID":"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4","Type":"ContainerStarted","Data":"f45923dd8798a6d8a70ff5bd375ea11db37c2140262155b4bfb8e30844499918"} Nov 21 10:44:44 crc kubenswrapper[4972]: I1121 10:44:44.339580 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bdc5f" event={"ID":"ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51","Type":"ContainerStarted","Data":"9515fae07ee5ed294a31418f5f26f328fb58fa98482e4a921188af6b1d03e53d"} Nov 21 10:44:44 crc kubenswrapper[4972]: I1121 10:44:44.390621 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-57ckb" podStartSLOduration=1.830069438 podStartE2EDuration="5.390593493s" podCreationTimestamp="2025-11-21 10:44:39 +0000 UTC" firstStartedPulling="2025-11-21 10:44:40.263977916 +0000 UTC m=+3825.373120414" 
lastFinishedPulling="2025-11-21 10:44:43.824501931 +0000 UTC m=+3828.933644469" observedRunningTime="2025-11-21 10:44:44.383152725 +0000 UTC m=+3829.492295233" watchObservedRunningTime="2025-11-21 10:44:44.390593493 +0000 UTC m=+3829.499736031" Nov 21 10:44:44 crc kubenswrapper[4972]: I1121 10:44:44.410930 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bdc5f" podStartSLOduration=2.75825003 podStartE2EDuration="5.410904334s" podCreationTimestamp="2025-11-21 10:44:39 +0000 UTC" firstStartedPulling="2025-11-21 10:44:41.284927468 +0000 UTC m=+3826.394069966" lastFinishedPulling="2025-11-21 10:44:43.937581732 +0000 UTC m=+3829.046724270" observedRunningTime="2025-11-21 10:44:44.403301301 +0000 UTC m=+3829.512443839" watchObservedRunningTime="2025-11-21 10:44:44.410904334 +0000 UTC m=+3829.520046842" Nov 21 10:44:45 crc kubenswrapper[4972]: I1121 10:44:45.351428 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhnp6" event={"ID":"319a09ca-a083-4445-b93d-033057a1949e","Type":"ContainerStarted","Data":"1e4de74027efaf4eced0b851a04c9100c38280c487b6f25f94737d73d44f5edc"} Nov 21 10:44:45 crc kubenswrapper[4972]: I1121 10:44:45.373317 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fhnp6" podStartSLOduration=2.818803603 podStartE2EDuration="8.373267265s" podCreationTimestamp="2025-11-21 10:44:37 +0000 UTC" firstStartedPulling="2025-11-21 10:44:39.25752411 +0000 UTC m=+3824.366666618" lastFinishedPulling="2025-11-21 10:44:44.811987752 +0000 UTC m=+3829.921130280" observedRunningTime="2025-11-21 10:44:45.365078697 +0000 UTC m=+3830.474221255" watchObservedRunningTime="2025-11-21 10:44:45.373267265 +0000 UTC m=+3830.482409803" Nov 21 10:44:47 crc kubenswrapper[4972]: I1121 10:44:47.193971 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gp7s5" Nov 21 10:44:47 crc kubenswrapper[4972]: I1121 10:44:47.194466 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gp7s5" Nov 21 10:44:47 crc kubenswrapper[4972]: I1121 10:44:47.261659 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gp7s5" Nov 21 10:44:47 crc kubenswrapper[4972]: I1121 10:44:47.440889 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gp7s5" Nov 21 10:44:47 crc kubenswrapper[4972]: I1121 10:44:47.862940 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fhnp6" Nov 21 10:44:47 crc kubenswrapper[4972]: I1121 10:44:47.863001 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fhnp6" Nov 21 10:44:47 crc kubenswrapper[4972]: I1121 10:44:47.910783 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fhnp6" Nov 21 10:44:49 crc kubenswrapper[4972]: I1121 10:44:49.581366 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-57ckb" Nov 21 10:44:49 crc kubenswrapper[4972]: I1121 10:44:49.581712 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-57ckb" Nov 21 
10:44:50 crc kubenswrapper[4972]: I1121 10:44:50.235970 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bdc5f" Nov 21 10:44:50 crc kubenswrapper[4972]: I1121 10:44:50.236029 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bdc5f" Nov 21 10:44:50 crc kubenswrapper[4972]: I1121 10:44:50.246019 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gp7s5"] Nov 21 10:44:50 crc kubenswrapper[4972]: I1121 10:44:50.246677 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gp7s5" podUID="38ace700-905e-467b-b2c0-42b2f62a15b4" containerName="registry-server" containerID="cri-o://4cd6f8739419aaae8f54075c83393ae5f576f0f34e4761d0c1b4c58337343d56" gracePeriod=2 Nov 21 10:44:50 crc kubenswrapper[4972]: I1121 10:44:50.313943 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bdc5f" Nov 21 10:44:50 crc kubenswrapper[4972]: I1121 10:44:50.467180 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bdc5f" Nov 21 10:44:50 crc kubenswrapper[4972]: I1121 10:44:50.654809 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-57ckb" podUID="7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4" containerName="registry-server" probeResult="failure" output=< Nov 21 10:44:50 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 10:44:50 crc kubenswrapper[4972]: > Nov 21 10:44:50 crc kubenswrapper[4972]: I1121 10:44:50.761657 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:44:50 crc kubenswrapper[4972]: E1121 10:44:50.762198 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:44:51 crc kubenswrapper[4972]: I1121 10:44:51.401796 4972 generic.go:334] "Generic (PLEG): container finished" podID="38ace700-905e-467b-b2c0-42b2f62a15b4" containerID="4cd6f8739419aaae8f54075c83393ae5f576f0f34e4761d0c1b4c58337343d56" exitCode=0 Nov 21 10:44:51 crc kubenswrapper[4972]: I1121 10:44:51.402529 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp7s5" event={"ID":"38ace700-905e-467b-b2c0-42b2f62a15b4","Type":"ContainerDied","Data":"4cd6f8739419aaae8f54075c83393ae5f576f0f34e4761d0c1b4c58337343d56"} Nov 21 10:44:51 crc kubenswrapper[4972]: I1121 10:44:51.739104 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gp7s5" Nov 21 10:44:51 crc kubenswrapper[4972]: I1121 10:44:51.897715 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38ace700-905e-467b-b2c0-42b2f62a15b4-utilities\") pod \"38ace700-905e-467b-b2c0-42b2f62a15b4\" (UID: \"38ace700-905e-467b-b2c0-42b2f62a15b4\") " Nov 21 10:44:51 crc kubenswrapper[4972]: I1121 10:44:51.897845 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38ace700-905e-467b-b2c0-42b2f62a15b4-catalog-content\") pod \"38ace700-905e-467b-b2c0-42b2f62a15b4\" (UID: \"38ace700-905e-467b-b2c0-42b2f62a15b4\") " Nov 21 10:44:51 crc kubenswrapper[4972]: I1121 10:44:51.897992 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vq7h\" (UniqueName: \"kubernetes.io/projected/38ace700-905e-467b-b2c0-42b2f62a15b4-kube-api-access-6vq7h\") pod \"38ace700-905e-467b-b2c0-42b2f62a15b4\" (UID: \"38ace700-905e-467b-b2c0-42b2f62a15b4\") " Nov 21 10:44:51 crc kubenswrapper[4972]: I1121 10:44:51.898732 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38ace700-905e-467b-b2c0-42b2f62a15b4-utilities" (OuterVolumeSpecName: "utilities") pod "38ace700-905e-467b-b2c0-42b2f62a15b4" (UID: "38ace700-905e-467b-b2c0-42b2f62a15b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:44:51 crc kubenswrapper[4972]: I1121 10:44:51.906740 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38ace700-905e-467b-b2c0-42b2f62a15b4-kube-api-access-6vq7h" (OuterVolumeSpecName: "kube-api-access-6vq7h") pod "38ace700-905e-467b-b2c0-42b2f62a15b4" (UID: "38ace700-905e-467b-b2c0-42b2f62a15b4"). InnerVolumeSpecName "kube-api-access-6vq7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:44:51 crc kubenswrapper[4972]: I1121 10:44:51.951583 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38ace700-905e-467b-b2c0-42b2f62a15b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38ace700-905e-467b-b2c0-42b2f62a15b4" (UID: "38ace700-905e-467b-b2c0-42b2f62a15b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:44:51 crc kubenswrapper[4972]: I1121 10:44:51.999516 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38ace700-905e-467b-b2c0-42b2f62a15b4-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 10:44:51 crc kubenswrapper[4972]: I1121 10:44:51.999566 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38ace700-905e-467b-b2c0-42b2f62a15b4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 10:44:51 crc kubenswrapper[4972]: I1121 10:44:51.999586 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vq7h\" (UniqueName: \"kubernetes.io/projected/38ace700-905e-467b-b2c0-42b2f62a15b4-kube-api-access-6vq7h\") on node \"crc\" DevicePath \"\"" Nov 21 10:44:52 crc kubenswrapper[4972]: I1121 10:44:52.410462 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp7s5" event={"ID":"38ace700-905e-467b-b2c0-42b2f62a15b4","Type":"ContainerDied","Data":"051314517b2c468e0dfc8a0f207be3332902fb3d2c32cf278fb5cd86b57fbdcd"} Nov 21 10:44:52 crc kubenswrapper[4972]: I1121 10:44:52.410558 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gp7s5" Nov 21 10:44:52 crc kubenswrapper[4972]: I1121 10:44:52.410842 4972 scope.go:117] "RemoveContainer" containerID="4cd6f8739419aaae8f54075c83393ae5f576f0f34e4761d0c1b4c58337343d56" Nov 21 10:44:52 crc kubenswrapper[4972]: I1121 10:44:52.429669 4972 scope.go:117] "RemoveContainer" containerID="0fb21a3907c0d613222ddae676f9b235393108543ee00b479dbc87da3427ef83" Nov 21 10:44:52 crc kubenswrapper[4972]: I1121 10:44:52.446418 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gp7s5"] Nov 21 10:44:52 crc kubenswrapper[4972]: I1121 10:44:52.451466 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gp7s5"] Nov 21 10:44:52 crc kubenswrapper[4972]: I1121 10:44:52.457442 4972 scope.go:117] "RemoveContainer" containerID="8d0adfd17c2f1d5a6d889696c4955bbe4a754abff084dd80747f797d8d0cb980" Nov 21 10:44:53 crc kubenswrapper[4972]: I1121 10:44:53.769695 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38ace700-905e-467b-b2c0-42b2f62a15b4" path="/var/lib/kubelet/pods/38ace700-905e-467b-b2c0-42b2f62a15b4/volumes" Nov 21 10:44:56 crc kubenswrapper[4972]: I1121 10:44:56.038514 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bdc5f"] Nov 21 10:44:56 crc kubenswrapper[4972]: I1121 10:44:56.039153 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bdc5f" podUID="ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51" containerName="registry-server" containerID="cri-o://9515fae07ee5ed294a31418f5f26f328fb58fa98482e4a921188af6b1d03e53d" gracePeriod=2 Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.411283 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bdc5f" Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.472588 4972 generic.go:334] "Generic (PLEG): container finished" podID="ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51" containerID="9515fae07ee5ed294a31418f5f26f328fb58fa98482e4a921188af6b1d03e53d" exitCode=0 Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.472651 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bdc5f" event={"ID":"ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51","Type":"ContainerDied","Data":"9515fae07ee5ed294a31418f5f26f328fb58fa98482e4a921188af6b1d03e53d"} Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.472750 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bdc5f" event={"ID":"ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51","Type":"ContainerDied","Data":"73bff018b238125969d26d23b227190d6aa113003640fb563a553b646ec76c9e"} Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.472787 4972 scope.go:117] "RemoveContainer" containerID="9515fae07ee5ed294a31418f5f26f328fb58fa98482e4a921188af6b1d03e53d" Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.472807 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bdc5f" Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.495752 4972 scope.go:117] "RemoveContainer" containerID="24e6f4b9332b928e08790944126488f3787ce2d9696ecbc57c64de3104c51764" Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.497734 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51-utilities\") pod \"ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51\" (UID: \"ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51\") " Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.497801 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr5fw\" (UniqueName: \"kubernetes.io/projected/ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51-kube-api-access-nr5fw\") pod \"ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51\" (UID: \"ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51\") " Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.497966 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51-catalog-content\") pod \"ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51\" (UID: \"ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51\") " Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.498983 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51-utilities" (OuterVolumeSpecName: "utilities") pod "ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51" (UID: "ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.504823 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51-kube-api-access-nr5fw" (OuterVolumeSpecName: "kube-api-access-nr5fw") pod "ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51" (UID: "ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51"). InnerVolumeSpecName "kube-api-access-nr5fw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.528999 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51" (UID: "ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.544130 4972 scope.go:117] "RemoveContainer" containerID="89c2d447c1e77b17ac9ae53ddc67a664001c31bd604d887c6209a9ac949d8082" Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.576729 4972 scope.go:117] "RemoveContainer" containerID="9515fae07ee5ed294a31418f5f26f328fb58fa98482e4a921188af6b1d03e53d" Nov 21 10:44:57 crc kubenswrapper[4972]: E1121 10:44:57.577175 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9515fae07ee5ed294a31418f5f26f328fb58fa98482e4a921188af6b1d03e53d\": container with ID starting with 9515fae07ee5ed294a31418f5f26f328fb58fa98482e4a921188af6b1d03e53d not found: ID does not exist" containerID="9515fae07ee5ed294a31418f5f26f328fb58fa98482e4a921188af6b1d03e53d" Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.577238 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9515fae07ee5ed294a31418f5f26f328fb58fa98482e4a921188af6b1d03e53d"} err="failed to get container status \"9515fae07ee5ed294a31418f5f26f328fb58fa98482e4a921188af6b1d03e53d\": rpc error: code = NotFound desc = could not find container \"9515fae07ee5ed294a31418f5f26f328fb58fa98482e4a921188af6b1d03e53d\": container with ID starting with 9515fae07ee5ed294a31418f5f26f328fb58fa98482e4a921188af6b1d03e53d not found: ID does not exist" Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.577267 4972 scope.go:117] "RemoveContainer" containerID="24e6f4b9332b928e08790944126488f3787ce2d9696ecbc57c64de3104c51764" Nov 21 10:44:57 crc kubenswrapper[4972]: E1121 10:44:57.577594 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24e6f4b9332b928e08790944126488f3787ce2d9696ecbc57c64de3104c51764\": container with ID starting with 24e6f4b9332b928e08790944126488f3787ce2d9696ecbc57c64de3104c51764 not found: ID does not exist" containerID="24e6f4b9332b928e08790944126488f3787ce2d9696ecbc57c64de3104c51764" Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.577758 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24e6f4b9332b928e08790944126488f3787ce2d9696ecbc57c64de3104c51764"} err="failed to get container status \"24e6f4b9332b928e08790944126488f3787ce2d9696ecbc57c64de3104c51764\": rpc error: code = NotFound desc = could not find container \"24e6f4b9332b928e08790944126488f3787ce2d9696ecbc57c64de3104c51764\": container with ID starting with 24e6f4b9332b928e08790944126488f3787ce2d9696ecbc57c64de3104c51764 not found: ID does not exist" Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.577811 4972 scope.go:117] "RemoveContainer" containerID="89c2d447c1e77b17ac9ae53ddc67a664001c31bd604d887c6209a9ac949d8082" Nov 21 10:44:57 crc kubenswrapper[4972]: E1121 10:44:57.578429 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"89c2d447c1e77b17ac9ae53ddc67a664001c31bd604d887c6209a9ac949d8082\": container with ID starting with 89c2d447c1e77b17ac9ae53ddc67a664001c31bd604d887c6209a9ac949d8082 not found: ID does not exist" containerID="89c2d447c1e77b17ac9ae53ddc67a664001c31bd604d887c6209a9ac949d8082" Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.578462 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89c2d447c1e77b17ac9ae53ddc67a664001c31bd604d887c6209a9ac949d8082"} err="failed to get container status \"89c2d447c1e77b17ac9ae53ddc67a664001c31bd604d887c6209a9ac949d8082\": rpc error: code = NotFound desc = could not find container \"89c2d447c1e77b17ac9ae53ddc67a664001c31bd604d887c6209a9ac949d8082\": container with ID starting with 89c2d447c1e77b17ac9ae53ddc67a664001c31bd604d887c6209a9ac949d8082 not found: ID does not exist" Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.599807 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.599900 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr5fw\" (UniqueName: \"kubernetes.io/projected/ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51-kube-api-access-nr5fw\") on node \"crc\" DevicePath \"\"" Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.599921 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.810577 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bdc5f"] Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.820638 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bdc5f"] Nov 21 10:44:57 crc kubenswrapper[4972]: I1121 10:44:57.918203 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fhnp6" Nov 21 10:44:59 crc kubenswrapper[4972]: I1121 10:44:59.775723 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51" path="/var/lib/kubelet/pods/ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51/volumes" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.098856 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-57ckb" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.157542 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395365-nnj4q"] Nov 21 10:45:00 crc kubenswrapper[4972]: E1121 10:45:00.157853 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38ace700-905e-467b-b2c0-42b2f62a15b4" containerName="extract-utilities" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.157865 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="38ace700-905e-467b-b2c0-42b2f62a15b4" containerName="extract-utilities" Nov 21 10:45:00 crc kubenswrapper[4972]: E1121 10:45:00.157878 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51" containerName="extract-content" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.157883 4972 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51" containerName="extract-content" Nov 21 10:45:00 crc kubenswrapper[4972]: E1121 10:45:00.157896 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51" containerName="registry-server" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.157902 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51" containerName="registry-server" Nov 21 10:45:00 crc kubenswrapper[4972]: E1121 10:45:00.157909 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38ace700-905e-467b-b2c0-42b2f62a15b4" containerName="extract-content" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.157914 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="38ace700-905e-467b-b2c0-42b2f62a15b4" containerName="extract-content" Nov 21 10:45:00 crc kubenswrapper[4972]: E1121 10:45:00.157926 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51" containerName="extract-utilities" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.157932 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51" containerName="extract-utilities" Nov 21 10:45:00 crc kubenswrapper[4972]: E1121 10:45:00.157941 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38ace700-905e-467b-b2c0-42b2f62a15b4" containerName="registry-server" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.157946 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="38ace700-905e-467b-b2c0-42b2f62a15b4" containerName="registry-server" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.158077 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9f61d9-7aa8-4bc1-a9a3-6cef99787a51" containerName="registry-server" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.158093 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="38ace700-905e-467b-b2c0-42b2f62a15b4" containerName="registry-server" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.158528 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395365-nnj4q" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.160990 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.161148 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.179579 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395365-nnj4q"] Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.192234 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-57ckb" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.248272 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvz24\" (UniqueName: \"kubernetes.io/projected/df320d5f-4313-4611-a7e6-4ec305b881d4-kube-api-access-bvz24\") pod \"collect-profiles-29395365-nnj4q\" (UID: \"df320d5f-4313-4611-a7e6-4ec305b881d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395365-nnj4q" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.248338 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df320d5f-4313-4611-a7e6-4ec305b881d4-secret-volume\") pod \"collect-profiles-29395365-nnj4q\" (UID: \"df320d5f-4313-4611-a7e6-4ec305b881d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395365-nnj4q" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.248396 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df320d5f-4313-4611-a7e6-4ec305b881d4-config-volume\") pod \"collect-profiles-29395365-nnj4q\" (UID: \"df320d5f-4313-4611-a7e6-4ec305b881d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395365-nnj4q" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.349898 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvz24\" (UniqueName: \"kubernetes.io/projected/df320d5f-4313-4611-a7e6-4ec305b881d4-kube-api-access-bvz24\") pod \"collect-profiles-29395365-nnj4q\" (UID: \"df320d5f-4313-4611-a7e6-4ec305b881d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395365-nnj4q" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.349954 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df320d5f-4313-4611-a7e6-4ec305b881d4-secret-volume\") pod \"collect-profiles-29395365-nnj4q\" (UID: \"df320d5f-4313-4611-a7e6-4ec305b881d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395365-nnj4q" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.349996 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df320d5f-4313-4611-a7e6-4ec305b881d4-config-volume\") pod \"collect-profiles-29395365-nnj4q\" (UID: \"df320d5f-4313-4611-a7e6-4ec305b881d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395365-nnj4q" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.350928 4972 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df320d5f-4313-4611-a7e6-4ec305b881d4-config-volume\") pod \"collect-profiles-29395365-nnj4q\" (UID: \"df320d5f-4313-4611-a7e6-4ec305b881d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395365-nnj4q" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.358183 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df320d5f-4313-4611-a7e6-4ec305b881d4-secret-volume\") pod \"collect-profiles-29395365-nnj4q\" (UID: \"df320d5f-4313-4611-a7e6-4ec305b881d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395365-nnj4q" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.371588 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvz24\" (UniqueName: \"kubernetes.io/projected/df320d5f-4313-4611-a7e6-4ec305b881d4-kube-api-access-bvz24\") pod \"collect-profiles-29395365-nnj4q\" (UID: \"df320d5f-4313-4611-a7e6-4ec305b881d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395365-nnj4q" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.475124 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395365-nnj4q" Nov 21 10:45:00 crc kubenswrapper[4972]: I1121 10:45:00.733583 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395365-nnj4q"] Nov 21 10:45:00 crc kubenswrapper[4972]: W1121 10:45:00.740996 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf320d5f_4313_4611_a7e6_4ec305b881d4.slice/crio-a30f7e8832eb79ff7e3aad2900e4a5d4099327790d920b0fd7428a82173998a3 WatchSource:0}: Error finding container a30f7e8832eb79ff7e3aad2900e4a5d4099327790d920b0fd7428a82173998a3: Status 404 returned error can't find the container with id a30f7e8832eb79ff7e3aad2900e4a5d4099327790d920b0fd7428a82173998a3 Nov 21 10:45:01 crc kubenswrapper[4972]: I1121 10:45:01.055379 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fhnp6"] Nov 21 10:45:01 crc kubenswrapper[4972]: I1121 10:45:01.524450 4972 generic.go:334] "Generic (PLEG): container finished" podID="df320d5f-4313-4611-a7e6-4ec305b881d4" containerID="5b877c344544496ab31c15d7c7648fedd4f867113b79eca457f207908cf93842" exitCode=0 Nov 21 10:45:01 crc kubenswrapper[4972]: I1121 10:45:01.524532 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395365-nnj4q" event={"ID":"df320d5f-4313-4611-a7e6-4ec305b881d4","Type":"ContainerDied","Data":"5b877c344544496ab31c15d7c7648fedd4f867113b79eca457f207908cf93842"} Nov 21 10:45:01 crc kubenswrapper[4972]: I1121 10:45:01.524570 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395365-nnj4q" event={"ID":"df320d5f-4313-4611-a7e6-4ec305b881d4","Type":"ContainerStarted","Data":"a30f7e8832eb79ff7e3aad2900e4a5d4099327790d920b0fd7428a82173998a3"} Nov 21 10:45:01 crc kubenswrapper[4972]: I1121 10:45:01.760453 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:45:01 crc kubenswrapper[4972]: E1121 10:45:01.761089 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:45:01 crc kubenswrapper[4972]: I1121 10:45:01.836509 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rttlt"] Nov 21 10:45:01 crc kubenswrapper[4972]: I1121 10:45:01.836723 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rttlt" podUID="561f007e-bddb-4c6f-83a3-d9052a392b37" containerName="registry-server" containerID="cri-o://0c6621f3dfd949ae5837eba592e917f39ecc38691e6ceabc113ff3597473511d" gracePeriod=2 Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.352994 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rttlt" Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.484231 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/561f007e-bddb-4c6f-83a3-d9052a392b37-catalog-content\") pod \"561f007e-bddb-4c6f-83a3-d9052a392b37\" (UID: \"561f007e-bddb-4c6f-83a3-d9052a392b37\") " Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.484327 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/561f007e-bddb-4c6f-83a3-d9052a392b37-utilities\") pod \"561f007e-bddb-4c6f-83a3-d9052a392b37\" (UID: \"561f007e-bddb-4c6f-83a3-d9052a392b37\") " Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.484356 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fsrq\" (UniqueName: \"kubernetes.io/projected/561f007e-bddb-4c6f-83a3-d9052a392b37-kube-api-access-2fsrq\") pod \"561f007e-bddb-4c6f-83a3-d9052a392b37\" (UID: \"561f007e-bddb-4c6f-83a3-d9052a392b37\") " Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.485028 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/561f007e-bddb-4c6f-83a3-d9052a392b37-utilities" (OuterVolumeSpecName: "utilities") pod "561f007e-bddb-4c6f-83a3-d9052a392b37" (UID: "561f007e-bddb-4c6f-83a3-d9052a392b37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.494338 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/561f007e-bddb-4c6f-83a3-d9052a392b37-kube-api-access-2fsrq" (OuterVolumeSpecName: "kube-api-access-2fsrq") pod "561f007e-bddb-4c6f-83a3-d9052a392b37" (UID: "561f007e-bddb-4c6f-83a3-d9052a392b37"). InnerVolumeSpecName "kube-api-access-2fsrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.529743 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/561f007e-bddb-4c6f-83a3-d9052a392b37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "561f007e-bddb-4c6f-83a3-d9052a392b37" (UID: "561f007e-bddb-4c6f-83a3-d9052a392b37"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.532604 4972 generic.go:334] "Generic (PLEG): container finished" podID="561f007e-bddb-4c6f-83a3-d9052a392b37" containerID="0c6621f3dfd949ae5837eba592e917f39ecc38691e6ceabc113ff3597473511d" exitCode=0 Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.532663 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rttlt" Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.532694 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rttlt" event={"ID":"561f007e-bddb-4c6f-83a3-d9052a392b37","Type":"ContainerDied","Data":"0c6621f3dfd949ae5837eba592e917f39ecc38691e6ceabc113ff3597473511d"} Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.532724 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rttlt" event={"ID":"561f007e-bddb-4c6f-83a3-d9052a392b37","Type":"ContainerDied","Data":"4e51c398b8fe7badc8e4b146e5170bf9ee12ddc44b2f4189346ade96e5f443fd"} Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.532741 4972 scope.go:117] "RemoveContainer" containerID="0c6621f3dfd949ae5837eba592e917f39ecc38691e6ceabc113ff3597473511d" Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.561235 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rttlt"] Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.561525 4972 scope.go:117] "RemoveContainer" containerID="eaa4054b6eaddf5fadeb51c6e2b4e14ed6cb2373f83aff013c149445101af7b1" Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.566733 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rttlt"] Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.586669 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fsrq\" (UniqueName: \"kubernetes.io/projected/561f007e-bddb-4c6f-83a3-d9052a392b37-kube-api-access-2fsrq\") on node \"crc\" DevicePath \"\"" Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.586727 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/561f007e-bddb-4c6f-83a3-d9052a392b37-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.586741 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/561f007e-bddb-4c6f-83a3-d9052a392b37-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.601702 4972 scope.go:117] "RemoveContainer" containerID="9b983f4231335def6ac2eaea1f95dce0966ff5e336e598d0652119c40b759dc4" Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.619745 4972 scope.go:117] "RemoveContainer" containerID="0c6621f3dfd949ae5837eba592e917f39ecc38691e6ceabc113ff3597473511d" Nov 21 10:45:02 crc kubenswrapper[4972]: E1121 10:45:02.620109 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c6621f3dfd949ae5837eba592e917f39ecc38691e6ceabc113ff3597473511d\": container with ID starting with 0c6621f3dfd949ae5837eba592e917f39ecc38691e6ceabc113ff3597473511d not found: ID does not exist" containerID="0c6621f3dfd949ae5837eba592e917f39ecc38691e6ceabc113ff3597473511d" Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.620146 
4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c6621f3dfd949ae5837eba592e917f39ecc38691e6ceabc113ff3597473511d"} err="failed to get container status \"0c6621f3dfd949ae5837eba592e917f39ecc38691e6ceabc113ff3597473511d\": rpc error: code = NotFound desc = could not find container \"0c6621f3dfd949ae5837eba592e917f39ecc38691e6ceabc113ff3597473511d\": container with ID starting with 0c6621f3dfd949ae5837eba592e917f39ecc38691e6ceabc113ff3597473511d not found: ID does not exist" Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.620170 4972 scope.go:117] "RemoveContainer" containerID="eaa4054b6eaddf5fadeb51c6e2b4e14ed6cb2373f83aff013c149445101af7b1" Nov 21 10:45:02 crc kubenswrapper[4972]: E1121 10:45:02.620460 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaa4054b6eaddf5fadeb51c6e2b4e14ed6cb2373f83aff013c149445101af7b1\": container with ID starting with eaa4054b6eaddf5fadeb51c6e2b4e14ed6cb2373f83aff013c149445101af7b1 not found: ID does not exist" containerID="eaa4054b6eaddf5fadeb51c6e2b4e14ed6cb2373f83aff013c149445101af7b1" Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.620482 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaa4054b6eaddf5fadeb51c6e2b4e14ed6cb2373f83aff013c149445101af7b1"} err="failed to get container status \"eaa4054b6eaddf5fadeb51c6e2b4e14ed6cb2373f83aff013c149445101af7b1\": rpc error: code = NotFound desc = could not find container \"eaa4054b6eaddf5fadeb51c6e2b4e14ed6cb2373f83aff013c149445101af7b1\": container with ID starting with eaa4054b6eaddf5fadeb51c6e2b4e14ed6cb2373f83aff013c149445101af7b1 not found: ID does not exist" Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.620494 4972 scope.go:117] "RemoveContainer" containerID="9b983f4231335def6ac2eaea1f95dce0966ff5e336e598d0652119c40b759dc4" Nov 21 10:45:02 crc kubenswrapper[4972]: E1121 10:45:02.620901 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b983f4231335def6ac2eaea1f95dce0966ff5e336e598d0652119c40b759dc4\": container with ID starting with 9b983f4231335def6ac2eaea1f95dce0966ff5e336e598d0652119c40b759dc4 not found: ID does not exist" containerID="9b983f4231335def6ac2eaea1f95dce0966ff5e336e598d0652119c40b759dc4" Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.620933 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b983f4231335def6ac2eaea1f95dce0966ff5e336e598d0652119c40b759dc4"} err="failed to get container status \"9b983f4231335def6ac2eaea1f95dce0966ff5e336e598d0652119c40b759dc4\": rpc error: code = NotFound desc = could not find container \"9b983f4231335def6ac2eaea1f95dce0966ff5e336e598d0652119c40b759dc4\": container with ID starting with 9b983f4231335def6ac2eaea1f95dce0966ff5e336e598d0652119c40b759dc4 not found: ID does not exist" Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.838910 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395365-nnj4q" Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.993026 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvz24\" (UniqueName: \"kubernetes.io/projected/df320d5f-4313-4611-a7e6-4ec305b881d4-kube-api-access-bvz24\") pod \"df320d5f-4313-4611-a7e6-4ec305b881d4\" (UID: \"df320d5f-4313-4611-a7e6-4ec305b881d4\") " Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.993567 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df320d5f-4313-4611-a7e6-4ec305b881d4-secret-volume\") pod \"df320d5f-4313-4611-a7e6-4ec305b881d4\" (UID: \"df320d5f-4313-4611-a7e6-4ec305b881d4\") " Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.993938 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df320d5f-4313-4611-a7e6-4ec305b881d4-config-volume\") pod \"df320d5f-4313-4611-a7e6-4ec305b881d4\" (UID: \"df320d5f-4313-4611-a7e6-4ec305b881d4\") " Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.995202 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df320d5f-4313-4611-a7e6-4ec305b881d4-config-volume" (OuterVolumeSpecName: "config-volume") pod "df320d5f-4313-4611-a7e6-4ec305b881d4" (UID: "df320d5f-4313-4611-a7e6-4ec305b881d4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.998171 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df320d5f-4313-4611-a7e6-4ec305b881d4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "df320d5f-4313-4611-a7e6-4ec305b881d4" (UID: "df320d5f-4313-4611-a7e6-4ec305b881d4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 10:45:02 crc kubenswrapper[4972]: I1121 10:45:02.998541 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df320d5f-4313-4611-a7e6-4ec305b881d4-kube-api-access-bvz24" (OuterVolumeSpecName: "kube-api-access-bvz24") pod "df320d5f-4313-4611-a7e6-4ec305b881d4" (UID: "df320d5f-4313-4611-a7e6-4ec305b881d4"). InnerVolumeSpecName "kube-api-access-bvz24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:45:03 crc kubenswrapper[4972]: I1121 10:45:03.095900 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvz24\" (UniqueName: \"kubernetes.io/projected/df320d5f-4313-4611-a7e6-4ec305b881d4-kube-api-access-bvz24\") on node \"crc\" DevicePath \"\"" Nov 21 10:45:03 crc kubenswrapper[4972]: I1121 10:45:03.095956 4972 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df320d5f-4313-4611-a7e6-4ec305b881d4-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 10:45:03 crc kubenswrapper[4972]: I1121 10:45:03.095976 4972 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df320d5f-4313-4611-a7e6-4ec305b881d4-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 10:45:03 crc kubenswrapper[4972]: I1121 10:45:03.550153 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395365-nnj4q" event={"ID":"df320d5f-4313-4611-a7e6-4ec305b881d4","Type":"ContainerDied","Data":"a30f7e8832eb79ff7e3aad2900e4a5d4099327790d920b0fd7428a82173998a3"} Nov 21 10:45:03 crc kubenswrapper[4972]: I1121 10:45:03.550198 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395365-nnj4q" Nov 21 10:45:03 crc kubenswrapper[4972]: I1121 10:45:03.550256 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a30f7e8832eb79ff7e3aad2900e4a5d4099327790d920b0fd7428a82173998a3" Nov 21 10:45:03 crc kubenswrapper[4972]: I1121 10:45:03.769222 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="561f007e-bddb-4c6f-83a3-d9052a392b37" path="/var/lib/kubelet/pods/561f007e-bddb-4c6f-83a3-d9052a392b37/volumes" Nov 21 10:45:03 crc kubenswrapper[4972]: I1121 10:45:03.925722 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4"] Nov 21 10:45:03 crc kubenswrapper[4972]: I1121 10:45:03.931875 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395320-cr8b4"] Nov 21 10:45:05 crc kubenswrapper[4972]: I1121 10:45:05.774082 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63a0e96b-d215-4176-8d40-d32016e09f67" path="/var/lib/kubelet/pods/63a0e96b-d215-4176-8d40-d32016e09f67/volumes" Nov 21 10:45:06 crc kubenswrapper[4972]: I1121 10:45:06.649441 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-57ckb"] Nov 21 10:45:06 crc kubenswrapper[4972]: I1121 10:45:06.650178 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-57ckb" podUID="7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4" containerName="registry-server" containerID="cri-o://f45923dd8798a6d8a70ff5bd375ea11db37c2140262155b4bfb8e30844499918" gracePeriod=2 Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.069585 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-57ckb" Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.173366 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4-catalog-content\") pod \"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4\" (UID: \"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4\") " Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.173536 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4-utilities\") pod \"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4\" (UID: \"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4\") " Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.173582 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffsg4\" (UniqueName: \"kubernetes.io/projected/7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4-kube-api-access-ffsg4\") pod \"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4\" (UID: \"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4\") " Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.174519 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4-utilities" (OuterVolumeSpecName: "utilities") pod "7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4" (UID: "7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.180177 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4-kube-api-access-ffsg4" (OuterVolumeSpecName: "kube-api-access-ffsg4") pod "7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4" (UID: "7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4"). InnerVolumeSpecName "kube-api-access-ffsg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.257067 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4" (UID: "7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.275140 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.275167 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffsg4\" (UniqueName: \"kubernetes.io/projected/7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4-kube-api-access-ffsg4\") on node \"crc\" DevicePath \"\"" Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.275177 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.587747 4972 generic.go:334] "Generic (PLEG): container finished" podID="7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4" containerID="f45923dd8798a6d8a70ff5bd375ea11db37c2140262155b4bfb8e30844499918" exitCode=0 Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.587797 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57ckb" event={"ID":"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4","Type":"ContainerDied","Data":"f45923dd8798a6d8a70ff5bd375ea11db37c2140262155b4bfb8e30844499918"} Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.587847 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57ckb" event={"ID":"7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4","Type":"ContainerDied","Data":"52e904e0383f19b78d180923c4cbc1d51b7ce98f5ec716a597f6218fd1b7c756"} Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.587869 4972 scope.go:117] "RemoveContainer" containerID="f45923dd8798a6d8a70ff5bd375ea11db37c2140262155b4bfb8e30844499918" Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.587925 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-57ckb" Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.605134 4972 scope.go:117] "RemoveContainer" containerID="a63a05a11d9e2fe393c49d972cd15ec5844a8e86cfde2d03da9070229e5a50b4" Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.638039 4972 scope.go:117] "RemoveContainer" containerID="049f1a384bb1d3ba8a25775234a93dc2b1c29187283a71b21a5fe46d4b41f2c2" Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.661891 4972 scope.go:117] "RemoveContainer" containerID="f45923dd8798a6d8a70ff5bd375ea11db37c2140262155b4bfb8e30844499918" Nov 21 10:45:07 crc kubenswrapper[4972]: E1121 10:45:07.666324 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f45923dd8798a6d8a70ff5bd375ea11db37c2140262155b4bfb8e30844499918\": container with ID starting with f45923dd8798a6d8a70ff5bd375ea11db37c2140262155b4bfb8e30844499918 not found: ID does not exist" containerID="f45923dd8798a6d8a70ff5bd375ea11db37c2140262155b4bfb8e30844499918" Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.666387 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f45923dd8798a6d8a70ff5bd375ea11db37c2140262155b4bfb8e30844499918"} err="failed to get container status \"f45923dd8798a6d8a70ff5bd375ea11db37c2140262155b4bfb8e30844499918\": rpc error: code = NotFound desc = could not find container \"f45923dd8798a6d8a70ff5bd375ea11db37c2140262155b4bfb8e30844499918\": container with ID starting with f45923dd8798a6d8a70ff5bd375ea11db37c2140262155b4bfb8e30844499918 not found: ID does not exist" Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.666416 4972 scope.go:117] "RemoveContainer" containerID="a63a05a11d9e2fe393c49d972cd15ec5844a8e86cfde2d03da9070229e5a50b4" Nov 21 10:45:07 crc kubenswrapper[4972]: E1121 10:45:07.667039 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a63a05a11d9e2fe393c49d972cd15ec5844a8e86cfde2d03da9070229e5a50b4\": container with ID starting with a63a05a11d9e2fe393c49d972cd15ec5844a8e86cfde2d03da9070229e5a50b4 not found: ID does not exist" containerID="a63a05a11d9e2fe393c49d972cd15ec5844a8e86cfde2d03da9070229e5a50b4" Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.667069 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a63a05a11d9e2fe393c49d972cd15ec5844a8e86cfde2d03da9070229e5a50b4"} err="failed to get container status \"a63a05a11d9e2fe393c49d972cd15ec5844a8e86cfde2d03da9070229e5a50b4\": rpc error: code = NotFound desc = could not find container \"a63a05a11d9e2fe393c49d972cd15ec5844a8e86cfde2d03da9070229e5a50b4\": container with ID starting with a63a05a11d9e2fe393c49d972cd15ec5844a8e86cfde2d03da9070229e5a50b4 not found: ID does not exist" Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.667089 4972 scope.go:117] "RemoveContainer" containerID="049f1a384bb1d3ba8a25775234a93dc2b1c29187283a71b21a5fe46d4b41f2c2" Nov 21 10:45:07 crc kubenswrapper[4972]: E1121 10:45:07.667540 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"049f1a384bb1d3ba8a25775234a93dc2b1c29187283a71b21a5fe46d4b41f2c2\": container with ID starting with 049f1a384bb1d3ba8a25775234a93dc2b1c29187283a71b21a5fe46d4b41f2c2 not found: ID does not exist" containerID="049f1a384bb1d3ba8a25775234a93dc2b1c29187283a71b21a5fe46d4b41f2c2" 
Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.667572 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"049f1a384bb1d3ba8a25775234a93dc2b1c29187283a71b21a5fe46d4b41f2c2"} err="failed to get container status \"049f1a384bb1d3ba8a25775234a93dc2b1c29187283a71b21a5fe46d4b41f2c2\": rpc error: code = NotFound desc = could not find container \"049f1a384bb1d3ba8a25775234a93dc2b1c29187283a71b21a5fe46d4b41f2c2\": container with ID starting with 049f1a384bb1d3ba8a25775234a93dc2b1c29187283a71b21a5fe46d4b41f2c2 not found: ID does not exist" Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.674218 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-57ckb"] Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.680856 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-57ckb"] Nov 21 10:45:07 crc kubenswrapper[4972]: I1121 10:45:07.774625 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4" path="/var/lib/kubelet/pods/7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4/volumes" Nov 21 10:45:14 crc kubenswrapper[4972]: I1121 10:45:14.758809 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:45:14 crc kubenswrapper[4972]: E1121 10:45:14.759617 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:45:24 crc kubenswrapper[4972]: I1121 10:45:24.880742 4972 scope.go:117] "RemoveContainer" containerID="19f19f652cafc2db3eca0d7c69e847fb2e27cb86eea736cbc5e10c4405d79d8a" Nov 21 10:45:25 crc kubenswrapper[4972]: I1121 10:45:25.769495 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:45:25 crc kubenswrapper[4972]: E1121 10:45:25.770049 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:45:36 crc kubenswrapper[4972]: I1121 10:45:36.760066 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:45:36 crc kubenswrapper[4972]: E1121 10:45:36.761183 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:45:51 crc kubenswrapper[4972]: I1121 10:45:51.759779 4972 scope.go:117] "RemoveContainer" 
containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:45:51 crc kubenswrapper[4972]: E1121 10:45:51.760799 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:46:03 crc kubenswrapper[4972]: I1121 10:46:03.759984 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:46:03 crc kubenswrapper[4972]: E1121 10:46:03.761140 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:46:15 crc kubenswrapper[4972]: I1121 10:46:15.784919 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:46:15 crc kubenswrapper[4972]: E1121 10:46:15.786065 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:46:27 crc kubenswrapper[4972]: I1121 10:46:27.759478 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:46:27 crc kubenswrapper[4972]: E1121 10:46:27.760521 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:46:38 crc kubenswrapper[4972]: I1121 10:46:38.760267 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:46:38 crc kubenswrapper[4972]: E1121 10:46:38.761636 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:46:50 crc kubenswrapper[4972]: I1121 10:46:50.759949 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:46:50 crc kubenswrapper[4972]: E1121 10:46:50.760919 4972 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:47:01 crc kubenswrapper[4972]: I1121 10:47:01.759539 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:47:02 crc kubenswrapper[4972]: I1121 10:47:02.695340 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"675753297b1dc37f3fe500047774435f5bbeab2abab2b43b4148e1949121b0f3"} Nov 21 10:49:26 crc kubenswrapper[4972]: I1121 10:49:26.179339 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:49:26 crc kubenswrapper[4972]: I1121 10:49:26.181577 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:49:56 crc kubenswrapper[4972]: I1121 10:49:56.179616 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:49:56 crc kubenswrapper[4972]: I1121 10:49:56.180394 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:50:26 crc kubenswrapper[4972]: I1121 10:50:26.179534 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:50:26 crc kubenswrapper[4972]: I1121 10:50:26.180398 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:50:26 crc kubenswrapper[4972]: I1121 10:50:26.180463 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 10:50:26 crc kubenswrapper[4972]: I1121 10:50:26.181231 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"675753297b1dc37f3fe500047774435f5bbeab2abab2b43b4148e1949121b0f3"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 10:50:26 crc kubenswrapper[4972]: I1121 10:50:26.181334 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://675753297b1dc37f3fe500047774435f5bbeab2abab2b43b4148e1949121b0f3" gracePeriod=600 Nov 21 10:50:27 crc kubenswrapper[4972]: I1121 10:50:27.260176 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="675753297b1dc37f3fe500047774435f5bbeab2abab2b43b4148e1949121b0f3" exitCode=0 Nov 21 10:50:27 crc kubenswrapper[4972]: I1121 10:50:27.260230 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"675753297b1dc37f3fe500047774435f5bbeab2abab2b43b4148e1949121b0f3"} Nov 21 10:50:27 crc kubenswrapper[4972]: I1121 10:50:27.260679 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b"} Nov 21 10:50:27 crc kubenswrapper[4972]: I1121 10:50:27.260709 4972 scope.go:117] "RemoveContainer" containerID="04057cf1230747987f6798a7e85c82c0ba977b3f03324cb0223aeabdcf59c7dc" Nov 21 10:52:26 crc kubenswrapper[4972]: I1121 10:52:26.178908 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:52:26 crc kubenswrapper[4972]: I1121 10:52:26.180016 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:52:56 crc kubenswrapper[4972]: I1121 10:52:56.179551 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:52:56 crc kubenswrapper[4972]: I1121 10:52:56.180426 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:53:26 crc kubenswrapper[4972]: I1121 10:53:26.179319 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 10:53:26 crc kubenswrapper[4972]: I1121 10:53:26.180266 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 10:53:26 crc kubenswrapper[4972]: I1121 10:53:26.180362 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 10:53:26 crc kubenswrapper[4972]: I1121 10:53:26.181236 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 10:53:26 crc kubenswrapper[4972]: I1121 10:53:26.181357 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" gracePeriod=600 Nov 21 10:53:26 crc kubenswrapper[4972]: E1121 10:53:26.313714 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:53:26 crc kubenswrapper[4972]: I1121 10:53:26.942200 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" exitCode=0 Nov 21 10:53:26 crc kubenswrapper[4972]: I1121 10:53:26.942703 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b"} Nov 21 10:53:26 crc kubenswrapper[4972]: I1121 10:53:26.942767 4972 scope.go:117] "RemoveContainer" containerID="675753297b1dc37f3fe500047774435f5bbeab2abab2b43b4148e1949121b0f3" Nov 21 10:53:26 crc kubenswrapper[4972]: I1121 10:53:26.943697 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:53:26 crc kubenswrapper[4972]: E1121 10:53:26.944230 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:53:38 crc kubenswrapper[4972]: I1121 10:53:38.759717 4972 scope.go:117] "RemoveContainer" 
containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:53:38 crc kubenswrapper[4972]: E1121 10:53:38.761126 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:53:49 crc kubenswrapper[4972]: I1121 10:53:49.760206 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:53:49 crc kubenswrapper[4972]: E1121 10:53:49.761397 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:54:01 crc kubenswrapper[4972]: I1121 10:54:01.760522 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:54:01 crc kubenswrapper[4972]: E1121 10:54:01.761694 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:54:15 crc kubenswrapper[4972]: I1121 10:54:15.772320 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:54:15 crc kubenswrapper[4972]: E1121 10:54:15.773713 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:54:27 crc kubenswrapper[4972]: I1121 10:54:27.759193 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:54:27 crc kubenswrapper[4972]: E1121 10:54:27.760365 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:54:38 crc kubenswrapper[4972]: I1121 10:54:38.760142 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:54:38 crc kubenswrapper[4972]: E1121 10:54:38.761225 4972 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:54:49 crc kubenswrapper[4972]: I1121 10:54:49.760292 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:54:49 crc kubenswrapper[4972]: E1121 10:54:49.761267 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:55:00 crc kubenswrapper[4972]: I1121 10:55:00.759773 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:55:00 crc kubenswrapper[4972]: E1121 10:55:00.761264 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:55:11 crc kubenswrapper[4972]: I1121 10:55:11.760068 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:55:11 crc kubenswrapper[4972]: E1121 10:55:11.761248 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:55:22 crc kubenswrapper[4972]: I1121 10:55:22.759535 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:55:22 crc kubenswrapper[4972]: E1121 10:55:22.760738 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.176749 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qk5x9"] Nov 21 10:55:29 crc kubenswrapper[4972]: E1121 10:55:29.177896 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="561f007e-bddb-4c6f-83a3-d9052a392b37" containerName="extract-utilities" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.177920 4972 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="561f007e-bddb-4c6f-83a3-d9052a392b37" containerName="extract-utilities" Nov 21 10:55:29 crc kubenswrapper[4972]: E1121 10:55:29.177945 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4" containerName="extract-utilities" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.177957 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4" containerName="extract-utilities" Nov 21 10:55:29 crc kubenswrapper[4972]: E1121 10:55:29.177985 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4" containerName="extract-content" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.177997 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4" containerName="extract-content" Nov 21 10:55:29 crc kubenswrapper[4972]: E1121 10:55:29.178019 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4" containerName="registry-server" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.178031 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4" containerName="registry-server" Nov 21 10:55:29 crc kubenswrapper[4972]: E1121 10:55:29.178050 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df320d5f-4313-4611-a7e6-4ec305b881d4" containerName="collect-profiles" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.178063 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="df320d5f-4313-4611-a7e6-4ec305b881d4" containerName="collect-profiles" Nov 21 10:55:29 crc kubenswrapper[4972]: E1121 10:55:29.178105 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="561f007e-bddb-4c6f-83a3-d9052a392b37" containerName="registry-server" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.178121 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="561f007e-bddb-4c6f-83a3-d9052a392b37" containerName="registry-server" Nov 21 10:55:29 crc kubenswrapper[4972]: E1121 10:55:29.178150 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="561f007e-bddb-4c6f-83a3-d9052a392b37" containerName="extract-content" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.178168 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="561f007e-bddb-4c6f-83a3-d9052a392b37" containerName="extract-content" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.178477 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="561f007e-bddb-4c6f-83a3-d9052a392b37" containerName="registry-server" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.178509 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f918b7f-03f4-4ab8-b5b9-97a208ef5dd4" containerName="registry-server" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.178537 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="df320d5f-4313-4611-a7e6-4ec305b881d4" containerName="collect-profiles" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.183673 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qk5x9" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.183970 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qk5x9"] Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.226897 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75hk2\" (UniqueName: \"kubernetes.io/projected/d29278ff-4fc1-4f4d-aeac-6903aef5c675-kube-api-access-75hk2\") pod \"community-operators-qk5x9\" (UID: \"d29278ff-4fc1-4f4d-aeac-6903aef5c675\") " pod="openshift-marketplace/community-operators-qk5x9" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.226952 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d29278ff-4fc1-4f4d-aeac-6903aef5c675-utilities\") pod \"community-operators-qk5x9\" (UID: \"d29278ff-4fc1-4f4d-aeac-6903aef5c675\") " pod="openshift-marketplace/community-operators-qk5x9" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.226989 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d29278ff-4fc1-4f4d-aeac-6903aef5c675-catalog-content\") pod \"community-operators-qk5x9\" (UID: \"d29278ff-4fc1-4f4d-aeac-6903aef5c675\") " pod="openshift-marketplace/community-operators-qk5x9" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.328036 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75hk2\" (UniqueName: \"kubernetes.io/projected/d29278ff-4fc1-4f4d-aeac-6903aef5c675-kube-api-access-75hk2\") pod \"community-operators-qk5x9\" (UID: \"d29278ff-4fc1-4f4d-aeac-6903aef5c675\") " pod="openshift-marketplace/community-operators-qk5x9" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.328096 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d29278ff-4fc1-4f4d-aeac-6903aef5c675-utilities\") pod \"community-operators-qk5x9\" (UID: \"d29278ff-4fc1-4f4d-aeac-6903aef5c675\") " pod="openshift-marketplace/community-operators-qk5x9" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.328158 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d29278ff-4fc1-4f4d-aeac-6903aef5c675-catalog-content\") pod \"community-operators-qk5x9\" (UID: \"d29278ff-4fc1-4f4d-aeac-6903aef5c675\") " pod="openshift-marketplace/community-operators-qk5x9" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.328720 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d29278ff-4fc1-4f4d-aeac-6903aef5c675-catalog-content\") pod \"community-operators-qk5x9\" (UID: \"d29278ff-4fc1-4f4d-aeac-6903aef5c675\") " pod="openshift-marketplace/community-operators-qk5x9" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.328821 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d29278ff-4fc1-4f4d-aeac-6903aef5c675-utilities\") pod \"community-operators-qk5x9\" (UID: \"d29278ff-4fc1-4f4d-aeac-6903aef5c675\") " pod="openshift-marketplace/community-operators-qk5x9" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.544945 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-75hk2\" (UniqueName: \"kubernetes.io/projected/d29278ff-4fc1-4f4d-aeac-6903aef5c675-kube-api-access-75hk2\") pod \"community-operators-qk5x9\" (UID: \"d29278ff-4fc1-4f4d-aeac-6903aef5c675\") " pod="openshift-marketplace/community-operators-qk5x9" Nov 21 10:55:29 crc kubenswrapper[4972]: I1121 10:55:29.817466 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qk5x9" Nov 21 10:55:30 crc kubenswrapper[4972]: I1121 10:55:30.337531 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qk5x9"] Nov 21 10:55:31 crc kubenswrapper[4972]: I1121 10:55:31.192951 4972 generic.go:334] "Generic (PLEG): container finished" podID="d29278ff-4fc1-4f4d-aeac-6903aef5c675" containerID="bb4bcbf5625688ca932cfbb190bbc74f0bd93aa1a3fe52f7e4d2890385ce318e" exitCode=0 Nov 21 10:55:31 crc kubenswrapper[4972]: I1121 10:55:31.193035 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qk5x9" event={"ID":"d29278ff-4fc1-4f4d-aeac-6903aef5c675","Type":"ContainerDied","Data":"bb4bcbf5625688ca932cfbb190bbc74f0bd93aa1a3fe52f7e4d2890385ce318e"} Nov 21 10:55:31 crc kubenswrapper[4972]: I1121 10:55:31.193421 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qk5x9" event={"ID":"d29278ff-4fc1-4f4d-aeac-6903aef5c675","Type":"ContainerStarted","Data":"f80f131f9ca50006a0f38068ac9c45e447bd9f096725b38d51e83f24f6090faf"} Nov 21 10:55:31 crc kubenswrapper[4972]: I1121 10:55:31.197604 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 10:55:33 crc kubenswrapper[4972]: I1121 10:55:33.223092 4972 generic.go:334] "Generic (PLEG): container finished" podID="d29278ff-4fc1-4f4d-aeac-6903aef5c675" containerID="7880ba82d498d07ef81ae7072067f3ff505cb4c1126f10fe43ef96b8ea39a192" exitCode=0 Nov 21 10:55:33 crc kubenswrapper[4972]: I1121 10:55:33.224018 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qk5x9" event={"ID":"d29278ff-4fc1-4f4d-aeac-6903aef5c675","Type":"ContainerDied","Data":"7880ba82d498d07ef81ae7072067f3ff505cb4c1126f10fe43ef96b8ea39a192"} Nov 21 10:55:34 crc kubenswrapper[4972]: I1121 10:55:34.237664 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qk5x9" event={"ID":"d29278ff-4fc1-4f4d-aeac-6903aef5c675","Type":"ContainerStarted","Data":"71e8f33954fd8dff496ea05fb3551db0514b3ca9bff95b541186e2bd24d9f4bd"} Nov 21 10:55:34 crc kubenswrapper[4972]: I1121 10:55:34.269476 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qk5x9" podStartSLOduration=2.847509331 podStartE2EDuration="5.269448331s" podCreationTimestamp="2025-11-21 10:55:29 +0000 UTC" firstStartedPulling="2025-11-21 10:55:31.197140092 +0000 UTC m=+4476.306282630" lastFinishedPulling="2025-11-21 10:55:33.619079122 +0000 UTC m=+4478.728221630" observedRunningTime="2025-11-21 10:55:34.268355842 +0000 UTC m=+4479.377498370" watchObservedRunningTime="2025-11-21 10:55:34.269448331 +0000 UTC m=+4479.378590839" Nov 21 10:55:36 crc kubenswrapper[4972]: I1121 10:55:36.761071 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:55:36 crc kubenswrapper[4972]: E1121 10:55:36.762178 4972 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:55:39 crc kubenswrapper[4972]: I1121 10:55:39.818972 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qk5x9" Nov 21 10:55:39 crc kubenswrapper[4972]: I1121 10:55:39.819376 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qk5x9" Nov 21 10:55:39 crc kubenswrapper[4972]: I1121 10:55:39.860602 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qk5x9" Nov 21 10:55:40 crc kubenswrapper[4972]: I1121 10:55:40.385473 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qk5x9" Nov 21 10:55:40 crc kubenswrapper[4972]: I1121 10:55:40.447420 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qk5x9"] Nov 21 10:55:42 crc kubenswrapper[4972]: I1121 10:55:42.330308 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qk5x9" podUID="d29278ff-4fc1-4f4d-aeac-6903aef5c675" containerName="registry-server" containerID="cri-o://71e8f33954fd8dff496ea05fb3551db0514b3ca9bff95b541186e2bd24d9f4bd" gracePeriod=2 Nov 21 10:55:42 crc kubenswrapper[4972]: I1121 10:55:42.503086 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kg5gq"] Nov 21 10:55:42 crc kubenswrapper[4972]: I1121 10:55:42.505205 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kg5gq" Nov 21 10:55:42 crc kubenswrapper[4972]: I1121 10:55:42.514031 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kg5gq"] Nov 21 10:55:42 crc kubenswrapper[4972]: I1121 10:55:42.653581 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxppg\" (UniqueName: \"kubernetes.io/projected/bdd8de76-e082-454c-b49b-fe425adb2b03-kube-api-access-cxppg\") pod \"redhat-operators-kg5gq\" (UID: \"bdd8de76-e082-454c-b49b-fe425adb2b03\") " pod="openshift-marketplace/redhat-operators-kg5gq" Nov 21 10:55:42 crc kubenswrapper[4972]: I1121 10:55:42.653647 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdd8de76-e082-454c-b49b-fe425adb2b03-utilities\") pod \"redhat-operators-kg5gq\" (UID: \"bdd8de76-e082-454c-b49b-fe425adb2b03\") " pod="openshift-marketplace/redhat-operators-kg5gq" Nov 21 10:55:42 crc kubenswrapper[4972]: I1121 10:55:42.653729 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdd8de76-e082-454c-b49b-fe425adb2b03-catalog-content\") pod \"redhat-operators-kg5gq\" (UID: \"bdd8de76-e082-454c-b49b-fe425adb2b03\") " pod="openshift-marketplace/redhat-operators-kg5gq" Nov 21 10:55:42 crc kubenswrapper[4972]: I1121 10:55:42.755133 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdd8de76-e082-454c-b49b-fe425adb2b03-catalog-content\") pod \"redhat-operators-kg5gq\" (UID: \"bdd8de76-e082-454c-b49b-fe425adb2b03\") " pod="openshift-marketplace/redhat-operators-kg5gq" Nov 21 10:55:42 crc kubenswrapper[4972]: I1121 10:55:42.755256 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxppg\" (UniqueName: \"kubernetes.io/projected/bdd8de76-e082-454c-b49b-fe425adb2b03-kube-api-access-cxppg\") pod \"redhat-operators-kg5gq\" (UID: \"bdd8de76-e082-454c-b49b-fe425adb2b03\") " pod="openshift-marketplace/redhat-operators-kg5gq" Nov 21 10:55:42 crc kubenswrapper[4972]: I1121 10:55:42.755288 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdd8de76-e082-454c-b49b-fe425adb2b03-utilities\") pod \"redhat-operators-kg5gq\" (UID: \"bdd8de76-e082-454c-b49b-fe425adb2b03\") " pod="openshift-marketplace/redhat-operators-kg5gq" Nov 21 10:55:42 crc kubenswrapper[4972]: I1121 10:55:42.755764 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdd8de76-e082-454c-b49b-fe425adb2b03-catalog-content\") pod \"redhat-operators-kg5gq\" (UID: \"bdd8de76-e082-454c-b49b-fe425adb2b03\") " pod="openshift-marketplace/redhat-operators-kg5gq" Nov 21 10:55:42 crc kubenswrapper[4972]: I1121 10:55:42.755797 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdd8de76-e082-454c-b49b-fe425adb2b03-utilities\") pod \"redhat-operators-kg5gq\" (UID: \"bdd8de76-e082-454c-b49b-fe425adb2b03\") " pod="openshift-marketplace/redhat-operators-kg5gq" Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.045387 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cxppg\" (UniqueName: \"kubernetes.io/projected/bdd8de76-e082-454c-b49b-fe425adb2b03-kube-api-access-cxppg\") pod \"redhat-operators-kg5gq\" (UID: \"bdd8de76-e082-454c-b49b-fe425adb2b03\") " pod="openshift-marketplace/redhat-operators-kg5gq" Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.095470 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qk5x9" Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.123117 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kg5gq" Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.161418 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d29278ff-4fc1-4f4d-aeac-6903aef5c675-catalog-content\") pod \"d29278ff-4fc1-4f4d-aeac-6903aef5c675\" (UID: \"d29278ff-4fc1-4f4d-aeac-6903aef5c675\") " Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.161676 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75hk2\" (UniqueName: \"kubernetes.io/projected/d29278ff-4fc1-4f4d-aeac-6903aef5c675-kube-api-access-75hk2\") pod \"d29278ff-4fc1-4f4d-aeac-6903aef5c675\" (UID: \"d29278ff-4fc1-4f4d-aeac-6903aef5c675\") " Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.168048 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d29278ff-4fc1-4f4d-aeac-6903aef5c675-kube-api-access-75hk2" (OuterVolumeSpecName: "kube-api-access-75hk2") pod "d29278ff-4fc1-4f4d-aeac-6903aef5c675" (UID: "d29278ff-4fc1-4f4d-aeac-6903aef5c675"). InnerVolumeSpecName "kube-api-access-75hk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.263041 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d29278ff-4fc1-4f4d-aeac-6903aef5c675-utilities\") pod \"d29278ff-4fc1-4f4d-aeac-6903aef5c675\" (UID: \"d29278ff-4fc1-4f4d-aeac-6903aef5c675\") " Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.263691 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75hk2\" (UniqueName: \"kubernetes.io/projected/d29278ff-4fc1-4f4d-aeac-6903aef5c675-kube-api-access-75hk2\") on node \"crc\" DevicePath \"\"" Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.267480 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d29278ff-4fc1-4f4d-aeac-6903aef5c675-utilities" (OuterVolumeSpecName: "utilities") pod "d29278ff-4fc1-4f4d-aeac-6903aef5c675" (UID: "d29278ff-4fc1-4f4d-aeac-6903aef5c675"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.340016 4972 generic.go:334] "Generic (PLEG): container finished" podID="d29278ff-4fc1-4f4d-aeac-6903aef5c675" containerID="71e8f33954fd8dff496ea05fb3551db0514b3ca9bff95b541186e2bd24d9f4bd" exitCode=0 Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.340056 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qk5x9" event={"ID":"d29278ff-4fc1-4f4d-aeac-6903aef5c675","Type":"ContainerDied","Data":"71e8f33954fd8dff496ea05fb3551db0514b3ca9bff95b541186e2bd24d9f4bd"} Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.340089 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qk5x9" event={"ID":"d29278ff-4fc1-4f4d-aeac-6903aef5c675","Type":"ContainerDied","Data":"f80f131f9ca50006a0f38068ac9c45e447bd9f096725b38d51e83f24f6090faf"} Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.340110 4972 scope.go:117] "RemoveContainer" containerID="71e8f33954fd8dff496ea05fb3551db0514b3ca9bff95b541186e2bd24d9f4bd" Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.340150 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qk5x9" Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.361766 4972 scope.go:117] "RemoveContainer" containerID="7880ba82d498d07ef81ae7072067f3ff505cb4c1126f10fe43ef96b8ea39a192" Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.366582 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d29278ff-4fc1-4f4d-aeac-6903aef5c675-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.391129 4972 scope.go:117] "RemoveContainer" containerID="bb4bcbf5625688ca932cfbb190bbc74f0bd93aa1a3fe52f7e4d2890385ce318e" Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.415120 4972 scope.go:117] "RemoveContainer" containerID="71e8f33954fd8dff496ea05fb3551db0514b3ca9bff95b541186e2bd24d9f4bd" Nov 21 10:55:43 crc kubenswrapper[4972]: E1121 10:55:43.415555 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71e8f33954fd8dff496ea05fb3551db0514b3ca9bff95b541186e2bd24d9f4bd\": container with ID starting with 71e8f33954fd8dff496ea05fb3551db0514b3ca9bff95b541186e2bd24d9f4bd not found: ID does not exist" containerID="71e8f33954fd8dff496ea05fb3551db0514b3ca9bff95b541186e2bd24d9f4bd" Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.415610 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e8f33954fd8dff496ea05fb3551db0514b3ca9bff95b541186e2bd24d9f4bd"} err="failed to get container status \"71e8f33954fd8dff496ea05fb3551db0514b3ca9bff95b541186e2bd24d9f4bd\": rpc error: code = NotFound desc = could not find container \"71e8f33954fd8dff496ea05fb3551db0514b3ca9bff95b541186e2bd24d9f4bd\": container with ID starting with 71e8f33954fd8dff496ea05fb3551db0514b3ca9bff95b541186e2bd24d9f4bd not found: ID does not exist" Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.415646 4972 scope.go:117] "RemoveContainer" containerID="7880ba82d498d07ef81ae7072067f3ff505cb4c1126f10fe43ef96b8ea39a192" Nov 21 10:55:43 crc kubenswrapper[4972]: E1121 10:55:43.416142 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7880ba82d498d07ef81ae7072067f3ff505cb4c1126f10fe43ef96b8ea39a192\": container with ID starting with 7880ba82d498d07ef81ae7072067f3ff505cb4c1126f10fe43ef96b8ea39a192 not found: ID does not exist" containerID="7880ba82d498d07ef81ae7072067f3ff505cb4c1126f10fe43ef96b8ea39a192" Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.416177 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7880ba82d498d07ef81ae7072067f3ff505cb4c1126f10fe43ef96b8ea39a192"} err="failed to get container status \"7880ba82d498d07ef81ae7072067f3ff505cb4c1126f10fe43ef96b8ea39a192\": rpc error: code = NotFound desc = could not find container \"7880ba82d498d07ef81ae7072067f3ff505cb4c1126f10fe43ef96b8ea39a192\": container with ID starting with 7880ba82d498d07ef81ae7072067f3ff505cb4c1126f10fe43ef96b8ea39a192 not found: ID does not exist" Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.416206 4972 scope.go:117] "RemoveContainer" containerID="bb4bcbf5625688ca932cfbb190bbc74f0bd93aa1a3fe52f7e4d2890385ce318e" Nov 21 10:55:43 crc kubenswrapper[4972]: E1121 10:55:43.416420 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb4bcbf5625688ca932cfbb190bbc74f0bd93aa1a3fe52f7e4d2890385ce318e\": container with ID starting with bb4bcbf5625688ca932cfbb190bbc74f0bd93aa1a3fe52f7e4d2890385ce318e not found: ID does not exist" containerID="bb4bcbf5625688ca932cfbb190bbc74f0bd93aa1a3fe52f7e4d2890385ce318e" Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.416446 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb4bcbf5625688ca932cfbb190bbc74f0bd93aa1a3fe52f7e4d2890385ce318e"} err="failed to get container status \"bb4bcbf5625688ca932cfbb190bbc74f0bd93aa1a3fe52f7e4d2890385ce318e\": rpc error: code = NotFound desc = could not find container \"bb4bcbf5625688ca932cfbb190bbc74f0bd93aa1a3fe52f7e4d2890385ce318e\": container with ID starting with bb4bcbf5625688ca932cfbb190bbc74f0bd93aa1a3fe52f7e4d2890385ce318e not found: ID does not exist" Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.681266 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kg5gq"] Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.691287 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d29278ff-4fc1-4f4d-aeac-6903aef5c675-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d29278ff-4fc1-4f4d-aeac-6903aef5c675" (UID: "d29278ff-4fc1-4f4d-aeac-6903aef5c675"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.771367 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d29278ff-4fc1-4f4d-aeac-6903aef5c675-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 10:55:43 crc kubenswrapper[4972]: I1121 10:55:43.992667 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qk5x9"] Nov 21 10:55:44 crc kubenswrapper[4972]: I1121 10:55:44.001311 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qk5x9"] Nov 21 10:55:44 crc kubenswrapper[4972]: I1121 10:55:44.351605 4972 generic.go:334] "Generic (PLEG): container finished" podID="bdd8de76-e082-454c-b49b-fe425adb2b03" containerID="8ed9ed0799b93ad517b1cb4cbc425f2e0756ee8c4e6a30c73d48750daa998129" exitCode=0 Nov 21 10:55:44 crc kubenswrapper[4972]: I1121 10:55:44.351691 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kg5gq" event={"ID":"bdd8de76-e082-454c-b49b-fe425adb2b03","Type":"ContainerDied","Data":"8ed9ed0799b93ad517b1cb4cbc425f2e0756ee8c4e6a30c73d48750daa998129"} Nov 21 10:55:44 crc kubenswrapper[4972]: I1121 10:55:44.351725 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kg5gq" event={"ID":"bdd8de76-e082-454c-b49b-fe425adb2b03","Type":"ContainerStarted","Data":"1e9f260c57a9f75e9d1701fb51d23582b3a353ab8fe3b7a81b72ced863561bba"} Nov 21 10:55:45 crc kubenswrapper[4972]: I1121 10:55:45.778992 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d29278ff-4fc1-4f4d-aeac-6903aef5c675" path="/var/lib/kubelet/pods/d29278ff-4fc1-4f4d-aeac-6903aef5c675/volumes" Nov 21 10:55:46 crc kubenswrapper[4972]: I1121 10:55:46.375590 4972 generic.go:334] "Generic (PLEG): container finished" podID="bdd8de76-e082-454c-b49b-fe425adb2b03" containerID="1b70a106d4d16cd7d1706fc5f2033fb68b358c42f47f017a49aa0efa94a2fe1c" exitCode=0 Nov 21 10:55:46 crc kubenswrapper[4972]: I1121 10:55:46.375633 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kg5gq" event={"ID":"bdd8de76-e082-454c-b49b-fe425adb2b03","Type":"ContainerDied","Data":"1b70a106d4d16cd7d1706fc5f2033fb68b358c42f47f017a49aa0efa94a2fe1c"} Nov 21 10:55:47 crc kubenswrapper[4972]: I1121 10:55:47.760476 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:55:47 crc kubenswrapper[4972]: E1121 10:55:47.761378 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:55:48 crc kubenswrapper[4972]: I1121 10:55:48.389172 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kg5gq" event={"ID":"bdd8de76-e082-454c-b49b-fe425adb2b03","Type":"ContainerStarted","Data":"f5f58092ec4664286af5719166491866934331a42b855dddf2ee974c67d1cf64"} Nov 21 10:55:48 crc kubenswrapper[4972]: I1121 10:55:48.406298 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-kg5gq" podStartSLOduration=2.976280961 podStartE2EDuration="6.406279628s" podCreationTimestamp="2025-11-21 10:55:42 +0000 UTC" firstStartedPulling="2025-11-21 10:55:44.354252066 +0000 UTC m=+4489.463394564" lastFinishedPulling="2025-11-21 10:55:47.784250693 +0000 UTC m=+4492.893393231" observedRunningTime="2025-11-21 10:55:48.406113153 +0000 UTC m=+4493.515255681" watchObservedRunningTime="2025-11-21 10:55:48.406279628 +0000 UTC m=+4493.515422126" Nov 21 10:55:49 crc kubenswrapper[4972]: I1121 10:55:49.911867 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4k9ng"] Nov 21 10:55:49 crc kubenswrapper[4972]: E1121 10:55:49.912596 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d29278ff-4fc1-4f4d-aeac-6903aef5c675" containerName="extract-content" Nov 21 10:55:49 crc kubenswrapper[4972]: I1121 10:55:49.912632 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="d29278ff-4fc1-4f4d-aeac-6903aef5c675" containerName="extract-content" Nov 21 10:55:49 crc kubenswrapper[4972]: E1121 10:55:49.912675 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d29278ff-4fc1-4f4d-aeac-6903aef5c675" containerName="extract-utilities" Nov 21 10:55:49 crc kubenswrapper[4972]: I1121 10:55:49.912694 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="d29278ff-4fc1-4f4d-aeac-6903aef5c675" containerName="extract-utilities" Nov 21 10:55:49 crc kubenswrapper[4972]: E1121 10:55:49.912743 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d29278ff-4fc1-4f4d-aeac-6903aef5c675" containerName="registry-server" Nov 21 10:55:49 crc kubenswrapper[4972]: I1121 10:55:49.912763 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="d29278ff-4fc1-4f4d-aeac-6903aef5c675" containerName="registry-server" Nov 21 10:55:49 crc kubenswrapper[4972]: I1121 10:55:49.913207 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="d29278ff-4fc1-4f4d-aeac-6903aef5c675" containerName="registry-server" Nov 21 10:55:49 crc kubenswrapper[4972]: I1121 10:55:49.915885 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4k9ng" Nov 21 10:55:49 crc kubenswrapper[4972]: I1121 10:55:49.936732 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4k9ng"] Nov 21 10:55:50 crc kubenswrapper[4972]: I1121 10:55:50.011296 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9abc2448-97d4-420f-af4c-6c2d490eb3a6-utilities\") pod \"redhat-marketplace-4k9ng\" (UID: \"9abc2448-97d4-420f-af4c-6c2d490eb3a6\") " pod="openshift-marketplace/redhat-marketplace-4k9ng" Nov 21 10:55:50 crc kubenswrapper[4972]: I1121 10:55:50.011421 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9abc2448-97d4-420f-af4c-6c2d490eb3a6-catalog-content\") pod \"redhat-marketplace-4k9ng\" (UID: \"9abc2448-97d4-420f-af4c-6c2d490eb3a6\") " pod="openshift-marketplace/redhat-marketplace-4k9ng" Nov 21 10:55:50 crc kubenswrapper[4972]: I1121 10:55:50.011459 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzpdr\" (UniqueName: \"kubernetes.io/projected/9abc2448-97d4-420f-af4c-6c2d490eb3a6-kube-api-access-dzpdr\") pod \"redhat-marketplace-4k9ng\" (UID: \"9abc2448-97d4-420f-af4c-6c2d490eb3a6\") " pod="openshift-marketplace/redhat-marketplace-4k9ng" Nov 21 10:55:50 crc kubenswrapper[4972]: I1121 10:55:50.112781 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9abc2448-97d4-420f-af4c-6c2d490eb3a6-catalog-content\") pod \"redhat-marketplace-4k9ng\" (UID: \"9abc2448-97d4-420f-af4c-6c2d490eb3a6\") " pod="openshift-marketplace/redhat-marketplace-4k9ng" Nov 21 10:55:50 crc kubenswrapper[4972]: I1121 10:55:50.112857 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpdr\" (UniqueName: \"kubernetes.io/projected/9abc2448-97d4-420f-af4c-6c2d490eb3a6-kube-api-access-dzpdr\") pod \"redhat-marketplace-4k9ng\" (UID: \"9abc2448-97d4-420f-af4c-6c2d490eb3a6\") " pod="openshift-marketplace/redhat-marketplace-4k9ng" Nov 21 10:55:50 crc kubenswrapper[4972]: I1121 10:55:50.112909 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9abc2448-97d4-420f-af4c-6c2d490eb3a6-utilities\") pod \"redhat-marketplace-4k9ng\" (UID: \"9abc2448-97d4-420f-af4c-6c2d490eb3a6\") " pod="openshift-marketplace/redhat-marketplace-4k9ng" Nov 21 10:55:50 crc kubenswrapper[4972]: I1121 10:55:50.113485 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9abc2448-97d4-420f-af4c-6c2d490eb3a6-catalog-content\") pod \"redhat-marketplace-4k9ng\" (UID: \"9abc2448-97d4-420f-af4c-6c2d490eb3a6\") " pod="openshift-marketplace/redhat-marketplace-4k9ng" Nov 21 10:55:50 crc kubenswrapper[4972]: I1121 10:55:50.113497 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9abc2448-97d4-420f-af4c-6c2d490eb3a6-utilities\") pod \"redhat-marketplace-4k9ng\" (UID: \"9abc2448-97d4-420f-af4c-6c2d490eb3a6\") " pod="openshift-marketplace/redhat-marketplace-4k9ng" Nov 21 10:55:50 crc kubenswrapper[4972]: I1121 10:55:50.135693 4972 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dzpdr\" (UniqueName: \"kubernetes.io/projected/9abc2448-97d4-420f-af4c-6c2d490eb3a6-kube-api-access-dzpdr\") pod \"redhat-marketplace-4k9ng\" (UID: \"9abc2448-97d4-420f-af4c-6c2d490eb3a6\") " pod="openshift-marketplace/redhat-marketplace-4k9ng" Nov 21 10:55:50 crc kubenswrapper[4972]: I1121 10:55:50.256546 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4k9ng" Nov 21 10:55:50 crc kubenswrapper[4972]: I1121 10:55:50.727671 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4k9ng"] Nov 21 10:55:50 crc kubenswrapper[4972]: W1121 10:55:50.728918 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9abc2448_97d4_420f_af4c_6c2d490eb3a6.slice/crio-414a6398e9331d604af0bfa11f4276fa7f42e29235a8bdb0ea8b57c9a17fd5ae WatchSource:0}: Error finding container 414a6398e9331d604af0bfa11f4276fa7f42e29235a8bdb0ea8b57c9a17fd5ae: Status 404 returned error can't find the container with id 414a6398e9331d604af0bfa11f4276fa7f42e29235a8bdb0ea8b57c9a17fd5ae Nov 21 10:55:51 crc kubenswrapper[4972]: I1121 10:55:51.420759 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4k9ng" event={"ID":"9abc2448-97d4-420f-af4c-6c2d490eb3a6","Type":"ContainerStarted","Data":"87cb990666c8538d2a74196f01a2280dca9000ee9d15d702d6b57ac23f27deb5"} Nov 21 10:55:51 crc kubenswrapper[4972]: I1121 10:55:51.420808 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4k9ng" event={"ID":"9abc2448-97d4-420f-af4c-6c2d490eb3a6","Type":"ContainerStarted","Data":"414a6398e9331d604af0bfa11f4276fa7f42e29235a8bdb0ea8b57c9a17fd5ae"} Nov 21 10:55:52 crc kubenswrapper[4972]: I1121 10:55:52.431730 4972 generic.go:334] "Generic (PLEG): container finished" podID="9abc2448-97d4-420f-af4c-6c2d490eb3a6" containerID="87cb990666c8538d2a74196f01a2280dca9000ee9d15d702d6b57ac23f27deb5" exitCode=0 Nov 21 10:55:52 crc kubenswrapper[4972]: I1121 10:55:52.431790 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4k9ng" event={"ID":"9abc2448-97d4-420f-af4c-6c2d490eb3a6","Type":"ContainerDied","Data":"87cb990666c8538d2a74196f01a2280dca9000ee9d15d702d6b57ac23f27deb5"} Nov 21 10:55:53 crc kubenswrapper[4972]: I1121 10:55:53.123660 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kg5gq" Nov 21 10:55:53 crc kubenswrapper[4972]: I1121 10:55:53.124160 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kg5gq" Nov 21 10:55:54 crc kubenswrapper[4972]: I1121 10:55:54.292808 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kg5gq" podUID="bdd8de76-e082-454c-b49b-fe425adb2b03" containerName="registry-server" probeResult="failure" output=< Nov 21 10:55:54 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 10:55:54 crc kubenswrapper[4972]: > Nov 21 10:55:54 crc kubenswrapper[4972]: I1121 10:55:54.451291 4972 generic.go:334] "Generic (PLEG): container finished" podID="9abc2448-97d4-420f-af4c-6c2d490eb3a6" containerID="ff958f1ed9d597179118e3b6515d4c87b0821e103aa92a3cb75fbdc5e716dea2" exitCode=0 Nov 21 10:55:54 crc kubenswrapper[4972]: I1121 10:55:54.451353 4972 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4k9ng" event={"ID":"9abc2448-97d4-420f-af4c-6c2d490eb3a6","Type":"ContainerDied","Data":"ff958f1ed9d597179118e3b6515d4c87b0821e103aa92a3cb75fbdc5e716dea2"} Nov 21 10:55:55 crc kubenswrapper[4972]: I1121 10:55:55.466345 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4k9ng" event={"ID":"9abc2448-97d4-420f-af4c-6c2d490eb3a6","Type":"ContainerStarted","Data":"2de013a44cd56274bb247ccb4e3cb7cb8e70bc9709cf3129975fc7e2be98d934"} Nov 21 10:55:55 crc kubenswrapper[4972]: I1121 10:55:55.502578 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4k9ng" podStartSLOduration=3.999332182 podStartE2EDuration="6.502550752s" podCreationTimestamp="2025-11-21 10:55:49 +0000 UTC" firstStartedPulling="2025-11-21 10:55:52.43348758 +0000 UTC m=+4497.542630118" lastFinishedPulling="2025-11-21 10:55:54.93670615 +0000 UTC m=+4500.045848688" observedRunningTime="2025-11-21 10:55:55.498626667 +0000 UTC m=+4500.607769245" watchObservedRunningTime="2025-11-21 10:55:55.502550752 +0000 UTC m=+4500.611693280" Nov 21 10:56:00 crc kubenswrapper[4972]: I1121 10:56:00.257335 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4k9ng" Nov 21 10:56:00 crc kubenswrapper[4972]: I1121 10:56:00.257904 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4k9ng" Nov 21 10:56:00 crc kubenswrapper[4972]: I1121 10:56:00.334967 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4k9ng" Nov 21 10:56:00 crc kubenswrapper[4972]: I1121 10:56:00.592365 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4k9ng" Nov 21 10:56:00 crc kubenswrapper[4972]: I1121 10:56:00.650863 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4k9ng"] Nov 21 10:56:01 crc kubenswrapper[4972]: I1121 10:56:01.760128 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:56:01 crc kubenswrapper[4972]: E1121 10:56:01.760434 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:56:02 crc kubenswrapper[4972]: I1121 10:56:02.546782 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4k9ng" podUID="9abc2448-97d4-420f-af4c-6c2d490eb3a6" containerName="registry-server" containerID="cri-o://2de013a44cd56274bb247ccb4e3cb7cb8e70bc9709cf3129975fc7e2be98d934" gracePeriod=2 Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.052679 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4k9ng" Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.167577 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kg5gq" Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.215850 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzpdr\" (UniqueName: \"kubernetes.io/projected/9abc2448-97d4-420f-af4c-6c2d490eb3a6-kube-api-access-dzpdr\") pod \"9abc2448-97d4-420f-af4c-6c2d490eb3a6\" (UID: \"9abc2448-97d4-420f-af4c-6c2d490eb3a6\") " Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.215921 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9abc2448-97d4-420f-af4c-6c2d490eb3a6-catalog-content\") pod \"9abc2448-97d4-420f-af4c-6c2d490eb3a6\" (UID: \"9abc2448-97d4-420f-af4c-6c2d490eb3a6\") " Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.215972 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9abc2448-97d4-420f-af4c-6c2d490eb3a6-utilities\") pod \"9abc2448-97d4-420f-af4c-6c2d490eb3a6\" (UID: \"9abc2448-97d4-420f-af4c-6c2d490eb3a6\") " Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.216965 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9abc2448-97d4-420f-af4c-6c2d490eb3a6-utilities" (OuterVolumeSpecName: "utilities") pod "9abc2448-97d4-420f-af4c-6c2d490eb3a6" (UID: "9abc2448-97d4-420f-af4c-6c2d490eb3a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.217217 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kg5gq" Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.226553 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9abc2448-97d4-420f-af4c-6c2d490eb3a6-kube-api-access-dzpdr" (OuterVolumeSpecName: "kube-api-access-dzpdr") pod "9abc2448-97d4-420f-af4c-6c2d490eb3a6" (UID: "9abc2448-97d4-420f-af4c-6c2d490eb3a6"). InnerVolumeSpecName "kube-api-access-dzpdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.242727 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9abc2448-97d4-420f-af4c-6c2d490eb3a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9abc2448-97d4-420f-af4c-6c2d490eb3a6" (UID: "9abc2448-97d4-420f-af4c-6c2d490eb3a6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.317715 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzpdr\" (UniqueName: \"kubernetes.io/projected/9abc2448-97d4-420f-af4c-6c2d490eb3a6-kube-api-access-dzpdr\") on node \"crc\" DevicePath \"\"" Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.317784 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9abc2448-97d4-420f-af4c-6c2d490eb3a6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.317803 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9abc2448-97d4-420f-af4c-6c2d490eb3a6-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.557615 4972 generic.go:334] "Generic (PLEG): container finished" podID="9abc2448-97d4-420f-af4c-6c2d490eb3a6" containerID="2de013a44cd56274bb247ccb4e3cb7cb8e70bc9709cf3129975fc7e2be98d934" exitCode=0 Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.557685 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4k9ng" Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.557686 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4k9ng" event={"ID":"9abc2448-97d4-420f-af4c-6c2d490eb3a6","Type":"ContainerDied","Data":"2de013a44cd56274bb247ccb4e3cb7cb8e70bc9709cf3129975fc7e2be98d934"} Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.557744 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4k9ng" event={"ID":"9abc2448-97d4-420f-af4c-6c2d490eb3a6","Type":"ContainerDied","Data":"414a6398e9331d604af0bfa11f4276fa7f42e29235a8bdb0ea8b57c9a17fd5ae"} Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.557781 4972 scope.go:117] "RemoveContainer" containerID="2de013a44cd56274bb247ccb4e3cb7cb8e70bc9709cf3129975fc7e2be98d934" Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.582189 4972 scope.go:117] "RemoveContainer" containerID="ff958f1ed9d597179118e3b6515d4c87b0821e103aa92a3cb75fbdc5e716dea2" Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.596763 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4k9ng"] Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.601170 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4k9ng"] Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.622719 4972 scope.go:117] "RemoveContainer" containerID="87cb990666c8538d2a74196f01a2280dca9000ee9d15d702d6b57ac23f27deb5" Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.651580 4972 scope.go:117] "RemoveContainer" containerID="2de013a44cd56274bb247ccb4e3cb7cb8e70bc9709cf3129975fc7e2be98d934" Nov 21 10:56:03 crc kubenswrapper[4972]: E1121 10:56:03.652219 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2de013a44cd56274bb247ccb4e3cb7cb8e70bc9709cf3129975fc7e2be98d934\": container with ID starting with 2de013a44cd56274bb247ccb4e3cb7cb8e70bc9709cf3129975fc7e2be98d934 not found: ID does not exist" containerID="2de013a44cd56274bb247ccb4e3cb7cb8e70bc9709cf3129975fc7e2be98d934" Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.652283 4972 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2de013a44cd56274bb247ccb4e3cb7cb8e70bc9709cf3129975fc7e2be98d934"} err="failed to get container status \"2de013a44cd56274bb247ccb4e3cb7cb8e70bc9709cf3129975fc7e2be98d934\": rpc error: code = NotFound desc = could not find container \"2de013a44cd56274bb247ccb4e3cb7cb8e70bc9709cf3129975fc7e2be98d934\": container with ID starting with 2de013a44cd56274bb247ccb4e3cb7cb8e70bc9709cf3129975fc7e2be98d934 not found: ID does not exist" Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.652321 4972 scope.go:117] "RemoveContainer" containerID="ff958f1ed9d597179118e3b6515d4c87b0821e103aa92a3cb75fbdc5e716dea2" Nov 21 10:56:03 crc kubenswrapper[4972]: E1121 10:56:03.652888 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff958f1ed9d597179118e3b6515d4c87b0821e103aa92a3cb75fbdc5e716dea2\": container with ID starting with ff958f1ed9d597179118e3b6515d4c87b0821e103aa92a3cb75fbdc5e716dea2 not found: ID does not exist" containerID="ff958f1ed9d597179118e3b6515d4c87b0821e103aa92a3cb75fbdc5e716dea2" Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.652958 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff958f1ed9d597179118e3b6515d4c87b0821e103aa92a3cb75fbdc5e716dea2"} err="failed to get container status \"ff958f1ed9d597179118e3b6515d4c87b0821e103aa92a3cb75fbdc5e716dea2\": rpc error: code = NotFound desc = could not find container \"ff958f1ed9d597179118e3b6515d4c87b0821e103aa92a3cb75fbdc5e716dea2\": container with ID starting with ff958f1ed9d597179118e3b6515d4c87b0821e103aa92a3cb75fbdc5e716dea2 not found: ID does not exist" Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.652991 4972 scope.go:117] "RemoveContainer" containerID="87cb990666c8538d2a74196f01a2280dca9000ee9d15d702d6b57ac23f27deb5" Nov 21 10:56:03 crc kubenswrapper[4972]: E1121 10:56:03.653587 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87cb990666c8538d2a74196f01a2280dca9000ee9d15d702d6b57ac23f27deb5\": container with ID starting with 87cb990666c8538d2a74196f01a2280dca9000ee9d15d702d6b57ac23f27deb5 not found: ID does not exist" containerID="87cb990666c8538d2a74196f01a2280dca9000ee9d15d702d6b57ac23f27deb5" Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.653623 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87cb990666c8538d2a74196f01a2280dca9000ee9d15d702d6b57ac23f27deb5"} err="failed to get container status \"87cb990666c8538d2a74196f01a2280dca9000ee9d15d702d6b57ac23f27deb5\": rpc error: code = NotFound desc = could not find container \"87cb990666c8538d2a74196f01a2280dca9000ee9d15d702d6b57ac23f27deb5\": container with ID starting with 87cb990666c8538d2a74196f01a2280dca9000ee9d15d702d6b57ac23f27deb5 not found: ID does not exist" Nov 21 10:56:03 crc kubenswrapper[4972]: I1121 10:56:03.777307 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9abc2448-97d4-420f-af4c-6c2d490eb3a6" path="/var/lib/kubelet/pods/9abc2448-97d4-420f-af4c-6c2d490eb3a6/volumes" Nov 21 10:56:04 crc kubenswrapper[4972]: I1121 10:56:04.182609 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kg5gq"] Nov 21 10:56:04 crc kubenswrapper[4972]: I1121 10:56:04.568178 4972 kuberuntime_container.go:808] "Killing container with a grace 
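
The "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pairs above are benign: the kubelet retries container removal after the pod's containers have already been deleted from CRI-O, and the runtime answers with gRPC NotFound. A minimal sketch of how a CRI client typically classifies that error (not the kubelet's own code):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyRemoved reports whether a removal error just means the container
// is already gone, which is the situation logged above during pod teardown.
func alreadyRemoved(err error) bool {
	s, ok := status.FromError(err)
	return ok && s.Code() == codes.NotFound
}

func main() {
	err := status.Error(codes.NotFound, "could not find container \"2de013a4...\"")
	fmt.Println("treat as already removed:", alreadyRemoved(err)) // true
}
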
period" pod="openshift-marketplace/redhat-operators-kg5gq" podUID="bdd8de76-e082-454c-b49b-fe425adb2b03" containerName="registry-server" containerID="cri-o://f5f58092ec4664286af5719166491866934331a42b855dddf2ee974c67d1cf64" gracePeriod=2 Nov 21 10:56:04 crc kubenswrapper[4972]: I1121 10:56:04.998755 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kg5gq" Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.143487 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdd8de76-e082-454c-b49b-fe425adb2b03-utilities\") pod \"bdd8de76-e082-454c-b49b-fe425adb2b03\" (UID: \"bdd8de76-e082-454c-b49b-fe425adb2b03\") " Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.143555 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdd8de76-e082-454c-b49b-fe425adb2b03-catalog-content\") pod \"bdd8de76-e082-454c-b49b-fe425adb2b03\" (UID: \"bdd8de76-e082-454c-b49b-fe425adb2b03\") " Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.143595 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxppg\" (UniqueName: \"kubernetes.io/projected/bdd8de76-e082-454c-b49b-fe425adb2b03-kube-api-access-cxppg\") pod \"bdd8de76-e082-454c-b49b-fe425adb2b03\" (UID: \"bdd8de76-e082-454c-b49b-fe425adb2b03\") " Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.144979 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdd8de76-e082-454c-b49b-fe425adb2b03-utilities" (OuterVolumeSpecName: "utilities") pod "bdd8de76-e082-454c-b49b-fe425adb2b03" (UID: "bdd8de76-e082-454c-b49b-fe425adb2b03"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.150737 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdd8de76-e082-454c-b49b-fe425adb2b03-kube-api-access-cxppg" (OuterVolumeSpecName: "kube-api-access-cxppg") pod "bdd8de76-e082-454c-b49b-fe425adb2b03" (UID: "bdd8de76-e082-454c-b49b-fe425adb2b03"). InnerVolumeSpecName "kube-api-access-cxppg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.240118 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdd8de76-e082-454c-b49b-fe425adb2b03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdd8de76-e082-454c-b49b-fe425adb2b03" (UID: "bdd8de76-e082-454c-b49b-fe425adb2b03"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.245326 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdd8de76-e082-454c-b49b-fe425adb2b03-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.245489 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdd8de76-e082-454c-b49b-fe425adb2b03-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.245512 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxppg\" (UniqueName: \"kubernetes.io/projected/bdd8de76-e082-454c-b49b-fe425adb2b03-kube-api-access-cxppg\") on node \"crc\" DevicePath \"\"" Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.578660 4972 generic.go:334] "Generic (PLEG): container finished" podID="bdd8de76-e082-454c-b49b-fe425adb2b03" containerID="f5f58092ec4664286af5719166491866934331a42b855dddf2ee974c67d1cf64" exitCode=0 Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.578728 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kg5gq" event={"ID":"bdd8de76-e082-454c-b49b-fe425adb2b03","Type":"ContainerDied","Data":"f5f58092ec4664286af5719166491866934331a42b855dddf2ee974c67d1cf64"} Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.578769 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kg5gq" event={"ID":"bdd8de76-e082-454c-b49b-fe425adb2b03","Type":"ContainerDied","Data":"1e9f260c57a9f75e9d1701fb51d23582b3a353ab8fe3b7a81b72ced863561bba"} Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.578768 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kg5gq" Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.578793 4972 scope.go:117] "RemoveContainer" containerID="f5f58092ec4664286af5719166491866934331a42b855dddf2ee974c67d1cf64" Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.606610 4972 scope.go:117] "RemoveContainer" containerID="1b70a106d4d16cd7d1706fc5f2033fb68b358c42f47f017a49aa0efa94a2fe1c" Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.638888 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kg5gq"] Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.651290 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kg5gq"] Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.658977 4972 scope.go:117] "RemoveContainer" containerID="8ed9ed0799b93ad517b1cb4cbc425f2e0756ee8c4e6a30c73d48750daa998129" Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.682195 4972 scope.go:117] "RemoveContainer" containerID="f5f58092ec4664286af5719166491866934331a42b855dddf2ee974c67d1cf64" Nov 21 10:56:05 crc kubenswrapper[4972]: E1121 10:56:05.693183 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5f58092ec4664286af5719166491866934331a42b855dddf2ee974c67d1cf64\": container with ID starting with f5f58092ec4664286af5719166491866934331a42b855dddf2ee974c67d1cf64 not found: ID does not exist" containerID="f5f58092ec4664286af5719166491866934331a42b855dddf2ee974c67d1cf64" Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.693238 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5f58092ec4664286af5719166491866934331a42b855dddf2ee974c67d1cf64"} err="failed to get container status \"f5f58092ec4664286af5719166491866934331a42b855dddf2ee974c67d1cf64\": rpc error: code = NotFound desc = could not find container \"f5f58092ec4664286af5719166491866934331a42b855dddf2ee974c67d1cf64\": container with ID starting with f5f58092ec4664286af5719166491866934331a42b855dddf2ee974c67d1cf64 not found: ID does not exist" Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.693265 4972 scope.go:117] "RemoveContainer" containerID="1b70a106d4d16cd7d1706fc5f2033fb68b358c42f47f017a49aa0efa94a2fe1c" Nov 21 10:56:05 crc kubenswrapper[4972]: E1121 10:56:05.694747 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b70a106d4d16cd7d1706fc5f2033fb68b358c42f47f017a49aa0efa94a2fe1c\": container with ID starting with 1b70a106d4d16cd7d1706fc5f2033fb68b358c42f47f017a49aa0efa94a2fe1c not found: ID does not exist" containerID="1b70a106d4d16cd7d1706fc5f2033fb68b358c42f47f017a49aa0efa94a2fe1c" Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.694775 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b70a106d4d16cd7d1706fc5f2033fb68b358c42f47f017a49aa0efa94a2fe1c"} err="failed to get container status \"1b70a106d4d16cd7d1706fc5f2033fb68b358c42f47f017a49aa0efa94a2fe1c\": rpc error: code = NotFound desc = could not find container \"1b70a106d4d16cd7d1706fc5f2033fb68b358c42f47f017a49aa0efa94a2fe1c\": container with ID starting with 1b70a106d4d16cd7d1706fc5f2033fb68b358c42f47f017a49aa0efa94a2fe1c not found: ID does not exist" Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.694791 4972 scope.go:117] "RemoveContainer" 
containerID="8ed9ed0799b93ad517b1cb4cbc425f2e0756ee8c4e6a30c73d48750daa998129" Nov 21 10:56:05 crc kubenswrapper[4972]: E1121 10:56:05.695173 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ed9ed0799b93ad517b1cb4cbc425f2e0756ee8c4e6a30c73d48750daa998129\": container with ID starting with 8ed9ed0799b93ad517b1cb4cbc425f2e0756ee8c4e6a30c73d48750daa998129 not found: ID does not exist" containerID="8ed9ed0799b93ad517b1cb4cbc425f2e0756ee8c4e6a30c73d48750daa998129" Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.695265 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ed9ed0799b93ad517b1cb4cbc425f2e0756ee8c4e6a30c73d48750daa998129"} err="failed to get container status \"8ed9ed0799b93ad517b1cb4cbc425f2e0756ee8c4e6a30c73d48750daa998129\": rpc error: code = NotFound desc = could not find container \"8ed9ed0799b93ad517b1cb4cbc425f2e0756ee8c4e6a30c73d48750daa998129\": container with ID starting with 8ed9ed0799b93ad517b1cb4cbc425f2e0756ee8c4e6a30c73d48750daa998129 not found: ID does not exist" Nov 21 10:56:05 crc kubenswrapper[4972]: I1121 10:56:05.770471 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdd8de76-e082-454c-b49b-fe425adb2b03" path="/var/lib/kubelet/pods/bdd8de76-e082-454c-b49b-fe425adb2b03/volumes" Nov 21 10:56:16 crc kubenswrapper[4972]: I1121 10:56:16.760771 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:56:16 crc kubenswrapper[4972]: E1121 10:56:16.763417 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:56:27 crc kubenswrapper[4972]: I1121 10:56:27.760301 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:56:27 crc kubenswrapper[4972]: E1121 10:56:27.762180 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:56:42 crc kubenswrapper[4972]: I1121 10:56:42.759469 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:56:42 crc kubenswrapper[4972]: E1121 10:56:42.761095 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:56:48 crc kubenswrapper[4972]: I1121 10:56:48.556135 4972 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-vtflc"] Nov 21 10:56:48 crc kubenswrapper[4972]: E1121 10:56:48.558957 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd8de76-e082-454c-b49b-fe425adb2b03" containerName="extract-content" Nov 21 10:56:48 crc kubenswrapper[4972]: I1121 10:56:48.559152 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdd8de76-e082-454c-b49b-fe425adb2b03" containerName="extract-content" Nov 21 10:56:48 crc kubenswrapper[4972]: E1121 10:56:48.559335 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd8de76-e082-454c-b49b-fe425adb2b03" containerName="registry-server" Nov 21 10:56:48 crc kubenswrapper[4972]: I1121 10:56:48.559483 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdd8de76-e082-454c-b49b-fe425adb2b03" containerName="registry-server" Nov 21 10:56:48 crc kubenswrapper[4972]: E1121 10:56:48.559641 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9abc2448-97d4-420f-af4c-6c2d490eb3a6" containerName="extract-content" Nov 21 10:56:48 crc kubenswrapper[4972]: I1121 10:56:48.559765 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="9abc2448-97d4-420f-af4c-6c2d490eb3a6" containerName="extract-content" Nov 21 10:56:48 crc kubenswrapper[4972]: E1121 10:56:48.559950 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd8de76-e082-454c-b49b-fe425adb2b03" containerName="extract-utilities" Nov 21 10:56:48 crc kubenswrapper[4972]: I1121 10:56:48.560107 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdd8de76-e082-454c-b49b-fe425adb2b03" containerName="extract-utilities" Nov 21 10:56:48 crc kubenswrapper[4972]: E1121 10:56:48.560258 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9abc2448-97d4-420f-af4c-6c2d490eb3a6" containerName="extract-utilities" Nov 21 10:56:48 crc kubenswrapper[4972]: I1121 10:56:48.560387 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="9abc2448-97d4-420f-af4c-6c2d490eb3a6" containerName="extract-utilities" Nov 21 10:56:48 crc kubenswrapper[4972]: E1121 10:56:48.560542 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9abc2448-97d4-420f-af4c-6c2d490eb3a6" containerName="registry-server" Nov 21 10:56:48 crc kubenswrapper[4972]: I1121 10:56:48.560684 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="9abc2448-97d4-420f-af4c-6c2d490eb3a6" containerName="registry-server" Nov 21 10:56:48 crc kubenswrapper[4972]: I1121 10:56:48.561136 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdd8de76-e082-454c-b49b-fe425adb2b03" containerName="registry-server" Nov 21 10:56:48 crc kubenswrapper[4972]: I1121 10:56:48.561306 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="9abc2448-97d4-420f-af4c-6c2d490eb3a6" containerName="registry-server" Nov 21 10:56:48 crc kubenswrapper[4972]: I1121 10:56:48.563252 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vtflc" Nov 21 10:56:48 crc kubenswrapper[4972]: I1121 10:56:48.565857 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vtflc"] Nov 21 10:56:48 crc kubenswrapper[4972]: I1121 10:56:48.658851 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec7643d3-0e10-426a-9fb3-90f92a9fe9c6-catalog-content\") pod \"certified-operators-vtflc\" (UID: \"ec7643d3-0e10-426a-9fb3-90f92a9fe9c6\") " pod="openshift-marketplace/certified-operators-vtflc" Nov 21 10:56:48 crc kubenswrapper[4972]: I1121 10:56:48.658920 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spfsg\" (UniqueName: \"kubernetes.io/projected/ec7643d3-0e10-426a-9fb3-90f92a9fe9c6-kube-api-access-spfsg\") pod \"certified-operators-vtflc\" (UID: \"ec7643d3-0e10-426a-9fb3-90f92a9fe9c6\") " pod="openshift-marketplace/certified-operators-vtflc" Nov 21 10:56:48 crc kubenswrapper[4972]: I1121 10:56:48.658988 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec7643d3-0e10-426a-9fb3-90f92a9fe9c6-utilities\") pod \"certified-operators-vtflc\" (UID: \"ec7643d3-0e10-426a-9fb3-90f92a9fe9c6\") " pod="openshift-marketplace/certified-operators-vtflc" Nov 21 10:56:48 crc kubenswrapper[4972]: I1121 10:56:48.760126 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec7643d3-0e10-426a-9fb3-90f92a9fe9c6-utilities\") pod \"certified-operators-vtflc\" (UID: \"ec7643d3-0e10-426a-9fb3-90f92a9fe9c6\") " pod="openshift-marketplace/certified-operators-vtflc" Nov 21 10:56:48 crc kubenswrapper[4972]: I1121 10:56:48.760272 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec7643d3-0e10-426a-9fb3-90f92a9fe9c6-catalog-content\") pod \"certified-operators-vtflc\" (UID: \"ec7643d3-0e10-426a-9fb3-90f92a9fe9c6\") " pod="openshift-marketplace/certified-operators-vtflc" Nov 21 10:56:48 crc kubenswrapper[4972]: I1121 10:56:48.760332 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spfsg\" (UniqueName: \"kubernetes.io/projected/ec7643d3-0e10-426a-9fb3-90f92a9fe9c6-kube-api-access-spfsg\") pod \"certified-operators-vtflc\" (UID: \"ec7643d3-0e10-426a-9fb3-90f92a9fe9c6\") " pod="openshift-marketplace/certified-operators-vtflc" Nov 21 10:56:48 crc kubenswrapper[4972]: I1121 10:56:48.760947 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec7643d3-0e10-426a-9fb3-90f92a9fe9c6-utilities\") pod \"certified-operators-vtflc\" (UID: \"ec7643d3-0e10-426a-9fb3-90f92a9fe9c6\") " pod="openshift-marketplace/certified-operators-vtflc" Nov 21 10:56:48 crc kubenswrapper[4972]: I1121 10:56:48.761000 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec7643d3-0e10-426a-9fb3-90f92a9fe9c6-catalog-content\") pod \"certified-operators-vtflc\" (UID: \"ec7643d3-0e10-426a-9fb3-90f92a9fe9c6\") " pod="openshift-marketplace/certified-operators-vtflc" Nov 21 10:56:48 crc kubenswrapper[4972]: I1121 10:56:48.800278 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-spfsg\" (UniqueName: \"kubernetes.io/projected/ec7643d3-0e10-426a-9fb3-90f92a9fe9c6-kube-api-access-spfsg\") pod \"certified-operators-vtflc\" (UID: \"ec7643d3-0e10-426a-9fb3-90f92a9fe9c6\") " pod="openshift-marketplace/certified-operators-vtflc" Nov 21 10:56:48 crc kubenswrapper[4972]: I1121 10:56:48.939119 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vtflc" Nov 21 10:56:49 crc kubenswrapper[4972]: I1121 10:56:49.416956 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vtflc"] Nov 21 10:56:50 crc kubenswrapper[4972]: I1121 10:56:50.006949 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec7643d3-0e10-426a-9fb3-90f92a9fe9c6" containerID="2295cc45ba1665f879a22bbf78f36c025f6dd7f5aea6f06e9d2c41b8e1bf91ed" exitCode=0 Nov 21 10:56:50 crc kubenswrapper[4972]: I1121 10:56:50.007115 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtflc" event={"ID":"ec7643d3-0e10-426a-9fb3-90f92a9fe9c6","Type":"ContainerDied","Data":"2295cc45ba1665f879a22bbf78f36c025f6dd7f5aea6f06e9d2c41b8e1bf91ed"} Nov 21 10:56:50 crc kubenswrapper[4972]: I1121 10:56:50.007364 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtflc" event={"ID":"ec7643d3-0e10-426a-9fb3-90f92a9fe9c6","Type":"ContainerStarted","Data":"84481376b95d2b0a021b4b00331a6e802240011c1b09788ccde358863d337231"} Nov 21 10:56:51 crc kubenswrapper[4972]: I1121 10:56:51.020430 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec7643d3-0e10-426a-9fb3-90f92a9fe9c6" containerID="d838be89a196f9ad086198d0d5cba66f3d2235f5a0e79c33c5f818fe8e2052d7" exitCode=0 Nov 21 10:56:51 crc kubenswrapper[4972]: I1121 10:56:51.020632 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtflc" event={"ID":"ec7643d3-0e10-426a-9fb3-90f92a9fe9c6","Type":"ContainerDied","Data":"d838be89a196f9ad086198d0d5cba66f3d2235f5a0e79c33c5f818fe8e2052d7"} Nov 21 10:56:52 crc kubenswrapper[4972]: I1121 10:56:52.032687 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtflc" event={"ID":"ec7643d3-0e10-426a-9fb3-90f92a9fe9c6","Type":"ContainerStarted","Data":"cf7866edecf944241294796495d9e5cd697024a6f22918eaa55f784e343d673f"} Nov 21 10:56:52 crc kubenswrapper[4972]: I1121 10:56:52.057644 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vtflc" podStartSLOduration=2.631832425 podStartE2EDuration="4.057618916s" podCreationTimestamp="2025-11-21 10:56:48 +0000 UTC" firstStartedPulling="2025-11-21 10:56:50.009234076 +0000 UTC m=+4555.118376674" lastFinishedPulling="2025-11-21 10:56:51.435020657 +0000 UTC m=+4556.544163165" observedRunningTime="2025-11-21 10:56:52.051786431 +0000 UTC m=+4557.160929009" watchObservedRunningTime="2025-11-21 10:56:52.057618916 +0000 UTC m=+4557.166761454" Nov 21 10:56:57 crc kubenswrapper[4972]: I1121 10:56:57.759481 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:56:57 crc kubenswrapper[4972]: E1121 10:56:57.760091 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:56:58 crc kubenswrapper[4972]: I1121 10:56:58.939782 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vtflc" Nov 21 10:56:58 crc kubenswrapper[4972]: I1121 10:56:58.939934 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vtflc" Nov 21 10:56:59 crc kubenswrapper[4972]: I1121 10:56:59.017304 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vtflc" Nov 21 10:56:59 crc kubenswrapper[4972]: I1121 10:56:59.147699 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vtflc" Nov 21 10:56:59 crc kubenswrapper[4972]: I1121 10:56:59.263389 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vtflc"] Nov 21 10:57:01 crc kubenswrapper[4972]: I1121 10:57:01.119214 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vtflc" podUID="ec7643d3-0e10-426a-9fb3-90f92a9fe9c6" containerName="registry-server" containerID="cri-o://cf7866edecf944241294796495d9e5cd697024a6f22918eaa55f784e343d673f" gracePeriod=2 Nov 21 10:57:01 crc kubenswrapper[4972]: I1121 10:57:01.542897 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vtflc" Nov 21 10:57:01 crc kubenswrapper[4972]: I1121 10:57:01.593131 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec7643d3-0e10-426a-9fb3-90f92a9fe9c6-utilities\") pod \"ec7643d3-0e10-426a-9fb3-90f92a9fe9c6\" (UID: \"ec7643d3-0e10-426a-9fb3-90f92a9fe9c6\") " Nov 21 10:57:01 crc kubenswrapper[4972]: I1121 10:57:01.593310 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec7643d3-0e10-426a-9fb3-90f92a9fe9c6-catalog-content\") pod \"ec7643d3-0e10-426a-9fb3-90f92a9fe9c6\" (UID: \"ec7643d3-0e10-426a-9fb3-90f92a9fe9c6\") " Nov 21 10:57:01 crc kubenswrapper[4972]: I1121 10:57:01.593410 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spfsg\" (UniqueName: \"kubernetes.io/projected/ec7643d3-0e10-426a-9fb3-90f92a9fe9c6-kube-api-access-spfsg\") pod \"ec7643d3-0e10-426a-9fb3-90f92a9fe9c6\" (UID: \"ec7643d3-0e10-426a-9fb3-90f92a9fe9c6\") " Nov 21 10:57:01 crc kubenswrapper[4972]: I1121 10:57:01.594122 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec7643d3-0e10-426a-9fb3-90f92a9fe9c6-utilities" (OuterVolumeSpecName: "utilities") pod "ec7643d3-0e10-426a-9fb3-90f92a9fe9c6" (UID: "ec7643d3-0e10-426a-9fb3-90f92a9fe9c6"). InnerVolumeSpecName "utilities". 
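
The "Killing container with a grace period ... gracePeriod=2" entries above mean the registry-server process is sent SIGTERM and has two seconds to exit before the runtime force-kills it. A minimal sketch of what a process must do to stay inside that window (an illustration, not the registry-server's actual shutdown code):

package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// Wait for SIGTERM, then try to finish shutdown within the 2s grace period.
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGTERM)
	<-sig

	done := make(chan struct{})
	go func() {
		// flush state, close listeners, etc.
		close(done)
	}()

	select {
	case <-done:
		fmt.Println("clean shutdown within the grace period")
	case <-time.After(2 * time.Second):
		fmt.Println("still shutting down; the runtime will force-kill now")
	}
}
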
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:57:01 crc kubenswrapper[4972]: I1121 10:57:01.603984 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec7643d3-0e10-426a-9fb3-90f92a9fe9c6-kube-api-access-spfsg" (OuterVolumeSpecName: "kube-api-access-spfsg") pod "ec7643d3-0e10-426a-9fb3-90f92a9fe9c6" (UID: "ec7643d3-0e10-426a-9fb3-90f92a9fe9c6"). InnerVolumeSpecName "kube-api-access-spfsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:57:01 crc kubenswrapper[4972]: I1121 10:57:01.695755 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec7643d3-0e10-426a-9fb3-90f92a9fe9c6-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 10:57:01 crc kubenswrapper[4972]: I1121 10:57:01.696223 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spfsg\" (UniqueName: \"kubernetes.io/projected/ec7643d3-0e10-426a-9fb3-90f92a9fe9c6-kube-api-access-spfsg\") on node \"crc\" DevicePath \"\"" Nov 21 10:57:02 crc kubenswrapper[4972]: I1121 10:57:02.132000 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec7643d3-0e10-426a-9fb3-90f92a9fe9c6" containerID="cf7866edecf944241294796495d9e5cd697024a6f22918eaa55f784e343d673f" exitCode=0 Nov 21 10:57:02 crc kubenswrapper[4972]: I1121 10:57:02.132074 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtflc" event={"ID":"ec7643d3-0e10-426a-9fb3-90f92a9fe9c6","Type":"ContainerDied","Data":"cf7866edecf944241294796495d9e5cd697024a6f22918eaa55f784e343d673f"} Nov 21 10:57:02 crc kubenswrapper[4972]: I1121 10:57:02.132111 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vtflc" Nov 21 10:57:02 crc kubenswrapper[4972]: I1121 10:57:02.132137 4972 scope.go:117] "RemoveContainer" containerID="cf7866edecf944241294796495d9e5cd697024a6f22918eaa55f784e343d673f" Nov 21 10:57:02 crc kubenswrapper[4972]: I1121 10:57:02.132119 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtflc" event={"ID":"ec7643d3-0e10-426a-9fb3-90f92a9fe9c6","Type":"ContainerDied","Data":"84481376b95d2b0a021b4b00331a6e802240011c1b09788ccde358863d337231"} Nov 21 10:57:02 crc kubenswrapper[4972]: I1121 10:57:02.154766 4972 scope.go:117] "RemoveContainer" containerID="d838be89a196f9ad086198d0d5cba66f3d2235f5a0e79c33c5f818fe8e2052d7" Nov 21 10:57:02 crc kubenswrapper[4972]: I1121 10:57:02.182044 4972 scope.go:117] "RemoveContainer" containerID="2295cc45ba1665f879a22bbf78f36c025f6dd7f5aea6f06e9d2c41b8e1bf91ed" Nov 21 10:57:02 crc kubenswrapper[4972]: I1121 10:57:02.210580 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec7643d3-0e10-426a-9fb3-90f92a9fe9c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec7643d3-0e10-426a-9fb3-90f92a9fe9c6" (UID: "ec7643d3-0e10-426a-9fb3-90f92a9fe9c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 10:57:02 crc kubenswrapper[4972]: I1121 10:57:02.225678 4972 scope.go:117] "RemoveContainer" containerID="cf7866edecf944241294796495d9e5cd697024a6f22918eaa55f784e343d673f" Nov 21 10:57:02 crc kubenswrapper[4972]: E1121 10:57:02.226096 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf7866edecf944241294796495d9e5cd697024a6f22918eaa55f784e343d673f\": container with ID starting with cf7866edecf944241294796495d9e5cd697024a6f22918eaa55f784e343d673f not found: ID does not exist" containerID="cf7866edecf944241294796495d9e5cd697024a6f22918eaa55f784e343d673f" Nov 21 10:57:02 crc kubenswrapper[4972]: I1121 10:57:02.226132 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf7866edecf944241294796495d9e5cd697024a6f22918eaa55f784e343d673f"} err="failed to get container status \"cf7866edecf944241294796495d9e5cd697024a6f22918eaa55f784e343d673f\": rpc error: code = NotFound desc = could not find container \"cf7866edecf944241294796495d9e5cd697024a6f22918eaa55f784e343d673f\": container with ID starting with cf7866edecf944241294796495d9e5cd697024a6f22918eaa55f784e343d673f not found: ID does not exist" Nov 21 10:57:02 crc kubenswrapper[4972]: I1121 10:57:02.226154 4972 scope.go:117] "RemoveContainer" containerID="d838be89a196f9ad086198d0d5cba66f3d2235f5a0e79c33c5f818fe8e2052d7" Nov 21 10:57:02 crc kubenswrapper[4972]: E1121 10:57:02.226364 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d838be89a196f9ad086198d0d5cba66f3d2235f5a0e79c33c5f818fe8e2052d7\": container with ID starting with d838be89a196f9ad086198d0d5cba66f3d2235f5a0e79c33c5f818fe8e2052d7 not found: ID does not exist" containerID="d838be89a196f9ad086198d0d5cba66f3d2235f5a0e79c33c5f818fe8e2052d7" Nov 21 10:57:02 crc kubenswrapper[4972]: I1121 10:57:02.226386 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d838be89a196f9ad086198d0d5cba66f3d2235f5a0e79c33c5f818fe8e2052d7"} err="failed to get container status \"d838be89a196f9ad086198d0d5cba66f3d2235f5a0e79c33c5f818fe8e2052d7\": rpc error: code = NotFound desc = could not find container \"d838be89a196f9ad086198d0d5cba66f3d2235f5a0e79c33c5f818fe8e2052d7\": container with ID starting with d838be89a196f9ad086198d0d5cba66f3d2235f5a0e79c33c5f818fe8e2052d7 not found: ID does not exist" Nov 21 10:57:02 crc kubenswrapper[4972]: I1121 10:57:02.226403 4972 scope.go:117] "RemoveContainer" containerID="2295cc45ba1665f879a22bbf78f36c025f6dd7f5aea6f06e9d2c41b8e1bf91ed" Nov 21 10:57:02 crc kubenswrapper[4972]: E1121 10:57:02.226733 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2295cc45ba1665f879a22bbf78f36c025f6dd7f5aea6f06e9d2c41b8e1bf91ed\": container with ID starting with 2295cc45ba1665f879a22bbf78f36c025f6dd7f5aea6f06e9d2c41b8e1bf91ed not found: ID does not exist" containerID="2295cc45ba1665f879a22bbf78f36c025f6dd7f5aea6f06e9d2c41b8e1bf91ed" Nov 21 10:57:02 crc kubenswrapper[4972]: I1121 10:57:02.226760 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2295cc45ba1665f879a22bbf78f36c025f6dd7f5aea6f06e9d2c41b8e1bf91ed"} err="failed to get container status \"2295cc45ba1665f879a22bbf78f36c025f6dd7f5aea6f06e9d2c41b8e1bf91ed\": rpc error: code = NotFound desc = could not 
find container \"2295cc45ba1665f879a22bbf78f36c025f6dd7f5aea6f06e9d2c41b8e1bf91ed\": container with ID starting with 2295cc45ba1665f879a22bbf78f36c025f6dd7f5aea6f06e9d2c41b8e1bf91ed not found: ID does not exist" Nov 21 10:57:02 crc kubenswrapper[4972]: I1121 10:57:02.305069 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec7643d3-0e10-426a-9fb3-90f92a9fe9c6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 10:57:02 crc kubenswrapper[4972]: I1121 10:57:02.491943 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vtflc"] Nov 21 10:57:02 crc kubenswrapper[4972]: I1121 10:57:02.503226 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vtflc"] Nov 21 10:57:03 crc kubenswrapper[4972]: I1121 10:57:03.774492 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec7643d3-0e10-426a-9fb3-90f92a9fe9c6" path="/var/lib/kubelet/pods/ec7643d3-0e10-426a-9fb3-90f92a9fe9c6/volumes" Nov 21 10:57:12 crc kubenswrapper[4972]: I1121 10:57:12.760215 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:57:12 crc kubenswrapper[4972]: E1121 10:57:12.760730 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:57:26 crc kubenswrapper[4972]: I1121 10:57:26.760033 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:57:26 crc kubenswrapper[4972]: E1121 10:57:26.761258 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:57:38 crc kubenswrapper[4972]: I1121 10:57:38.759695 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:57:38 crc kubenswrapper[4972]: E1121 10:57:38.761299 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:57:50 crc kubenswrapper[4972]: I1121 10:57:50.759780 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:57:50 crc kubenswrapper[4972]: E1121 10:57:50.760945 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:58:02 crc kubenswrapper[4972]: I1121 10:58:02.759786 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:58:02 crc kubenswrapper[4972]: E1121 10:58:02.761000 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:58:17 crc kubenswrapper[4972]: I1121 10:58:17.759931 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:58:17 crc kubenswrapper[4972]: E1121 10:58:17.761246 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 10:58:31 crc kubenswrapper[4972]: I1121 10:58:31.760506 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 10:58:32 crc kubenswrapper[4972]: I1121 10:58:32.968629 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"d09766e11e3fabe4af926f8addbec82b361431494255cdf37952ea1f017d3953"} Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.132650 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-7j7h4"] Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.143000 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-7j7h4"] Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.265707 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-j7qfm"] Nov 21 10:58:39 crc kubenswrapper[4972]: E1121 10:58:39.266667 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec7643d3-0e10-426a-9fb3-90f92a9fe9c6" containerName="extract-utilities" Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.267322 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec7643d3-0e10-426a-9fb3-90f92a9fe9c6" containerName="extract-utilities" Nov 21 10:58:39 crc kubenswrapper[4972]: E1121 10:58:39.267498 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec7643d3-0e10-426a-9fb3-90f92a9fe9c6" containerName="extract-content" Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.267681 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec7643d3-0e10-426a-9fb3-90f92a9fe9c6" containerName="extract-content" Nov 21 10:58:39 crc kubenswrapper[4972]: E1121 10:58:39.267879 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec7643d3-0e10-426a-9fb3-90f92a9fe9c6" 
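
The repeated "Error syncing pod, skipping ... CrashLoopBackOff: back-off 5m0s" entries above are not fresh crashes: each periodic pod sync is rejected while the machine-config-daemon container is still inside its back-off window, and once the window expires the container is recreated (the RemoveContainer at 10:58:31 followed by ContainerStarted at 10:58:32). The kubelet's crash-loop back-off roughly doubles per restart up to the 5m0s cap quoted in the message; a minimal sketch of that policy, with the 10s initial delay taken as an assumption for illustration:

package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the previous delay and caps it at five minutes,
// the "back-off 5m0s" ceiling seen in the log messages above.
func nextBackoff(prev time.Duration) time.Duration {
	const initial = 10 * time.Second // assumed starting point, not taken from this log
	const max = 5 * time.Minute
	if prev == 0 {
		return initial
	}
	if next := prev * 2; next < max {
		return next
	}
	return max
}

func main() {
	d := time.Duration(0)
	for i := 1; i <= 7; i++ {
		d = nextBackoff(d)
		fmt.Printf("after crash %d wait %v\n", i, d)
	}
}
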
containerName="registry-server" Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.268040 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec7643d3-0e10-426a-9fb3-90f92a9fe9c6" containerName="registry-server" Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.268469 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec7643d3-0e10-426a-9fb3-90f92a9fe9c6" containerName="registry-server" Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.270333 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-j7qfm" Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.273695 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.274394 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.276814 4972 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-5sjhw" Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.276811 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.286217 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-j7qfm"] Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.337414 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/aa0319b0-443f-4e54-9875-f47d990e9ed9-node-mnt\") pod \"crc-storage-crc-j7qfm\" (UID: \"aa0319b0-443f-4e54-9875-f47d990e9ed9\") " pod="crc-storage/crc-storage-crc-j7qfm" Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.337497 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/aa0319b0-443f-4e54-9875-f47d990e9ed9-crc-storage\") pod \"crc-storage-crc-j7qfm\" (UID: \"aa0319b0-443f-4e54-9875-f47d990e9ed9\") " pod="crc-storage/crc-storage-crc-j7qfm" Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.337550 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qsdq\" (UniqueName: \"kubernetes.io/projected/aa0319b0-443f-4e54-9875-f47d990e9ed9-kube-api-access-9qsdq\") pod \"crc-storage-crc-j7qfm\" (UID: \"aa0319b0-443f-4e54-9875-f47d990e9ed9\") " pod="crc-storage/crc-storage-crc-j7qfm" Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.439483 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/aa0319b0-443f-4e54-9875-f47d990e9ed9-node-mnt\") pod \"crc-storage-crc-j7qfm\" (UID: \"aa0319b0-443f-4e54-9875-f47d990e9ed9\") " pod="crc-storage/crc-storage-crc-j7qfm" Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.439527 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/aa0319b0-443f-4e54-9875-f47d990e9ed9-crc-storage\") pod \"crc-storage-crc-j7qfm\" (UID: \"aa0319b0-443f-4e54-9875-f47d990e9ed9\") " pod="crc-storage/crc-storage-crc-j7qfm" Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.439552 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qsdq\" (UniqueName: 
\"kubernetes.io/projected/aa0319b0-443f-4e54-9875-f47d990e9ed9-kube-api-access-9qsdq\") pod \"crc-storage-crc-j7qfm\" (UID: \"aa0319b0-443f-4e54-9875-f47d990e9ed9\") " pod="crc-storage/crc-storage-crc-j7qfm" Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.439906 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/aa0319b0-443f-4e54-9875-f47d990e9ed9-node-mnt\") pod \"crc-storage-crc-j7qfm\" (UID: \"aa0319b0-443f-4e54-9875-f47d990e9ed9\") " pod="crc-storage/crc-storage-crc-j7qfm" Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.440568 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/aa0319b0-443f-4e54-9875-f47d990e9ed9-crc-storage\") pod \"crc-storage-crc-j7qfm\" (UID: \"aa0319b0-443f-4e54-9875-f47d990e9ed9\") " pod="crc-storage/crc-storage-crc-j7qfm" Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.458449 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qsdq\" (UniqueName: \"kubernetes.io/projected/aa0319b0-443f-4e54-9875-f47d990e9ed9-kube-api-access-9qsdq\") pod \"crc-storage-crc-j7qfm\" (UID: \"aa0319b0-443f-4e54-9875-f47d990e9ed9\") " pod="crc-storage/crc-storage-crc-j7qfm" Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.604375 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-j7qfm" Nov 21 10:58:39 crc kubenswrapper[4972]: I1121 10:58:39.774272 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f981c5ab-a86c-463f-9e6c-9e463aa4defd" path="/var/lib/kubelet/pods/f981c5ab-a86c-463f-9e6c-9e463aa4defd/volumes" Nov 21 10:58:40 crc kubenswrapper[4972]: I1121 10:58:40.076110 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-j7qfm"] Nov 21 10:58:41 crc kubenswrapper[4972]: I1121 10:58:41.039129 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-j7qfm" event={"ID":"aa0319b0-443f-4e54-9875-f47d990e9ed9","Type":"ContainerStarted","Data":"3661174fdd6297aad99b56d3700d92aed8a09d4687f520614d9bb9a5566ef1a8"} Nov 21 10:58:41 crc kubenswrapper[4972]: I1121 10:58:41.039554 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-j7qfm" event={"ID":"aa0319b0-443f-4e54-9875-f47d990e9ed9","Type":"ContainerStarted","Data":"231658e989dd50e616618be61962fc7d2be826af3d51a98e2fe44e64f24b1ff7"} Nov 21 10:58:41 crc kubenswrapper[4972]: I1121 10:58:41.076128 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="crc-storage/crc-storage-crc-j7qfm" podStartSLOduration=1.546041528 podStartE2EDuration="2.076101349s" podCreationTimestamp="2025-11-21 10:58:39 +0000 UTC" firstStartedPulling="2025-11-21 10:58:40.087021987 +0000 UTC m=+4665.196164495" lastFinishedPulling="2025-11-21 10:58:40.617081778 +0000 UTC m=+4665.726224316" observedRunningTime="2025-11-21 10:58:41.066562596 +0000 UTC m=+4666.175705134" watchObservedRunningTime="2025-11-21 10:58:41.076101349 +0000 UTC m=+4666.185243857" Nov 21 10:58:42 crc kubenswrapper[4972]: I1121 10:58:42.053901 4972 generic.go:334] "Generic (PLEG): container finished" podID="aa0319b0-443f-4e54-9875-f47d990e9ed9" containerID="3661174fdd6297aad99b56d3700d92aed8a09d4687f520614d9bb9a5566ef1a8" exitCode=0 Nov 21 10:58:42 crc kubenswrapper[4972]: I1121 10:58:42.053966 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-j7qfm" 
event={"ID":"aa0319b0-443f-4e54-9875-f47d990e9ed9","Type":"ContainerDied","Data":"3661174fdd6297aad99b56d3700d92aed8a09d4687f520614d9bb9a5566ef1a8"} Nov 21 10:58:43 crc kubenswrapper[4972]: I1121 10:58:43.424378 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-j7qfm" Nov 21 10:58:43 crc kubenswrapper[4972]: I1121 10:58:43.509574 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/aa0319b0-443f-4e54-9875-f47d990e9ed9-node-mnt\") pod \"aa0319b0-443f-4e54-9875-f47d990e9ed9\" (UID: \"aa0319b0-443f-4e54-9875-f47d990e9ed9\") " Nov 21 10:58:43 crc kubenswrapper[4972]: I1121 10:58:43.509625 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qsdq\" (UniqueName: \"kubernetes.io/projected/aa0319b0-443f-4e54-9875-f47d990e9ed9-kube-api-access-9qsdq\") pod \"aa0319b0-443f-4e54-9875-f47d990e9ed9\" (UID: \"aa0319b0-443f-4e54-9875-f47d990e9ed9\") " Nov 21 10:58:43 crc kubenswrapper[4972]: I1121 10:58:43.509657 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/aa0319b0-443f-4e54-9875-f47d990e9ed9-crc-storage\") pod \"aa0319b0-443f-4e54-9875-f47d990e9ed9\" (UID: \"aa0319b0-443f-4e54-9875-f47d990e9ed9\") " Nov 21 10:58:43 crc kubenswrapper[4972]: I1121 10:58:43.509745 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa0319b0-443f-4e54-9875-f47d990e9ed9-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "aa0319b0-443f-4e54-9875-f47d990e9ed9" (UID: "aa0319b0-443f-4e54-9875-f47d990e9ed9"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:58:43 crc kubenswrapper[4972]: I1121 10:58:43.510216 4972 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/aa0319b0-443f-4e54-9875-f47d990e9ed9-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 21 10:58:43 crc kubenswrapper[4972]: I1121 10:58:43.517646 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa0319b0-443f-4e54-9875-f47d990e9ed9-kube-api-access-9qsdq" (OuterVolumeSpecName: "kube-api-access-9qsdq") pod "aa0319b0-443f-4e54-9875-f47d990e9ed9" (UID: "aa0319b0-443f-4e54-9875-f47d990e9ed9"). InnerVolumeSpecName "kube-api-access-9qsdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:58:43 crc kubenswrapper[4972]: I1121 10:58:43.533636 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa0319b0-443f-4e54-9875-f47d990e9ed9-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "aa0319b0-443f-4e54-9875-f47d990e9ed9" (UID: "aa0319b0-443f-4e54-9875-f47d990e9ed9"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:58:43 crc kubenswrapper[4972]: I1121 10:58:43.611934 4972 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/aa0319b0-443f-4e54-9875-f47d990e9ed9-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 21 10:58:43 crc kubenswrapper[4972]: I1121 10:58:43.611975 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qsdq\" (UniqueName: \"kubernetes.io/projected/aa0319b0-443f-4e54-9875-f47d990e9ed9-kube-api-access-9qsdq\") on node \"crc\" DevicePath \"\"" Nov 21 10:58:44 crc kubenswrapper[4972]: I1121 10:58:44.076942 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-j7qfm" event={"ID":"aa0319b0-443f-4e54-9875-f47d990e9ed9","Type":"ContainerDied","Data":"231658e989dd50e616618be61962fc7d2be826af3d51a98e2fe44e64f24b1ff7"} Nov 21 10:58:44 crc kubenswrapper[4972]: I1121 10:58:44.076986 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-j7qfm" Nov 21 10:58:44 crc kubenswrapper[4972]: I1121 10:58:44.077002 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="231658e989dd50e616618be61962fc7d2be826af3d51a98e2fe44e64f24b1ff7" Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.212082 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-j7qfm"] Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.220919 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-j7qfm"] Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.325619 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-ht8zr"] Nov 21 10:58:45 crc kubenswrapper[4972]: E1121 10:58:45.326036 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0319b0-443f-4e54-9875-f47d990e9ed9" containerName="storage" Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.326059 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0319b0-443f-4e54-9875-f47d990e9ed9" containerName="storage" Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.326237 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa0319b0-443f-4e54-9875-f47d990e9ed9" containerName="storage" Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.326892 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-ht8zr" Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.329456 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.330242 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.330276 4972 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-5sjhw" Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.330375 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.333218 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-ht8zr"] Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.440739 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/474d8480-3903-4ac2-a855-104ee5c6c371-crc-storage\") pod \"crc-storage-crc-ht8zr\" (UID: \"474d8480-3903-4ac2-a855-104ee5c6c371\") " pod="crc-storage/crc-storage-crc-ht8zr" Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.440804 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/474d8480-3903-4ac2-a855-104ee5c6c371-node-mnt\") pod \"crc-storage-crc-ht8zr\" (UID: \"474d8480-3903-4ac2-a855-104ee5c6c371\") " pod="crc-storage/crc-storage-crc-ht8zr" Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.440944 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wmjr\" (UniqueName: \"kubernetes.io/projected/474d8480-3903-4ac2-a855-104ee5c6c371-kube-api-access-2wmjr\") pod \"crc-storage-crc-ht8zr\" (UID: \"474d8480-3903-4ac2-a855-104ee5c6c371\") " pod="crc-storage/crc-storage-crc-ht8zr" Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.541890 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/474d8480-3903-4ac2-a855-104ee5c6c371-node-mnt\") pod \"crc-storage-crc-ht8zr\" (UID: \"474d8480-3903-4ac2-a855-104ee5c6c371\") " pod="crc-storage/crc-storage-crc-ht8zr" Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.542021 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wmjr\" (UniqueName: \"kubernetes.io/projected/474d8480-3903-4ac2-a855-104ee5c6c371-kube-api-access-2wmjr\") pod \"crc-storage-crc-ht8zr\" (UID: \"474d8480-3903-4ac2-a855-104ee5c6c371\") " pod="crc-storage/crc-storage-crc-ht8zr" Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.542159 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/474d8480-3903-4ac2-a855-104ee5c6c371-crc-storage\") pod \"crc-storage-crc-ht8zr\" (UID: \"474d8480-3903-4ac2-a855-104ee5c6c371\") " pod="crc-storage/crc-storage-crc-ht8zr" Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.542556 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/474d8480-3903-4ac2-a855-104ee5c6c371-node-mnt\") pod \"crc-storage-crc-ht8zr\" (UID: \"474d8480-3903-4ac2-a855-104ee5c6c371\") " 
pod="crc-storage/crc-storage-crc-ht8zr" Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.543166 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/474d8480-3903-4ac2-a855-104ee5c6c371-crc-storage\") pod \"crc-storage-crc-ht8zr\" (UID: \"474d8480-3903-4ac2-a855-104ee5c6c371\") " pod="crc-storage/crc-storage-crc-ht8zr" Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.560643 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wmjr\" (UniqueName: \"kubernetes.io/projected/474d8480-3903-4ac2-a855-104ee5c6c371-kube-api-access-2wmjr\") pod \"crc-storage-crc-ht8zr\" (UID: \"474d8480-3903-4ac2-a855-104ee5c6c371\") " pod="crc-storage/crc-storage-crc-ht8zr" Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.645760 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-ht8zr" Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.771603 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa0319b0-443f-4e54-9875-f47d990e9ed9" path="/var/lib/kubelet/pods/aa0319b0-443f-4e54-9875-f47d990e9ed9/volumes" Nov 21 10:58:45 crc kubenswrapper[4972]: I1121 10:58:45.885712 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-ht8zr"] Nov 21 10:58:46 crc kubenswrapper[4972]: I1121 10:58:46.097189 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-ht8zr" event={"ID":"474d8480-3903-4ac2-a855-104ee5c6c371","Type":"ContainerStarted","Data":"62e47876608cda87da0baa3b3b865419aae92c87a74018f899290d2e5b5f0f2c"} Nov 21 10:58:47 crc kubenswrapper[4972]: I1121 10:58:47.107219 4972 generic.go:334] "Generic (PLEG): container finished" podID="474d8480-3903-4ac2-a855-104ee5c6c371" containerID="02ad8aefa4a854aaa6b22fe668623bd470688170c9c8fb94af8789195c8bde97" exitCode=0 Nov 21 10:58:47 crc kubenswrapper[4972]: I1121 10:58:47.107346 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-ht8zr" event={"ID":"474d8480-3903-4ac2-a855-104ee5c6c371","Type":"ContainerDied","Data":"02ad8aefa4a854aaa6b22fe668623bd470688170c9c8fb94af8789195c8bde97"} Nov 21 10:58:48 crc kubenswrapper[4972]: I1121 10:58:48.472733 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-ht8zr" Nov 21 10:58:48 crc kubenswrapper[4972]: I1121 10:58:48.591032 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wmjr\" (UniqueName: \"kubernetes.io/projected/474d8480-3903-4ac2-a855-104ee5c6c371-kube-api-access-2wmjr\") pod \"474d8480-3903-4ac2-a855-104ee5c6c371\" (UID: \"474d8480-3903-4ac2-a855-104ee5c6c371\") " Nov 21 10:58:48 crc kubenswrapper[4972]: I1121 10:58:48.591166 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/474d8480-3903-4ac2-a855-104ee5c6c371-node-mnt\") pod \"474d8480-3903-4ac2-a855-104ee5c6c371\" (UID: \"474d8480-3903-4ac2-a855-104ee5c6c371\") " Nov 21 10:58:48 crc kubenswrapper[4972]: I1121 10:58:48.591264 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/474d8480-3903-4ac2-a855-104ee5c6c371-crc-storage\") pod \"474d8480-3903-4ac2-a855-104ee5c6c371\" (UID: \"474d8480-3903-4ac2-a855-104ee5c6c371\") " Nov 21 10:58:48 crc kubenswrapper[4972]: I1121 10:58:48.591347 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/474d8480-3903-4ac2-a855-104ee5c6c371-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "474d8480-3903-4ac2-a855-104ee5c6c371" (UID: "474d8480-3903-4ac2-a855-104ee5c6c371"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 10:58:48 crc kubenswrapper[4972]: I1121 10:58:48.591749 4972 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/474d8480-3903-4ac2-a855-104ee5c6c371-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 21 10:58:48 crc kubenswrapper[4972]: I1121 10:58:48.598251 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/474d8480-3903-4ac2-a855-104ee5c6c371-kube-api-access-2wmjr" (OuterVolumeSpecName: "kube-api-access-2wmjr") pod "474d8480-3903-4ac2-a855-104ee5c6c371" (UID: "474d8480-3903-4ac2-a855-104ee5c6c371"). InnerVolumeSpecName "kube-api-access-2wmjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 10:58:48 crc kubenswrapper[4972]: I1121 10:58:48.609510 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/474d8480-3903-4ac2-a855-104ee5c6c371-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "474d8480-3903-4ac2-a855-104ee5c6c371" (UID: "474d8480-3903-4ac2-a855-104ee5c6c371"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 10:58:48 crc kubenswrapper[4972]: I1121 10:58:48.693929 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wmjr\" (UniqueName: \"kubernetes.io/projected/474d8480-3903-4ac2-a855-104ee5c6c371-kube-api-access-2wmjr\") on node \"crc\" DevicePath \"\"" Nov 21 10:58:48 crc kubenswrapper[4972]: I1121 10:58:48.693982 4972 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/474d8480-3903-4ac2-a855-104ee5c6c371-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 21 10:58:49 crc kubenswrapper[4972]: I1121 10:58:49.128775 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-ht8zr" event={"ID":"474d8480-3903-4ac2-a855-104ee5c6c371","Type":"ContainerDied","Data":"62e47876608cda87da0baa3b3b865419aae92c87a74018f899290d2e5b5f0f2c"} Nov 21 10:58:49 crc kubenswrapper[4972]: I1121 10:58:49.128859 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-ht8zr" Nov 21 10:58:49 crc kubenswrapper[4972]: I1121 10:58:49.128880 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62e47876608cda87da0baa3b3b865419aae92c87a74018f899290d2e5b5f0f2c" Nov 21 10:59:25 crc kubenswrapper[4972]: I1121 10:59:25.319106 4972 scope.go:117] "RemoveContainer" containerID="0915aa76e6131d641353a0050c95e39dea2149fec20aa1d558232ea365ab6cc2" Nov 21 11:00:00 crc kubenswrapper[4972]: I1121 11:00:00.165991 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395380-92w7j"] Nov 21 11:00:00 crc kubenswrapper[4972]: E1121 11:00:00.166828 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474d8480-3903-4ac2-a855-104ee5c6c371" containerName="storage" Nov 21 11:00:00 crc kubenswrapper[4972]: I1121 11:00:00.166869 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="474d8480-3903-4ac2-a855-104ee5c6c371" containerName="storage" Nov 21 11:00:00 crc kubenswrapper[4972]: I1121 11:00:00.167101 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="474d8480-3903-4ac2-a855-104ee5c6c371" containerName="storage" Nov 21 11:00:00 crc kubenswrapper[4972]: I1121 11:00:00.167763 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395380-92w7j" Nov 21 11:00:00 crc kubenswrapper[4972]: I1121 11:00:00.171015 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 11:00:00 crc kubenswrapper[4972]: I1121 11:00:00.171130 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 11:00:00 crc kubenswrapper[4972]: I1121 11:00:00.198792 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395380-92w7j"] Nov 21 11:00:00 crc kubenswrapper[4972]: I1121 11:00:00.205574 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf59t\" (UniqueName: \"kubernetes.io/projected/e659f22f-1804-4119-a907-353634f17737-kube-api-access-zf59t\") pod \"collect-profiles-29395380-92w7j\" (UID: \"e659f22f-1804-4119-a907-353634f17737\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395380-92w7j" Nov 21 11:00:00 crc kubenswrapper[4972]: I1121 11:00:00.205673 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e659f22f-1804-4119-a907-353634f17737-secret-volume\") pod \"collect-profiles-29395380-92w7j\" (UID: \"e659f22f-1804-4119-a907-353634f17737\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395380-92w7j" Nov 21 11:00:00 crc kubenswrapper[4972]: I1121 11:00:00.205716 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e659f22f-1804-4119-a907-353634f17737-config-volume\") pod \"collect-profiles-29395380-92w7j\" (UID: \"e659f22f-1804-4119-a907-353634f17737\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395380-92w7j" Nov 21 11:00:00 crc kubenswrapper[4972]: I1121 11:00:00.307015 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf59t\" (UniqueName: \"kubernetes.io/projected/e659f22f-1804-4119-a907-353634f17737-kube-api-access-zf59t\") pod \"collect-profiles-29395380-92w7j\" (UID: \"e659f22f-1804-4119-a907-353634f17737\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395380-92w7j" Nov 21 11:00:00 crc kubenswrapper[4972]: I1121 11:00:00.307206 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e659f22f-1804-4119-a907-353634f17737-secret-volume\") pod \"collect-profiles-29395380-92w7j\" (UID: \"e659f22f-1804-4119-a907-353634f17737\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395380-92w7j" Nov 21 11:00:00 crc kubenswrapper[4972]: I1121 11:00:00.307282 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e659f22f-1804-4119-a907-353634f17737-config-volume\") pod \"collect-profiles-29395380-92w7j\" (UID: \"e659f22f-1804-4119-a907-353634f17737\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395380-92w7j" Nov 21 11:00:00 crc kubenswrapper[4972]: I1121 11:00:00.309246 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e659f22f-1804-4119-a907-353634f17737-config-volume\") pod 
\"collect-profiles-29395380-92w7j\" (UID: \"e659f22f-1804-4119-a907-353634f17737\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395380-92w7j" Nov 21 11:00:00 crc kubenswrapper[4972]: I1121 11:00:00.320963 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e659f22f-1804-4119-a907-353634f17737-secret-volume\") pod \"collect-profiles-29395380-92w7j\" (UID: \"e659f22f-1804-4119-a907-353634f17737\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395380-92w7j" Nov 21 11:00:00 crc kubenswrapper[4972]: I1121 11:00:00.328263 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf59t\" (UniqueName: \"kubernetes.io/projected/e659f22f-1804-4119-a907-353634f17737-kube-api-access-zf59t\") pod \"collect-profiles-29395380-92w7j\" (UID: \"e659f22f-1804-4119-a907-353634f17737\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395380-92w7j" Nov 21 11:00:00 crc kubenswrapper[4972]: I1121 11:00:00.501008 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395380-92w7j" Nov 21 11:00:00 crc kubenswrapper[4972]: I1121 11:00:00.965428 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395380-92w7j"] Nov 21 11:00:01 crc kubenswrapper[4972]: I1121 11:00:01.793221 4972 generic.go:334] "Generic (PLEG): container finished" podID="e659f22f-1804-4119-a907-353634f17737" containerID="1470278944ae5d717505841a1b68e12438ad03ec783d8cda584e3e690b3c85c4" exitCode=0 Nov 21 11:00:01 crc kubenswrapper[4972]: I1121 11:00:01.793290 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395380-92w7j" event={"ID":"e659f22f-1804-4119-a907-353634f17737","Type":"ContainerDied","Data":"1470278944ae5d717505841a1b68e12438ad03ec783d8cda584e3e690b3c85c4"} Nov 21 11:00:01 crc kubenswrapper[4972]: I1121 11:00:01.793595 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395380-92w7j" event={"ID":"e659f22f-1804-4119-a907-353634f17737","Type":"ContainerStarted","Data":"7c6066c6a2e8b3bc8f554a4c4450a450a56f8afda85a3fe3cd43e5d44cd353d2"} Nov 21 11:00:03 crc kubenswrapper[4972]: I1121 11:00:03.202112 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395380-92w7j" Nov 21 11:00:03 crc kubenswrapper[4972]: I1121 11:00:03.357650 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e659f22f-1804-4119-a907-353634f17737-secret-volume\") pod \"e659f22f-1804-4119-a907-353634f17737\" (UID: \"e659f22f-1804-4119-a907-353634f17737\") " Nov 21 11:00:03 crc kubenswrapper[4972]: I1121 11:00:03.357783 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e659f22f-1804-4119-a907-353634f17737-config-volume\") pod \"e659f22f-1804-4119-a907-353634f17737\" (UID: \"e659f22f-1804-4119-a907-353634f17737\") " Nov 21 11:00:03 crc kubenswrapper[4972]: I1121 11:00:03.357913 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf59t\" (UniqueName: \"kubernetes.io/projected/e659f22f-1804-4119-a907-353634f17737-kube-api-access-zf59t\") pod \"e659f22f-1804-4119-a907-353634f17737\" (UID: \"e659f22f-1804-4119-a907-353634f17737\") " Nov 21 11:00:03 crc kubenswrapper[4972]: I1121 11:00:03.358953 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e659f22f-1804-4119-a907-353634f17737-config-volume" (OuterVolumeSpecName: "config-volume") pod "e659f22f-1804-4119-a907-353634f17737" (UID: "e659f22f-1804-4119-a907-353634f17737"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:00:03 crc kubenswrapper[4972]: I1121 11:00:03.363237 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e659f22f-1804-4119-a907-353634f17737-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e659f22f-1804-4119-a907-353634f17737" (UID: "e659f22f-1804-4119-a907-353634f17737"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:00:03 crc kubenswrapper[4972]: I1121 11:00:03.363767 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e659f22f-1804-4119-a907-353634f17737-kube-api-access-zf59t" (OuterVolumeSpecName: "kube-api-access-zf59t") pod "e659f22f-1804-4119-a907-353634f17737" (UID: "e659f22f-1804-4119-a907-353634f17737"). InnerVolumeSpecName "kube-api-access-zf59t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:00:03 crc kubenswrapper[4972]: I1121 11:00:03.459507 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf59t\" (UniqueName: \"kubernetes.io/projected/e659f22f-1804-4119-a907-353634f17737-kube-api-access-zf59t\") on node \"crc\" DevicePath \"\"" Nov 21 11:00:03 crc kubenswrapper[4972]: I1121 11:00:03.459557 4972 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e659f22f-1804-4119-a907-353634f17737-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 11:00:03 crc kubenswrapper[4972]: I1121 11:00:03.459578 4972 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e659f22f-1804-4119-a907-353634f17737-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 11:00:03 crc kubenswrapper[4972]: I1121 11:00:03.813722 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395380-92w7j" event={"ID":"e659f22f-1804-4119-a907-353634f17737","Type":"ContainerDied","Data":"7c6066c6a2e8b3bc8f554a4c4450a450a56f8afda85a3fe3cd43e5d44cd353d2"} Nov 21 11:00:03 crc kubenswrapper[4972]: I1121 11:00:03.814300 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c6066c6a2e8b3bc8f554a4c4450a450a56f8afda85a3fe3cd43e5d44cd353d2" Nov 21 11:00:03 crc kubenswrapper[4972]: I1121 11:00:03.813775 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395380-92w7j" Nov 21 11:00:04 crc kubenswrapper[4972]: I1121 11:00:04.279316 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395335-nbbnv"] Nov 21 11:00:04 crc kubenswrapper[4972]: I1121 11:00:04.288612 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395335-nbbnv"] Nov 21 11:00:05 crc kubenswrapper[4972]: I1121 11:00:05.778137 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0f1691f-af0d-4343-827a-513217babb7d" path="/var/lib/kubelet/pods/e0f1691f-af0d-4343-827a-513217babb7d/volumes" Nov 21 11:00:25 crc kubenswrapper[4972]: I1121 11:00:25.406240 4972 scope.go:117] "RemoveContainer" containerID="4cff9a935e8d7c70e3d4e30bacd44a6aef447885d0d0b33807c157573fab6ea3" Nov 21 11:00:56 crc kubenswrapper[4972]: I1121 11:00:56.178681 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:00:56 crc kubenswrapper[4972]: I1121 11:00:56.179391 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:01:26 crc kubenswrapper[4972]: I1121 11:01:26.179349 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 21 11:01:26 crc kubenswrapper[4972]: I1121 11:01:26.180142 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:01:56 crc kubenswrapper[4972]: I1121 11:01:56.178939 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:01:56 crc kubenswrapper[4972]: I1121 11:01:56.179602 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:01:56 crc kubenswrapper[4972]: I1121 11:01:56.179670 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 11:01:56 crc kubenswrapper[4972]: I1121 11:01:56.180606 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d09766e11e3fabe4af926f8addbec82b361431494255cdf37952ea1f017d3953"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 11:01:56 crc kubenswrapper[4972]: I1121 11:01:56.180739 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://d09766e11e3fabe4af926f8addbec82b361431494255cdf37952ea1f017d3953" gracePeriod=600 Nov 21 11:01:56 crc kubenswrapper[4972]: I1121 11:01:56.978406 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="d09766e11e3fabe4af926f8addbec82b361431494255cdf37952ea1f017d3953" exitCode=0 Nov 21 11:01:56 crc kubenswrapper[4972]: I1121 11:01:56.978473 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"d09766e11e3fabe4af926f8addbec82b361431494255cdf37952ea1f017d3953"} Nov 21 11:01:56 crc kubenswrapper[4972]: I1121 11:01:56.979074 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916"} Nov 21 11:01:56 crc kubenswrapper[4972]: I1121 11:01:56.979101 4972 scope.go:117] "RemoveContainer" containerID="76b4b33a61a4e5f00a7b38bcf1e0ed07ac334aa9e23f2b6c44a54932031fa95b" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.631949 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66d656cccf-skh2p"] Nov 21 11:02:00 crc kubenswrapper[4972]: E1121 11:02:00.632804 4972 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e659f22f-1804-4119-a907-353634f17737" containerName="collect-profiles" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.632822 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="e659f22f-1804-4119-a907-353634f17737" containerName="collect-profiles" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.633037 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="e659f22f-1804-4119-a907-353634f17737" containerName="collect-profiles" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.633921 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66d656cccf-skh2p" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.635614 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.636252 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.636425 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.636562 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.636712 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-wns68" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.646999 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66d656cccf-skh2p"] Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.714318 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f42e567-383d-402c-bcd7-a343de4cecc3-dns-svc\") pod \"dnsmasq-dns-66d656cccf-skh2p\" (UID: \"1f42e567-383d-402c-bcd7-a343de4cecc3\") " pod="openstack/dnsmasq-dns-66d656cccf-skh2p" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.714377 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm22m\" (UniqueName: \"kubernetes.io/projected/1f42e567-383d-402c-bcd7-a343de4cecc3-kube-api-access-sm22m\") pod \"dnsmasq-dns-66d656cccf-skh2p\" (UID: \"1f42e567-383d-402c-bcd7-a343de4cecc3\") " pod="openstack/dnsmasq-dns-66d656cccf-skh2p" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.714415 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f42e567-383d-402c-bcd7-a343de4cecc3-config\") pod \"dnsmasq-dns-66d656cccf-skh2p\" (UID: \"1f42e567-383d-402c-bcd7-a343de4cecc3\") " pod="openstack/dnsmasq-dns-66d656cccf-skh2p" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.829913 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f42e567-383d-402c-bcd7-a343de4cecc3-dns-svc\") pod \"dnsmasq-dns-66d656cccf-skh2p\" (UID: \"1f42e567-383d-402c-bcd7-a343de4cecc3\") " pod="openstack/dnsmasq-dns-66d656cccf-skh2p" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.829979 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm22m\" (UniqueName: \"kubernetes.io/projected/1f42e567-383d-402c-bcd7-a343de4cecc3-kube-api-access-sm22m\") pod 
\"dnsmasq-dns-66d656cccf-skh2p\" (UID: \"1f42e567-383d-402c-bcd7-a343de4cecc3\") " pod="openstack/dnsmasq-dns-66d656cccf-skh2p" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.830017 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f42e567-383d-402c-bcd7-a343de4cecc3-config\") pod \"dnsmasq-dns-66d656cccf-skh2p\" (UID: \"1f42e567-383d-402c-bcd7-a343de4cecc3\") " pod="openstack/dnsmasq-dns-66d656cccf-skh2p" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.831167 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f42e567-383d-402c-bcd7-a343de4cecc3-dns-svc\") pod \"dnsmasq-dns-66d656cccf-skh2p\" (UID: \"1f42e567-383d-402c-bcd7-a343de4cecc3\") " pod="openstack/dnsmasq-dns-66d656cccf-skh2p" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.831185 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f42e567-383d-402c-bcd7-a343de4cecc3-config\") pod \"dnsmasq-dns-66d656cccf-skh2p\" (UID: \"1f42e567-383d-402c-bcd7-a343de4cecc3\") " pod="openstack/dnsmasq-dns-66d656cccf-skh2p" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.855951 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm22m\" (UniqueName: \"kubernetes.io/projected/1f42e567-383d-402c-bcd7-a343de4cecc3-kube-api-access-sm22m\") pod \"dnsmasq-dns-66d656cccf-skh2p\" (UID: \"1f42e567-383d-402c-bcd7-a343de4cecc3\") " pod="openstack/dnsmasq-dns-66d656cccf-skh2p" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.879125 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-777df6d877-4kql8"] Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.880316 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-777df6d877-4kql8" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.909419 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-777df6d877-4kql8"] Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.931963 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a291332e-75bf-4564-bef6-9548f6fb6326-config\") pod \"dnsmasq-dns-777df6d877-4kql8\" (UID: \"a291332e-75bf-4564-bef6-9548f6fb6326\") " pod="openstack/dnsmasq-dns-777df6d877-4kql8" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.932023 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a291332e-75bf-4564-bef6-9548f6fb6326-dns-svc\") pod \"dnsmasq-dns-777df6d877-4kql8\" (UID: \"a291332e-75bf-4564-bef6-9548f6fb6326\") " pod="openstack/dnsmasq-dns-777df6d877-4kql8" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.932047 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4zfr\" (UniqueName: \"kubernetes.io/projected/a291332e-75bf-4564-bef6-9548f6fb6326-kube-api-access-b4zfr\") pod \"dnsmasq-dns-777df6d877-4kql8\" (UID: \"a291332e-75bf-4564-bef6-9548f6fb6326\") " pod="openstack/dnsmasq-dns-777df6d877-4kql8" Nov 21 11:02:00 crc kubenswrapper[4972]: I1121 11:02:00.953946 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66d656cccf-skh2p" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.034494 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a291332e-75bf-4564-bef6-9548f6fb6326-config\") pod \"dnsmasq-dns-777df6d877-4kql8\" (UID: \"a291332e-75bf-4564-bef6-9548f6fb6326\") " pod="openstack/dnsmasq-dns-777df6d877-4kql8" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.035486 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a291332e-75bf-4564-bef6-9548f6fb6326-config\") pod \"dnsmasq-dns-777df6d877-4kql8\" (UID: \"a291332e-75bf-4564-bef6-9548f6fb6326\") " pod="openstack/dnsmasq-dns-777df6d877-4kql8" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.035643 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a291332e-75bf-4564-bef6-9548f6fb6326-dns-svc\") pod \"dnsmasq-dns-777df6d877-4kql8\" (UID: \"a291332e-75bf-4564-bef6-9548f6fb6326\") " pod="openstack/dnsmasq-dns-777df6d877-4kql8" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.035678 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4zfr\" (UniqueName: \"kubernetes.io/projected/a291332e-75bf-4564-bef6-9548f6fb6326-kube-api-access-b4zfr\") pod \"dnsmasq-dns-777df6d877-4kql8\" (UID: \"a291332e-75bf-4564-bef6-9548f6fb6326\") " pod="openstack/dnsmasq-dns-777df6d877-4kql8" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.036362 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a291332e-75bf-4564-bef6-9548f6fb6326-dns-svc\") pod \"dnsmasq-dns-777df6d877-4kql8\" (UID: \"a291332e-75bf-4564-bef6-9548f6fb6326\") " pod="openstack/dnsmasq-dns-777df6d877-4kql8" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.084722 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4zfr\" (UniqueName: \"kubernetes.io/projected/a291332e-75bf-4564-bef6-9548f6fb6326-kube-api-access-b4zfr\") pod \"dnsmasq-dns-777df6d877-4kql8\" (UID: \"a291332e-75bf-4564-bef6-9548f6fb6326\") " pod="openstack/dnsmasq-dns-777df6d877-4kql8" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.203270 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-777df6d877-4kql8" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.428324 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66d656cccf-skh2p"] Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.634945 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-777df6d877-4kql8"] Nov 21 11:02:01 crc kubenswrapper[4972]: W1121 11:02:01.636736 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda291332e_75bf_4564_bef6_9548f6fb6326.slice/crio-5fa3d2b47c72efe96a17e10a4017cb59121193c9f8233c7e3ac789788d0c5280 WatchSource:0}: Error finding container 5fa3d2b47c72efe96a17e10a4017cb59121193c9f8233c7e3ac789788d0c5280: Status 404 returned error can't find the container with id 5fa3d2b47c72efe96a17e10a4017cb59121193c9f8233c7e3ac789788d0c5280 Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.733937 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.735268 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.737621 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.738003 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.739634 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-2n5tj" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.740108 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.743710 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.757959 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.847342 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b588a7a1-ba0c-42f3-a7e5-a446e8625180-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.847401 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b588a7a1-ba0c-42f3-a7e5-a446e8625180-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.847421 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b588a7a1-ba0c-42f3-a7e5-a446e8625180-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.847442 4972 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b588a7a1-ba0c-42f3-a7e5-a446e8625180-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.847637 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2z47\" (UniqueName: \"kubernetes.io/projected/b588a7a1-ba0c-42f3-a7e5-a446e8625180-kube-api-access-r2z47\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.847723 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b588a7a1-ba0c-42f3-a7e5-a446e8625180-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.847791 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b588a7a1-ba0c-42f3-a7e5-a446e8625180-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.847913 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.847994 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b588a7a1-ba0c-42f3-a7e5-a446e8625180-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.949279 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b588a7a1-ba0c-42f3-a7e5-a446e8625180-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.949343 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b588a7a1-ba0c-42f3-a7e5-a446e8625180-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.949422 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b588a7a1-ba0c-42f3-a7e5-a446e8625180-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.949492 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2z47\" 
(UniqueName: \"kubernetes.io/projected/b588a7a1-ba0c-42f3-a7e5-a446e8625180-kube-api-access-r2z47\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.949534 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b588a7a1-ba0c-42f3-a7e5-a446e8625180-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.949582 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b588a7a1-ba0c-42f3-a7e5-a446e8625180-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.949625 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.949723 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b588a7a1-ba0c-42f3-a7e5-a446e8625180-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.949818 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b588a7a1-ba0c-42f3-a7e5-a446e8625180-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.950925 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b588a7a1-ba0c-42f3-a7e5-a446e8625180-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.951608 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b588a7a1-ba0c-42f3-a7e5-a446e8625180-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.951825 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b588a7a1-ba0c-42f3-a7e5-a446e8625180-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.952963 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b588a7a1-ba0c-42f3-a7e5-a446e8625180-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" 
Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.956875 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b588a7a1-ba0c-42f3-a7e5-a446e8625180-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.957399 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b588a7a1-ba0c-42f3-a7e5-a446e8625180-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.959561 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b588a7a1-ba0c-42f3-a7e5-a446e8625180-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.960488 4972 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.960542 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a02877631be6b2e59b565b40cbb8409747d9930538e396e0a986d5968ecc2ec3/globalmount\"" pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.973919 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2z47\" (UniqueName: \"kubernetes.io/projected/b588a7a1-ba0c-42f3-a7e5-a446e8625180-kube-api-access-r2z47\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:01 crc kubenswrapper[4972]: I1121 11:02:01.998963 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6\") pod \"rabbitmq-server-0\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " pod="openstack/rabbitmq-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.033309 4972 generic.go:334] "Generic (PLEG): container finished" podID="a291332e-75bf-4564-bef6-9548f6fb6326" containerID="1d38b54546ea566b4c16b70e38dac0c6c76157be41acacc197316c44e3d2b68b" exitCode=0 Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.033404 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-777df6d877-4kql8" event={"ID":"a291332e-75bf-4564-bef6-9548f6fb6326","Type":"ContainerDied","Data":"1d38b54546ea566b4c16b70e38dac0c6c76157be41acacc197316c44e3d2b68b"} Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.033441 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-777df6d877-4kql8" event={"ID":"a291332e-75bf-4564-bef6-9548f6fb6326","Type":"ContainerStarted","Data":"5fa3d2b47c72efe96a17e10a4017cb59121193c9f8233c7e3ac789788d0c5280"} Nov 21 11:02:02 crc kubenswrapper[4972]: 
I1121 11:02:02.038002 4972 generic.go:334] "Generic (PLEG): container finished" podID="1f42e567-383d-402c-bcd7-a343de4cecc3" containerID="c92e753a582515bfe58c55a530d065a4e3a660947439a961e949499f20a34e86" exitCode=0 Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.038034 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d656cccf-skh2p" event={"ID":"1f42e567-383d-402c-bcd7-a343de4cecc3","Type":"ContainerDied","Data":"c92e753a582515bfe58c55a530d065a4e3a660947439a961e949499f20a34e86"} Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.038058 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d656cccf-skh2p" event={"ID":"1f42e567-383d-402c-bcd7-a343de4cecc3","Type":"ContainerStarted","Data":"3aff443b0adf91ee1f5f0205183166c10114d6b938483772651d6ca273161161"} Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.080894 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.083392 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.089127 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-j65c8" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.089400 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.089567 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.089720 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.089748 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.101796 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.247896 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.264668 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/214c173a-580d-4c53-b877-63bec03cb169-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.264962 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/214c173a-580d-4c53-b877-63bec03cb169-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.265075 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/214c173a-580d-4c53-b877-63bec03cb169-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.265192 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/214c173a-580d-4c53-b877-63bec03cb169-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.265344 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zh5h\" (UniqueName: \"kubernetes.io/projected/214c173a-580d-4c53-b877-63bec03cb169-kube-api-access-7zh5h\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.265467 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/214c173a-580d-4c53-b877-63bec03cb169-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.265572 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/214c173a-580d-4c53-b877-63bec03cb169-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.265676 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.265784 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/214c173a-580d-4c53-b877-63bec03cb169-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.367726 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zh5h\" (UniqueName: \"kubernetes.io/projected/214c173a-580d-4c53-b877-63bec03cb169-kube-api-access-7zh5h\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.368125 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/214c173a-580d-4c53-b877-63bec03cb169-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.368148 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/214c173a-580d-4c53-b877-63bec03cb169-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.369252 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/214c173a-580d-4c53-b877-63bec03cb169-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.369325 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.369996 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/214c173a-580d-4c53-b877-63bec03cb169-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.370486 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/214c173a-580d-4c53-b877-63bec03cb169-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.370870 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/214c173a-580d-4c53-b877-63bec03cb169-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.370520 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/214c173a-580d-4c53-b877-63bec03cb169-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.371004 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/214c173a-580d-4c53-b877-63bec03cb169-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.371055 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/214c173a-580d-4c53-b877-63bec03cb169-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.371373 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/214c173a-580d-4c53-b877-63bec03cb169-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.377715 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/214c173a-580d-4c53-b877-63bec03cb169-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.388459 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/214c173a-580d-4c53-b877-63bec03cb169-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.388958 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/214c173a-580d-4c53-b877-63bec03cb169-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.389225 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/214c173a-580d-4c53-b877-63bec03cb169-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.403659 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zh5h\" (UniqueName: \"kubernetes.io/projected/214c173a-580d-4c53-b877-63bec03cb169-kube-api-access-7zh5h\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.411389 4972 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.411431 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/66f7b2d6020d501055f35bd0798f7cbb44853b34e3b28470332a3d09f3e9e5ed/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.514299 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5\") pod \"rabbitmq-cell1-server-0\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.733199 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:02 crc kubenswrapper[4972]: I1121 11:02:02.802185 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.050148 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d656cccf-skh2p" event={"ID":"1f42e567-383d-402c-bcd7-a343de4cecc3","Type":"ContainerStarted","Data":"7d1c9d19382537de35ba35bbd5f439e9e141f5d74b919e2aebad0ab8c3b841d2"} Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.050568 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66d656cccf-skh2p" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.052200 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-777df6d877-4kql8" event={"ID":"a291332e-75bf-4564-bef6-9548f6fb6326","Type":"ContainerStarted","Data":"ff386d85b033de110900d31b7f988b23bcaceec8ed8956939f1740a31d28445d"} Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.052456 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-777df6d877-4kql8" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.053957 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b588a7a1-ba0c-42f3-a7e5-a446e8625180","Type":"ContainerStarted","Data":"fff89eaa314648e4440c99943cd78143f7acf97d72f5508c7224f0e46ff105f5"} Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.075489 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66d656cccf-skh2p" podStartSLOduration=3.075464308 podStartE2EDuration="3.075464308s" podCreationTimestamp="2025-11-21 11:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:02:03.06574147 +0000 UTC m=+4868.174884008" watchObservedRunningTime="2025-11-21 11:02:03.075464308 +0000 UTC m=+4868.184606846" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.091987 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-777df6d877-4kql8" podStartSLOduration=3.091965886 podStartE2EDuration="3.091965886s" podCreationTimestamp="2025-11-21 11:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:02:03.087633421 +0000 UTC m=+4868.196775969" watchObservedRunningTime="2025-11-21 11:02:03.091965886 +0000 UTC m=+4868.201108384" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.217179 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.485645 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.486803 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.488904 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-mqmks" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.491771 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.492074 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.492172 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.509054 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.509610 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.587901 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b2sb\" (UniqueName: \"kubernetes.io/projected/83330eac-fef8-4cfc-9e9b-2ff1fea0d559-kube-api-access-5b2sb\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.587960 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/83330eac-fef8-4cfc-9e9b-2ff1fea0d559-kolla-config\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.588058 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3805b90d-3b34-4ea0-9762-03f3e067fb41\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3805b90d-3b34-4ea0-9762-03f3e067fb41\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.588205 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/83330eac-fef8-4cfc-9e9b-2ff1fea0d559-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.588238 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/83330eac-fef8-4cfc-9e9b-2ff1fea0d559-config-data-generated\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.588286 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83330eac-fef8-4cfc-9e9b-2ff1fea0d559-operator-scripts\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.588329 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/83330eac-fef8-4cfc-9e9b-2ff1fea0d559-config-data-default\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.588359 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83330eac-fef8-4cfc-9e9b-2ff1fea0d559-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.690276 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83330eac-fef8-4cfc-9e9b-2ff1fea0d559-operator-scripts\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.690376 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/83330eac-fef8-4cfc-9e9b-2ff1fea0d559-config-data-default\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.690424 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83330eac-fef8-4cfc-9e9b-2ff1fea0d559-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.690555 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b2sb\" (UniqueName: \"kubernetes.io/projected/83330eac-fef8-4cfc-9e9b-2ff1fea0d559-kube-api-access-5b2sb\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.690604 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/83330eac-fef8-4cfc-9e9b-2ff1fea0d559-kolla-config\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.690800 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3805b90d-3b34-4ea0-9762-03f3e067fb41\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3805b90d-3b34-4ea0-9762-03f3e067fb41\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.690891 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/83330eac-fef8-4cfc-9e9b-2ff1fea0d559-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.690930 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/83330eac-fef8-4cfc-9e9b-2ff1fea0d559-config-data-generated\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.691324 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/83330eac-fef8-4cfc-9e9b-2ff1fea0d559-config-data-generated\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.691867 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83330eac-fef8-4cfc-9e9b-2ff1fea0d559-operator-scripts\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.691966 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/83330eac-fef8-4cfc-9e9b-2ff1fea0d559-config-data-default\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.692031 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/83330eac-fef8-4cfc-9e9b-2ff1fea0d559-kolla-config\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.695221 4972 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.695260 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3805b90d-3b34-4ea0-9762-03f3e067fb41\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3805b90d-3b34-4ea0-9762-03f3e067fb41\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c422e093cad2aecd777b9553d5b6af19a7b065e4c88e21e29e2a4d299b3d86ff/globalmount\"" pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.697478 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/83330eac-fef8-4cfc-9e9b-2ff1fea0d559-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.697707 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83330eac-fef8-4cfc-9e9b-2ff1fea0d559-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.706193 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b2sb\" (UniqueName: \"kubernetes.io/projected/83330eac-fef8-4cfc-9e9b-2ff1fea0d559-kube-api-access-5b2sb\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.733486 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3805b90d-3b34-4ea0-9762-03f3e067fb41\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3805b90d-3b34-4ea0-9762-03f3e067fb41\") pod \"openstack-galera-0\" (UID: \"83330eac-fef8-4cfc-9e9b-2ff1fea0d559\") " pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.772702 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.773558 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.775577 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.775919 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-wqf5k" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.786929 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.817760 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.892940 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrcmq\" (UniqueName: \"kubernetes.io/projected/8476a6e2-19c1-4070-a33c-690fad3f8c1b-kube-api-access-hrcmq\") pod \"memcached-0\" (UID: \"8476a6e2-19c1-4070-a33c-690fad3f8c1b\") " pod="openstack/memcached-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.893298 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8476a6e2-19c1-4070-a33c-690fad3f8c1b-kolla-config\") pod \"memcached-0\" (UID: \"8476a6e2-19c1-4070-a33c-690fad3f8c1b\") " pod="openstack/memcached-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.893405 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8476a6e2-19c1-4070-a33c-690fad3f8c1b-config-data\") pod \"memcached-0\" (UID: \"8476a6e2-19c1-4070-a33c-690fad3f8c1b\") " pod="openstack/memcached-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.994546 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8476a6e2-19c1-4070-a33c-690fad3f8c1b-config-data\") pod \"memcached-0\" (UID: \"8476a6e2-19c1-4070-a33c-690fad3f8c1b\") " pod="openstack/memcached-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.994808 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrcmq\" (UniqueName: \"kubernetes.io/projected/8476a6e2-19c1-4070-a33c-690fad3f8c1b-kube-api-access-hrcmq\") pod \"memcached-0\" (UID: \"8476a6e2-19c1-4070-a33c-690fad3f8c1b\") " pod="openstack/memcached-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.994909 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8476a6e2-19c1-4070-a33c-690fad3f8c1b-kolla-config\") pod \"memcached-0\" (UID: \"8476a6e2-19c1-4070-a33c-690fad3f8c1b\") " pod="openstack/memcached-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.995526 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8476a6e2-19c1-4070-a33c-690fad3f8c1b-kolla-config\") pod \"memcached-0\" (UID: \"8476a6e2-19c1-4070-a33c-690fad3f8c1b\") " pod="openstack/memcached-0" Nov 21 11:02:03 crc kubenswrapper[4972]: I1121 11:02:03.996209 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8476a6e2-19c1-4070-a33c-690fad3f8c1b-config-data\") pod \"memcached-0\" (UID: \"8476a6e2-19c1-4070-a33c-690fad3f8c1b\") " pod="openstack/memcached-0" Nov 21 11:02:04 crc kubenswrapper[4972]: I1121 11:02:04.033501 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrcmq\" (UniqueName: \"kubernetes.io/projected/8476a6e2-19c1-4070-a33c-690fad3f8c1b-kube-api-access-hrcmq\") pod \"memcached-0\" (UID: \"8476a6e2-19c1-4070-a33c-690fad3f8c1b\") " pod="openstack/memcached-0" Nov 21 11:02:04 crc kubenswrapper[4972]: I1121 11:02:04.089798 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 21 11:02:04 crc kubenswrapper[4972]: I1121 11:02:04.093754 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"214c173a-580d-4c53-b877-63bec03cb169","Type":"ContainerStarted","Data":"15bc9dc9108f2dfed119d189fc9188a9099dfcb8b759f8793d16134d16394465"} Nov 21 11:02:04 crc kubenswrapper[4972]: I1121 11:02:04.367648 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 21 11:02:04 crc kubenswrapper[4972]: W1121 11:02:04.449638 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83330eac_fef8_4cfc_9e9b_2ff1fea0d559.slice/crio-38b93afea61ab8464c47852310e32ad215a323644b8a0f658431aa4e57b23561 WatchSource:0}: Error finding container 38b93afea61ab8464c47852310e32ad215a323644b8a0f658431aa4e57b23561: Status 404 returned error can't find the container with id 38b93afea61ab8464c47852310e32ad215a323644b8a0f658431aa4e57b23561 Nov 21 11:02:04 crc kubenswrapper[4972]: W1121 11:02:04.605865 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8476a6e2_19c1_4070_a33c_690fad3f8c1b.slice/crio-92f7d3f0c898e593fb5a92d4d3df42116b448e66fad9436a7df516dbe0e6bdb6 WatchSource:0}: Error finding container 92f7d3f0c898e593fb5a92d4d3df42116b448e66fad9436a7df516dbe0e6bdb6: Status 404 returned error can't find the container with id 92f7d3f0c898e593fb5a92d4d3df42116b448e66fad9436a7df516dbe0e6bdb6 Nov 21 11:02:04 crc kubenswrapper[4972]: I1121 11:02:04.607653 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 21 11:02:04 crc kubenswrapper[4972]: I1121 11:02:04.917663 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 21 11:02:04 crc kubenswrapper[4972]: I1121 11:02:04.920256 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:04 crc kubenswrapper[4972]: I1121 11:02:04.923373 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 21 11:02:04 crc kubenswrapper[4972]: I1121 11:02:04.925685 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-9ppnd" Nov 21 11:02:04 crc kubenswrapper[4972]: I1121 11:02:04.928994 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 21 11:02:04 crc kubenswrapper[4972]: I1121 11:02:04.929297 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 21 11:02:04 crc kubenswrapper[4972]: I1121 11:02:04.936738 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.011426 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bc01c508-b649-456b-8b19-22661b56f192-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.011490 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc01c508-b649-456b-8b19-22661b56f192-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.011526 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bc01c508-b649-456b-8b19-22661b56f192-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.011748 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc01c508-b649-456b-8b19-22661b56f192-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.011868 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bc01c508-b649-456b-8b19-22661b56f192-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.011973 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf2jh\" (UniqueName: \"kubernetes.io/projected/bc01c508-b649-456b-8b19-22661b56f192-kube-api-access-qf2jh\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.012019 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/bc01c508-b649-456b-8b19-22661b56f192-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.012127 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c4ee25c3-e735-4e7c-b74d-33dd49d12bb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c4ee25c3-e735-4e7c-b74d-33dd49d12bb4\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.103438 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"214c173a-580d-4c53-b877-63bec03cb169","Type":"ContainerStarted","Data":"ab1601d5ab5b33546a1771b01e27fce2fed0db134291ffea748466fb0566f40c"} Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.106076 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b588a7a1-ba0c-42f3-a7e5-a446e8625180","Type":"ContainerStarted","Data":"0b203a41d0e248541371ea3557ea64ac272e62c5d997d40e77a4041303b8e8ac"} Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.107865 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"83330eac-fef8-4cfc-9e9b-2ff1fea0d559","Type":"ContainerStarted","Data":"5650ac6a06173423a5d043f35c7b008ee831e48c756c2f77ff0268f54a1cc31d"} Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.107904 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"83330eac-fef8-4cfc-9e9b-2ff1fea0d559","Type":"ContainerStarted","Data":"38b93afea61ab8464c47852310e32ad215a323644b8a0f658431aa4e57b23561"} Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.110710 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8476a6e2-19c1-4070-a33c-690fad3f8c1b","Type":"ContainerStarted","Data":"2f629a638f4c824eb18573114d7dab68497858bbd768e798eddc38b9e3523664"} Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.110985 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8476a6e2-19c1-4070-a33c-690fad3f8c1b","Type":"ContainerStarted","Data":"92f7d3f0c898e593fb5a92d4d3df42116b448e66fad9436a7df516dbe0e6bdb6"} Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.111139 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.114574 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bc01c508-b649-456b-8b19-22661b56f192-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.114630 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc01c508-b649-456b-8b19-22661b56f192-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.114663 4972 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bc01c508-b649-456b-8b19-22661b56f192-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.114703 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc01c508-b649-456b-8b19-22661b56f192-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.114738 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bc01c508-b649-456b-8b19-22661b56f192-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.114784 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf2jh\" (UniqueName: \"kubernetes.io/projected/bc01c508-b649-456b-8b19-22661b56f192-kube-api-access-qf2jh\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.114814 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc01c508-b649-456b-8b19-22661b56f192-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.114989 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c4ee25c3-e735-4e7c-b74d-33dd49d12bb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c4ee25c3-e735-4e7c-b74d-33dd49d12bb4\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.115396 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bc01c508-b649-456b-8b19-22661b56f192-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.116473 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bc01c508-b649-456b-8b19-22661b56f192-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.118187 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bc01c508-b649-456b-8b19-22661b56f192-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.120269 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bc01c508-b649-456b-8b19-22661b56f192-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.121905 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc01c508-b649-456b-8b19-22661b56f192-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.133374 4972 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.133611 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c4ee25c3-e735-4e7c-b74d-33dd49d12bb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c4ee25c3-e735-4e7c-b74d-33dd49d12bb4\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/224d1a6f9b5297e83de9e0e5af17aef9c12a58ae484d42dbba228e6b7be79442/globalmount\"" pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.134916 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc01c508-b649-456b-8b19-22661b56f192-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.147281 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf2jh\" (UniqueName: \"kubernetes.io/projected/bc01c508-b649-456b-8b19-22661b56f192-kube-api-access-qf2jh\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.157664 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.157648853 podStartE2EDuration="2.157648853s" podCreationTimestamp="2025-11-21 11:02:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:02:05.156500343 +0000 UTC m=+4870.265642841" watchObservedRunningTime="2025-11-21 11:02:05.157648853 +0000 UTC m=+4870.266791351" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.188726 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c4ee25c3-e735-4e7c-b74d-33dd49d12bb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c4ee25c3-e735-4e7c-b74d-33dd49d12bb4\") pod \"openstack-cell1-galera-0\" (UID: \"bc01c508-b649-456b-8b19-22661b56f192\") " pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.277746 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:05 crc kubenswrapper[4972]: I1121 11:02:05.754628 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 21 11:02:05 crc kubenswrapper[4972]: W1121 11:02:05.757672 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc01c508_b649_456b_8b19_22661b56f192.slice/crio-2278acb724e7934af0f00b2934a5b84722f28e866cf804828756b2aa015aab9d WatchSource:0}: Error finding container 2278acb724e7934af0f00b2934a5b84722f28e866cf804828756b2aa015aab9d: Status 404 returned error can't find the container with id 2278acb724e7934af0f00b2934a5b84722f28e866cf804828756b2aa015aab9d Nov 21 11:02:06 crc kubenswrapper[4972]: I1121 11:02:06.118747 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bc01c508-b649-456b-8b19-22661b56f192","Type":"ContainerStarted","Data":"06753a30a7a412a028309894aa7b21fb90a663f014ed5a8019e7c18b45ae8e52"} Nov 21 11:02:06 crc kubenswrapper[4972]: I1121 11:02:06.119211 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bc01c508-b649-456b-8b19-22661b56f192","Type":"ContainerStarted","Data":"2278acb724e7934af0f00b2934a5b84722f28e866cf804828756b2aa015aab9d"} Nov 21 11:02:09 crc kubenswrapper[4972]: I1121 11:02:09.090967 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 21 11:02:10 crc kubenswrapper[4972]: I1121 11:02:10.168051 4972 generic.go:334] "Generic (PLEG): container finished" podID="83330eac-fef8-4cfc-9e9b-2ff1fea0d559" containerID="5650ac6a06173423a5d043f35c7b008ee831e48c756c2f77ff0268f54a1cc31d" exitCode=0 Nov 21 11:02:10 crc kubenswrapper[4972]: I1121 11:02:10.168182 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"83330eac-fef8-4cfc-9e9b-2ff1fea0d559","Type":"ContainerDied","Data":"5650ac6a06173423a5d043f35c7b008ee831e48c756c2f77ff0268f54a1cc31d"} Nov 21 11:02:10 crc kubenswrapper[4972]: I1121 11:02:10.955009 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66d656cccf-skh2p" Nov 21 11:02:11 crc kubenswrapper[4972]: I1121 11:02:11.176663 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"83330eac-fef8-4cfc-9e9b-2ff1fea0d559","Type":"ContainerStarted","Data":"24abbc8be98f0691e85128fe5f09edb37f341891de1ea5745891f241d0a0425b"} Nov 21 11:02:11 crc kubenswrapper[4972]: I1121 11:02:11.178630 4972 generic.go:334] "Generic (PLEG): container finished" podID="bc01c508-b649-456b-8b19-22661b56f192" containerID="06753a30a7a412a028309894aa7b21fb90a663f014ed5a8019e7c18b45ae8e52" exitCode=0 Nov 21 11:02:11 crc kubenswrapper[4972]: I1121 11:02:11.178675 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bc01c508-b649-456b-8b19-22661b56f192","Type":"ContainerDied","Data":"06753a30a7a412a028309894aa7b21fb90a663f014ed5a8019e7c18b45ae8e52"} Nov 21 11:02:11 crc kubenswrapper[4972]: I1121 11:02:11.202106 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.202080308 podStartE2EDuration="9.202080308s" podCreationTimestamp="2025-11-21 11:02:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-11-21 11:02:11.197172738 +0000 UTC m=+4876.306315266" watchObservedRunningTime="2025-11-21 11:02:11.202080308 +0000 UTC m=+4876.311222826" Nov 21 11:02:11 crc kubenswrapper[4972]: I1121 11:02:11.204898 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-777df6d877-4kql8" Nov 21 11:02:11 crc kubenswrapper[4972]: I1121 11:02:11.276109 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66d656cccf-skh2p"] Nov 21 11:02:11 crc kubenswrapper[4972]: I1121 11:02:11.276391 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66d656cccf-skh2p" podUID="1f42e567-383d-402c-bcd7-a343de4cecc3" containerName="dnsmasq-dns" containerID="cri-o://7d1c9d19382537de35ba35bbd5f439e9e141f5d74b919e2aebad0ab8c3b841d2" gracePeriod=10 Nov 21 11:02:11 crc kubenswrapper[4972]: I1121 11:02:11.818312 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66d656cccf-skh2p" Nov 21 11:02:11 crc kubenswrapper[4972]: I1121 11:02:11.950678 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm22m\" (UniqueName: \"kubernetes.io/projected/1f42e567-383d-402c-bcd7-a343de4cecc3-kube-api-access-sm22m\") pod \"1f42e567-383d-402c-bcd7-a343de4cecc3\" (UID: \"1f42e567-383d-402c-bcd7-a343de4cecc3\") " Nov 21 11:02:11 crc kubenswrapper[4972]: I1121 11:02:11.950788 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f42e567-383d-402c-bcd7-a343de4cecc3-dns-svc\") pod \"1f42e567-383d-402c-bcd7-a343de4cecc3\" (UID: \"1f42e567-383d-402c-bcd7-a343de4cecc3\") " Nov 21 11:02:11 crc kubenswrapper[4972]: I1121 11:02:11.950864 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f42e567-383d-402c-bcd7-a343de4cecc3-config\") pod \"1f42e567-383d-402c-bcd7-a343de4cecc3\" (UID: \"1f42e567-383d-402c-bcd7-a343de4cecc3\") " Nov 21 11:02:11 crc kubenswrapper[4972]: I1121 11:02:11.955621 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f42e567-383d-402c-bcd7-a343de4cecc3-kube-api-access-sm22m" (OuterVolumeSpecName: "kube-api-access-sm22m") pod "1f42e567-383d-402c-bcd7-a343de4cecc3" (UID: "1f42e567-383d-402c-bcd7-a343de4cecc3"). InnerVolumeSpecName "kube-api-access-sm22m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:02:11 crc kubenswrapper[4972]: I1121 11:02:11.987868 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f42e567-383d-402c-bcd7-a343de4cecc3-config" (OuterVolumeSpecName: "config") pod "1f42e567-383d-402c-bcd7-a343de4cecc3" (UID: "1f42e567-383d-402c-bcd7-a343de4cecc3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:02:11 crc kubenswrapper[4972]: I1121 11:02:11.989541 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f42e567-383d-402c-bcd7-a343de4cecc3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1f42e567-383d-402c-bcd7-a343de4cecc3" (UID: "1f42e567-383d-402c-bcd7-a343de4cecc3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:02:12 crc kubenswrapper[4972]: I1121 11:02:12.052397 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f42e567-383d-402c-bcd7-a343de4cecc3-config\") on node \"crc\" DevicePath \"\"" Nov 21 11:02:12 crc kubenswrapper[4972]: I1121 11:02:12.052454 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm22m\" (UniqueName: \"kubernetes.io/projected/1f42e567-383d-402c-bcd7-a343de4cecc3-kube-api-access-sm22m\") on node \"crc\" DevicePath \"\"" Nov 21 11:02:12 crc kubenswrapper[4972]: I1121 11:02:12.052475 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f42e567-383d-402c-bcd7-a343de4cecc3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 11:02:12 crc kubenswrapper[4972]: I1121 11:02:12.190043 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bc01c508-b649-456b-8b19-22661b56f192","Type":"ContainerStarted","Data":"9657f7557873f69dc21a7d583c9c27a7d9d1d374dddabbbb5214487cb1f8eef8"} Nov 21 11:02:12 crc kubenswrapper[4972]: I1121 11:02:12.192274 4972 generic.go:334] "Generic (PLEG): container finished" podID="1f42e567-383d-402c-bcd7-a343de4cecc3" containerID="7d1c9d19382537de35ba35bbd5f439e9e141f5d74b919e2aebad0ab8c3b841d2" exitCode=0 Nov 21 11:02:12 crc kubenswrapper[4972]: I1121 11:02:12.192333 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d656cccf-skh2p" event={"ID":"1f42e567-383d-402c-bcd7-a343de4cecc3","Type":"ContainerDied","Data":"7d1c9d19382537de35ba35bbd5f439e9e141f5d74b919e2aebad0ab8c3b841d2"} Nov 21 11:02:12 crc kubenswrapper[4972]: I1121 11:02:12.192343 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66d656cccf-skh2p" Nov 21 11:02:12 crc kubenswrapper[4972]: I1121 11:02:12.192379 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d656cccf-skh2p" event={"ID":"1f42e567-383d-402c-bcd7-a343de4cecc3","Type":"ContainerDied","Data":"3aff443b0adf91ee1f5f0205183166c10114d6b938483772651d6ca273161161"} Nov 21 11:02:12 crc kubenswrapper[4972]: I1121 11:02:12.192409 4972 scope.go:117] "RemoveContainer" containerID="7d1c9d19382537de35ba35bbd5f439e9e141f5d74b919e2aebad0ab8c3b841d2" Nov 21 11:02:12 crc kubenswrapper[4972]: I1121 11:02:12.217011 4972 scope.go:117] "RemoveContainer" containerID="c92e753a582515bfe58c55a530d065a4e3a660947439a961e949499f20a34e86" Nov 21 11:02:12 crc kubenswrapper[4972]: I1121 11:02:12.231109 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.231085974 podStartE2EDuration="9.231085974s" podCreationTimestamp="2025-11-21 11:02:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:02:12.228329321 +0000 UTC m=+4877.337471839" watchObservedRunningTime="2025-11-21 11:02:12.231085974 +0000 UTC m=+4877.340228482" Nov 21 11:02:12 crc kubenswrapper[4972]: I1121 11:02:12.260429 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66d656cccf-skh2p"] Nov 21 11:02:12 crc kubenswrapper[4972]: I1121 11:02:12.263731 4972 scope.go:117] "RemoveContainer" containerID="7d1c9d19382537de35ba35bbd5f439e9e141f5d74b919e2aebad0ab8c3b841d2" Nov 21 11:02:12 crc kubenswrapper[4972]: E1121 11:02:12.264403 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d1c9d19382537de35ba35bbd5f439e9e141f5d74b919e2aebad0ab8c3b841d2\": container with ID starting with 7d1c9d19382537de35ba35bbd5f439e9e141f5d74b919e2aebad0ab8c3b841d2 not found: ID does not exist" containerID="7d1c9d19382537de35ba35bbd5f439e9e141f5d74b919e2aebad0ab8c3b841d2" Nov 21 11:02:12 crc kubenswrapper[4972]: I1121 11:02:12.264435 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d1c9d19382537de35ba35bbd5f439e9e141f5d74b919e2aebad0ab8c3b841d2"} err="failed to get container status \"7d1c9d19382537de35ba35bbd5f439e9e141f5d74b919e2aebad0ab8c3b841d2\": rpc error: code = NotFound desc = could not find container \"7d1c9d19382537de35ba35bbd5f439e9e141f5d74b919e2aebad0ab8c3b841d2\": container with ID starting with 7d1c9d19382537de35ba35bbd5f439e9e141f5d74b919e2aebad0ab8c3b841d2 not found: ID does not exist" Nov 21 11:02:12 crc kubenswrapper[4972]: I1121 11:02:12.264456 4972 scope.go:117] "RemoveContainer" containerID="c92e753a582515bfe58c55a530d065a4e3a660947439a961e949499f20a34e86" Nov 21 11:02:12 crc kubenswrapper[4972]: E1121 11:02:12.265029 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c92e753a582515bfe58c55a530d065a4e3a660947439a961e949499f20a34e86\": container with ID starting with c92e753a582515bfe58c55a530d065a4e3a660947439a961e949499f20a34e86 not found: ID does not exist" containerID="c92e753a582515bfe58c55a530d065a4e3a660947439a961e949499f20a34e86" Nov 21 11:02:12 crc kubenswrapper[4972]: I1121 11:02:12.265094 4972 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c92e753a582515bfe58c55a530d065a4e3a660947439a961e949499f20a34e86"} err="failed to get container status \"c92e753a582515bfe58c55a530d065a4e3a660947439a961e949499f20a34e86\": rpc error: code = NotFound desc = could not find container \"c92e753a582515bfe58c55a530d065a4e3a660947439a961e949499f20a34e86\": container with ID starting with c92e753a582515bfe58c55a530d065a4e3a660947439a961e949499f20a34e86 not found: ID does not exist" Nov 21 11:02:12 crc kubenswrapper[4972]: I1121 11:02:12.266666 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66d656cccf-skh2p"] Nov 21 11:02:13 crc kubenswrapper[4972]: I1121 11:02:13.777785 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f42e567-383d-402c-bcd7-a343de4cecc3" path="/var/lib/kubelet/pods/1f42e567-383d-402c-bcd7-a343de4cecc3/volumes" Nov 21 11:02:13 crc kubenswrapper[4972]: I1121 11:02:13.818557 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 21 11:02:13 crc kubenswrapper[4972]: I1121 11:02:13.819036 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 21 11:02:15 crc kubenswrapper[4972]: I1121 11:02:15.278883 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:15 crc kubenswrapper[4972]: I1121 11:02:15.279193 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:16 crc kubenswrapper[4972]: I1121 11:02:16.129900 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 21 11:02:16 crc kubenswrapper[4972]: I1121 11:02:16.249628 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 21 11:02:17 crc kubenswrapper[4972]: I1121 11:02:17.530737 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:17 crc kubenswrapper[4972]: I1121 11:02:17.653254 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 21 11:02:37 crc kubenswrapper[4972]: I1121 11:02:37.439915 4972 generic.go:334] "Generic (PLEG): container finished" podID="b588a7a1-ba0c-42f3-a7e5-a446e8625180" containerID="0b203a41d0e248541371ea3557ea64ac272e62c5d997d40e77a4041303b8e8ac" exitCode=0 Nov 21 11:02:37 crc kubenswrapper[4972]: I1121 11:02:37.440343 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b588a7a1-ba0c-42f3-a7e5-a446e8625180","Type":"ContainerDied","Data":"0b203a41d0e248541371ea3557ea64ac272e62c5d997d40e77a4041303b8e8ac"} Nov 21 11:02:38 crc kubenswrapper[4972]: I1121 11:02:38.451022 4972 generic.go:334] "Generic (PLEG): container finished" podID="214c173a-580d-4c53-b877-63bec03cb169" containerID="ab1601d5ab5b33546a1771b01e27fce2fed0db134291ffea748466fb0566f40c" exitCode=0 Nov 21 11:02:38 crc kubenswrapper[4972]: I1121 11:02:38.451085 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"214c173a-580d-4c53-b877-63bec03cb169","Type":"ContainerDied","Data":"ab1601d5ab5b33546a1771b01e27fce2fed0db134291ffea748466fb0566f40c"} Nov 21 11:02:38 crc kubenswrapper[4972]: I1121 11:02:38.455096 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"b588a7a1-ba0c-42f3-a7e5-a446e8625180","Type":"ContainerStarted","Data":"8b8dc9ad424bdcd83869725ac7fa68fbbfef9537a427cec407b28c0b1f1700cb"} Nov 21 11:02:38 crc kubenswrapper[4972]: I1121 11:02:38.455625 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 21 11:02:38 crc kubenswrapper[4972]: I1121 11:02:38.530176 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.530146223 podStartE2EDuration="38.530146223s" podCreationTimestamp="2025-11-21 11:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:02:38.518325849 +0000 UTC m=+4903.627468387" watchObservedRunningTime="2025-11-21 11:02:38.530146223 +0000 UTC m=+4903.639288721" Nov 21 11:02:39 crc kubenswrapper[4972]: I1121 11:02:39.462944 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"214c173a-580d-4c53-b877-63bec03cb169","Type":"ContainerStarted","Data":"1c7866afe5766b52c169c733ed5cb0acc5578bcb2fc78a482da02816b1210dbc"} Nov 21 11:02:39 crc kubenswrapper[4972]: I1121 11:02:39.464181 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:39 crc kubenswrapper[4972]: I1121 11:02:39.485775 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.48575709 podStartE2EDuration="38.48575709s" podCreationTimestamp="2025-11-21 11:02:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:02:39.480618763 +0000 UTC m=+4904.589761271" watchObservedRunningTime="2025-11-21 11:02:39.48575709 +0000 UTC m=+4904.594899588" Nov 21 11:02:52 crc kubenswrapper[4972]: I1121 11:02:52.252058 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 21 11:02:52 crc kubenswrapper[4972]: I1121 11:02:52.736353 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:02:58 crc kubenswrapper[4972]: I1121 11:02:58.633614 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-665ff86d95-s75z4"] Nov 21 11:02:58 crc kubenswrapper[4972]: E1121 11:02:58.635061 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f42e567-383d-402c-bcd7-a343de4cecc3" containerName="dnsmasq-dns" Nov 21 11:02:58 crc kubenswrapper[4972]: I1121 11:02:58.635093 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f42e567-383d-402c-bcd7-a343de4cecc3" containerName="dnsmasq-dns" Nov 21 11:02:58 crc kubenswrapper[4972]: E1121 11:02:58.635139 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f42e567-383d-402c-bcd7-a343de4cecc3" containerName="init" Nov 21 11:02:58 crc kubenswrapper[4972]: I1121 11:02:58.635156 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f42e567-383d-402c-bcd7-a343de4cecc3" containerName="init" Nov 21 11:02:58 crc kubenswrapper[4972]: I1121 11:02:58.635525 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f42e567-383d-402c-bcd7-a343de4cecc3" containerName="dnsmasq-dns" Nov 21 11:02:58 crc kubenswrapper[4972]: I1121 11:02:58.637441 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-665ff86d95-s75z4" Nov 21 11:02:58 crc kubenswrapper[4972]: I1121 11:02:58.645434 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-665ff86d95-s75z4"] Nov 21 11:02:58 crc kubenswrapper[4972]: I1121 11:02:58.746324 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stjh5\" (UniqueName: \"kubernetes.io/projected/db11b16e-c1ba-47eb-90e0-d03f0b2412e3-kube-api-access-stjh5\") pod \"dnsmasq-dns-665ff86d95-s75z4\" (UID: \"db11b16e-c1ba-47eb-90e0-d03f0b2412e3\") " pod="openstack/dnsmasq-dns-665ff86d95-s75z4" Nov 21 11:02:58 crc kubenswrapper[4972]: I1121 11:02:58.746373 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db11b16e-c1ba-47eb-90e0-d03f0b2412e3-config\") pod \"dnsmasq-dns-665ff86d95-s75z4\" (UID: \"db11b16e-c1ba-47eb-90e0-d03f0b2412e3\") " pod="openstack/dnsmasq-dns-665ff86d95-s75z4" Nov 21 11:02:58 crc kubenswrapper[4972]: I1121 11:02:58.746671 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db11b16e-c1ba-47eb-90e0-d03f0b2412e3-dns-svc\") pod \"dnsmasq-dns-665ff86d95-s75z4\" (UID: \"db11b16e-c1ba-47eb-90e0-d03f0b2412e3\") " pod="openstack/dnsmasq-dns-665ff86d95-s75z4" Nov 21 11:02:58 crc kubenswrapper[4972]: I1121 11:02:58.848093 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stjh5\" (UniqueName: \"kubernetes.io/projected/db11b16e-c1ba-47eb-90e0-d03f0b2412e3-kube-api-access-stjh5\") pod \"dnsmasq-dns-665ff86d95-s75z4\" (UID: \"db11b16e-c1ba-47eb-90e0-d03f0b2412e3\") " pod="openstack/dnsmasq-dns-665ff86d95-s75z4" Nov 21 11:02:58 crc kubenswrapper[4972]: I1121 11:02:58.848187 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db11b16e-c1ba-47eb-90e0-d03f0b2412e3-config\") pod \"dnsmasq-dns-665ff86d95-s75z4\" (UID: \"db11b16e-c1ba-47eb-90e0-d03f0b2412e3\") " pod="openstack/dnsmasq-dns-665ff86d95-s75z4" Nov 21 11:02:58 crc kubenswrapper[4972]: I1121 11:02:58.848421 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db11b16e-c1ba-47eb-90e0-d03f0b2412e3-dns-svc\") pod \"dnsmasq-dns-665ff86d95-s75z4\" (UID: \"db11b16e-c1ba-47eb-90e0-d03f0b2412e3\") " pod="openstack/dnsmasq-dns-665ff86d95-s75z4" Nov 21 11:02:58 crc kubenswrapper[4972]: I1121 11:02:58.849485 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db11b16e-c1ba-47eb-90e0-d03f0b2412e3-config\") pod \"dnsmasq-dns-665ff86d95-s75z4\" (UID: \"db11b16e-c1ba-47eb-90e0-d03f0b2412e3\") " pod="openstack/dnsmasq-dns-665ff86d95-s75z4" Nov 21 11:02:58 crc kubenswrapper[4972]: I1121 11:02:58.850098 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db11b16e-c1ba-47eb-90e0-d03f0b2412e3-dns-svc\") pod \"dnsmasq-dns-665ff86d95-s75z4\" (UID: \"db11b16e-c1ba-47eb-90e0-d03f0b2412e3\") " pod="openstack/dnsmasq-dns-665ff86d95-s75z4" Nov 21 11:02:58 crc kubenswrapper[4972]: I1121 11:02:58.884628 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stjh5\" (UniqueName: 
\"kubernetes.io/projected/db11b16e-c1ba-47eb-90e0-d03f0b2412e3-kube-api-access-stjh5\") pod \"dnsmasq-dns-665ff86d95-s75z4\" (UID: \"db11b16e-c1ba-47eb-90e0-d03f0b2412e3\") " pod="openstack/dnsmasq-dns-665ff86d95-s75z4" Nov 21 11:02:58 crc kubenswrapper[4972]: I1121 11:02:58.994138 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-665ff86d95-s75z4" Nov 21 11:02:59 crc kubenswrapper[4972]: I1121 11:02:59.379476 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 11:02:59 crc kubenswrapper[4972]: I1121 11:02:59.441038 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-665ff86d95-s75z4"] Nov 21 11:02:59 crc kubenswrapper[4972]: W1121 11:02:59.445495 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb11b16e_c1ba_47eb_90e0_d03f0b2412e3.slice/crio-55e4d85a7fef33a6289b6f3238850cf580e855e978ef9ee5574787006efb3a5e WatchSource:0}: Error finding container 55e4d85a7fef33a6289b6f3238850cf580e855e978ef9ee5574787006efb3a5e: Status 404 returned error can't find the container with id 55e4d85a7fef33a6289b6f3238850cf580e855e978ef9ee5574787006efb3a5e Nov 21 11:02:59 crc kubenswrapper[4972]: I1121 11:02:59.638164 4972 generic.go:334] "Generic (PLEG): container finished" podID="db11b16e-c1ba-47eb-90e0-d03f0b2412e3" containerID="7192a9351c177a3c9dd2ebfe951a2f9b1cf83bc67efddca74aaee7f05c9ed78f" exitCode=0 Nov 21 11:02:59 crc kubenswrapper[4972]: I1121 11:02:59.638203 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-665ff86d95-s75z4" event={"ID":"db11b16e-c1ba-47eb-90e0-d03f0b2412e3","Type":"ContainerDied","Data":"7192a9351c177a3c9dd2ebfe951a2f9b1cf83bc67efddca74aaee7f05c9ed78f"} Nov 21 11:02:59 crc kubenswrapper[4972]: I1121 11:02:59.638228 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-665ff86d95-s75z4" event={"ID":"db11b16e-c1ba-47eb-90e0-d03f0b2412e3","Type":"ContainerStarted","Data":"55e4d85a7fef33a6289b6f3238850cf580e855e978ef9ee5574787006efb3a5e"} Nov 21 11:03:00 crc kubenswrapper[4972]: I1121 11:03:00.069135 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 11:03:00 crc kubenswrapper[4972]: I1121 11:03:00.647650 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-665ff86d95-s75z4" event={"ID":"db11b16e-c1ba-47eb-90e0-d03f0b2412e3","Type":"ContainerStarted","Data":"ac7cac814a83cc62d867c72ee26c7f0f67c067c6d63d15ce24a1e01d624a5dee"} Nov 21 11:03:00 crc kubenswrapper[4972]: I1121 11:03:00.648419 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-665ff86d95-s75z4" Nov 21 11:03:01 crc kubenswrapper[4972]: I1121 11:03:01.131282 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="b588a7a1-ba0c-42f3-a7e5-a446e8625180" containerName="rabbitmq" containerID="cri-o://8b8dc9ad424bdcd83869725ac7fa68fbbfef9537a427cec407b28c0b1f1700cb" gracePeriod=604799 Nov 21 11:03:01 crc kubenswrapper[4972]: I1121 11:03:01.804889 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="214c173a-580d-4c53-b877-63bec03cb169" containerName="rabbitmq" containerID="cri-o://1c7866afe5766b52c169c733ed5cb0acc5578bcb2fc78a482da02816b1210dbc" gracePeriod=604799 Nov 21 11:03:02 crc kubenswrapper[4972]: I1121 
11:03:02.248749 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b588a7a1-ba0c-42f3-a7e5-a446e8625180" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.236:5672: connect: connection refused" Nov 21 11:03:02 crc kubenswrapper[4972]: I1121 11:03:02.734160 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="214c173a-580d-4c53-b877-63bec03cb169" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.237:5672: connect: connection refused" Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.726698 4972 generic.go:334] "Generic (PLEG): container finished" podID="b588a7a1-ba0c-42f3-a7e5-a446e8625180" containerID="8b8dc9ad424bdcd83869725ac7fa68fbbfef9537a427cec407b28c0b1f1700cb" exitCode=0 Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.726811 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b588a7a1-ba0c-42f3-a7e5-a446e8625180","Type":"ContainerDied","Data":"8b8dc9ad424bdcd83869725ac7fa68fbbfef9537a427cec407b28c0b1f1700cb"} Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.727476 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b588a7a1-ba0c-42f3-a7e5-a446e8625180","Type":"ContainerDied","Data":"fff89eaa314648e4440c99943cd78143f7acf97d72f5508c7224f0e46ff105f5"} Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.727509 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fff89eaa314648e4440c99943cd78143f7acf97d72f5508c7224f0e46ff105f5" Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.770967 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.821445 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-665ff86d95-s75z4" podStartSLOduration=9.82141037 podStartE2EDuration="9.82141037s" podCreationTimestamp="2025-11-21 11:02:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:03:00.665577331 +0000 UTC m=+4925.774719839" watchObservedRunningTime="2025-11-21 11:03:07.82141037 +0000 UTC m=+4932.930552908" Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.928639 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2z47\" (UniqueName: \"kubernetes.io/projected/b588a7a1-ba0c-42f3-a7e5-a446e8625180-kube-api-access-r2z47\") pod \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.928695 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b588a7a1-ba0c-42f3-a7e5-a446e8625180-pod-info\") pod \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.928729 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b588a7a1-ba0c-42f3-a7e5-a446e8625180-rabbitmq-erlang-cookie\") pod \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.928761 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b588a7a1-ba0c-42f3-a7e5-a446e8625180-rabbitmq-plugins\") pod \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.928805 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b588a7a1-ba0c-42f3-a7e5-a446e8625180-erlang-cookie-secret\") pod \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.928938 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6\") pod \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.929024 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b588a7a1-ba0c-42f3-a7e5-a446e8625180-server-conf\") pod \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.929086 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b588a7a1-ba0c-42f3-a7e5-a446e8625180-rabbitmq-confd\") pod \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 
11:03:07.929122 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b588a7a1-ba0c-42f3-a7e5-a446e8625180-plugins-conf\") pod \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\" (UID: \"b588a7a1-ba0c-42f3-a7e5-a446e8625180\") " Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.931279 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b588a7a1-ba0c-42f3-a7e5-a446e8625180-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b588a7a1-ba0c-42f3-a7e5-a446e8625180" (UID: "b588a7a1-ba0c-42f3-a7e5-a446e8625180"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.931474 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b588a7a1-ba0c-42f3-a7e5-a446e8625180-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b588a7a1-ba0c-42f3-a7e5-a446e8625180" (UID: "b588a7a1-ba0c-42f3-a7e5-a446e8625180"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.932128 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b588a7a1-ba0c-42f3-a7e5-a446e8625180-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b588a7a1-ba0c-42f3-a7e5-a446e8625180" (UID: "b588a7a1-ba0c-42f3-a7e5-a446e8625180"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.935436 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b588a7a1-ba0c-42f3-a7e5-a446e8625180-pod-info" (OuterVolumeSpecName: "pod-info") pod "b588a7a1-ba0c-42f3-a7e5-a446e8625180" (UID: "b588a7a1-ba0c-42f3-a7e5-a446e8625180"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.936314 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b588a7a1-ba0c-42f3-a7e5-a446e8625180-kube-api-access-r2z47" (OuterVolumeSpecName: "kube-api-access-r2z47") pod "b588a7a1-ba0c-42f3-a7e5-a446e8625180" (UID: "b588a7a1-ba0c-42f3-a7e5-a446e8625180"). InnerVolumeSpecName "kube-api-access-r2z47". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.941511 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b588a7a1-ba0c-42f3-a7e5-a446e8625180-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b588a7a1-ba0c-42f3-a7e5-a446e8625180" (UID: "b588a7a1-ba0c-42f3-a7e5-a446e8625180"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.949104 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6" (OuterVolumeSpecName: "persistence") pod "b588a7a1-ba0c-42f3-a7e5-a446e8625180" (UID: "b588a7a1-ba0c-42f3-a7e5-a446e8625180"). InnerVolumeSpecName "pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 21 11:03:07 crc kubenswrapper[4972]: I1121 11:03:07.958530 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b588a7a1-ba0c-42f3-a7e5-a446e8625180-server-conf" (OuterVolumeSpecName: "server-conf") pod "b588a7a1-ba0c-42f3-a7e5-a446e8625180" (UID: "b588a7a1-ba0c-42f3-a7e5-a446e8625180"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.030647 4972 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b588a7a1-ba0c-42f3-a7e5-a446e8625180-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.030689 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2z47\" (UniqueName: \"kubernetes.io/projected/b588a7a1-ba0c-42f3-a7e5-a446e8625180-kube-api-access-r2z47\") on node \"crc\" DevicePath \"\"" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.030703 4972 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b588a7a1-ba0c-42f3-a7e5-a446e8625180-pod-info\") on node \"crc\" DevicePath \"\"" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.030715 4972 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b588a7a1-ba0c-42f3-a7e5-a446e8625180-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.030726 4972 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b588a7a1-ba0c-42f3-a7e5-a446e8625180-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.030736 4972 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b588a7a1-ba0c-42f3-a7e5-a446e8625180-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.030776 4972 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6\") on node \"crc\" " Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.030792 4972 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b588a7a1-ba0c-42f3-a7e5-a446e8625180-server-conf\") on node \"crc\" DevicePath \"\"" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.043057 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b588a7a1-ba0c-42f3-a7e5-a446e8625180-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b588a7a1-ba0c-42f3-a7e5-a446e8625180" (UID: "b588a7a1-ba0c-42f3-a7e5-a446e8625180"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.046200 4972 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.046355 4972 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6") on node "crc" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.132558 4972 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b588a7a1-ba0c-42f3-a7e5-a446e8625180-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.132602 4972 reconciler_common.go:293] "Volume detached for volume \"pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6\") on node \"crc\" DevicePath \"\"" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.452502 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.538545 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/214c173a-580d-4c53-b877-63bec03cb169-erlang-cookie-secret\") pod \"214c173a-580d-4c53-b877-63bec03cb169\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.538611 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/214c173a-580d-4c53-b877-63bec03cb169-rabbitmq-plugins\") pod \"214c173a-580d-4c53-b877-63bec03cb169\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.538648 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zh5h\" (UniqueName: \"kubernetes.io/projected/214c173a-580d-4c53-b877-63bec03cb169-kube-api-access-7zh5h\") pod \"214c173a-580d-4c53-b877-63bec03cb169\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.538673 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/214c173a-580d-4c53-b877-63bec03cb169-server-conf\") pod \"214c173a-580d-4c53-b877-63bec03cb169\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.538720 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/214c173a-580d-4c53-b877-63bec03cb169-rabbitmq-confd\") pod \"214c173a-580d-4c53-b877-63bec03cb169\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.538773 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/214c173a-580d-4c53-b877-63bec03cb169-rabbitmq-erlang-cookie\") pod \"214c173a-580d-4c53-b877-63bec03cb169\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.538817 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/214c173a-580d-4c53-b877-63bec03cb169-plugins-conf\") pod \"214c173a-580d-4c53-b877-63bec03cb169\" (UID: 
\"214c173a-580d-4c53-b877-63bec03cb169\") " Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.538991 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5\") pod \"214c173a-580d-4c53-b877-63bec03cb169\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.539067 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/214c173a-580d-4c53-b877-63bec03cb169-pod-info\") pod \"214c173a-580d-4c53-b877-63bec03cb169\" (UID: \"214c173a-580d-4c53-b877-63bec03cb169\") " Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.539241 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/214c173a-580d-4c53-b877-63bec03cb169-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "214c173a-580d-4c53-b877-63bec03cb169" (UID: "214c173a-580d-4c53-b877-63bec03cb169"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.539767 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214c173a-580d-4c53-b877-63bec03cb169-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "214c173a-580d-4c53-b877-63bec03cb169" (UID: "214c173a-580d-4c53-b877-63bec03cb169"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.539805 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/214c173a-580d-4c53-b877-63bec03cb169-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "214c173a-580d-4c53-b877-63bec03cb169" (UID: "214c173a-580d-4c53-b877-63bec03cb169"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.540086 4972 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/214c173a-580d-4c53-b877-63bec03cb169-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.540113 4972 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/214c173a-580d-4c53-b877-63bec03cb169-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.540127 4972 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/214c173a-580d-4c53-b877-63bec03cb169-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.542932 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/214c173a-580d-4c53-b877-63bec03cb169-pod-info" (OuterVolumeSpecName: "pod-info") pod "214c173a-580d-4c53-b877-63bec03cb169" (UID: "214c173a-580d-4c53-b877-63bec03cb169"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.551454 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/214c173a-580d-4c53-b877-63bec03cb169-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "214c173a-580d-4c53-b877-63bec03cb169" (UID: "214c173a-580d-4c53-b877-63bec03cb169"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.552190 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5" (OuterVolumeSpecName: "persistence") pod "214c173a-580d-4c53-b877-63bec03cb169" (UID: "214c173a-580d-4c53-b877-63bec03cb169"). InnerVolumeSpecName "pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.557055 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/214c173a-580d-4c53-b877-63bec03cb169-kube-api-access-7zh5h" (OuterVolumeSpecName: "kube-api-access-7zh5h") pod "214c173a-580d-4c53-b877-63bec03cb169" (UID: "214c173a-580d-4c53-b877-63bec03cb169"). InnerVolumeSpecName "kube-api-access-7zh5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.561562 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214c173a-580d-4c53-b877-63bec03cb169-server-conf" (OuterVolumeSpecName: "server-conf") pod "214c173a-580d-4c53-b877-63bec03cb169" (UID: "214c173a-580d-4c53-b877-63bec03cb169"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.623130 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/214c173a-580d-4c53-b877-63bec03cb169-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "214c173a-580d-4c53-b877-63bec03cb169" (UID: "214c173a-580d-4c53-b877-63bec03cb169"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.641477 4972 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/214c173a-580d-4c53-b877-63bec03cb169-pod-info\") on node \"crc\" DevicePath \"\"" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.641514 4972 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/214c173a-580d-4c53-b877-63bec03cb169-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.641529 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zh5h\" (UniqueName: \"kubernetes.io/projected/214c173a-580d-4c53-b877-63bec03cb169-kube-api-access-7zh5h\") on node \"crc\" DevicePath \"\"" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.641541 4972 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/214c173a-580d-4c53-b877-63bec03cb169-server-conf\") on node \"crc\" DevicePath \"\"" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.641553 4972 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/214c173a-580d-4c53-b877-63bec03cb169-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.641595 4972 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5\") on node \"crc\" " Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.663796 4972 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.664040 4972 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5") on node "crc" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.735887 4972 generic.go:334] "Generic (PLEG): container finished" podID="214c173a-580d-4c53-b877-63bec03cb169" containerID="1c7866afe5766b52c169c733ed5cb0acc5578bcb2fc78a482da02816b1210dbc" exitCode=0 Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.736001 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.736757 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"214c173a-580d-4c53-b877-63bec03cb169","Type":"ContainerDied","Data":"1c7866afe5766b52c169c733ed5cb0acc5578bcb2fc78a482da02816b1210dbc"} Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.737085 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"214c173a-580d-4c53-b877-63bec03cb169","Type":"ContainerDied","Data":"15bc9dc9108f2dfed119d189fc9188a9099dfcb8b759f8793d16134d16394465"} Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.737182 4972 scope.go:117] "RemoveContainer" containerID="1c7866afe5766b52c169c733ed5cb0acc5578bcb2fc78a482da02816b1210dbc" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.736886 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.743432 4972 reconciler_common.go:293] "Volume detached for volume \"pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5\") on node \"crc\" DevicePath \"\"" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.755474 4972 scope.go:117] "RemoveContainer" containerID="ab1601d5ab5b33546a1771b01e27fce2fed0db134291ffea748466fb0566f40c" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.773886 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.779619 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.806185 4972 scope.go:117] "RemoveContainer" containerID="1c7866afe5766b52c169c733ed5cb0acc5578bcb2fc78a482da02816b1210dbc" Nov 21 11:03:08 crc kubenswrapper[4972]: E1121 11:03:08.806860 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c7866afe5766b52c169c733ed5cb0acc5578bcb2fc78a482da02816b1210dbc\": container with ID starting with 1c7866afe5766b52c169c733ed5cb0acc5578bcb2fc78a482da02816b1210dbc not found: ID does not exist" containerID="1c7866afe5766b52c169c733ed5cb0acc5578bcb2fc78a482da02816b1210dbc" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.806933 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c7866afe5766b52c169c733ed5cb0acc5578bcb2fc78a482da02816b1210dbc"} err="failed to get container status \"1c7866afe5766b52c169c733ed5cb0acc5578bcb2fc78a482da02816b1210dbc\": rpc error: code = NotFound desc = could not find container \"1c7866afe5766b52c169c733ed5cb0acc5578bcb2fc78a482da02816b1210dbc\": container with ID starting with 1c7866afe5766b52c169c733ed5cb0acc5578bcb2fc78a482da02816b1210dbc not found: ID does not exist" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.806967 4972 scope.go:117] "RemoveContainer" containerID="ab1601d5ab5b33546a1771b01e27fce2fed0db134291ffea748466fb0566f40c" Nov 21 11:03:08 crc kubenswrapper[4972]: E1121 11:03:08.807285 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab1601d5ab5b33546a1771b01e27fce2fed0db134291ffea748466fb0566f40c\": container with ID starting with 
ab1601d5ab5b33546a1771b01e27fce2fed0db134291ffea748466fb0566f40c not found: ID does not exist" containerID="ab1601d5ab5b33546a1771b01e27fce2fed0db134291ffea748466fb0566f40c" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.807338 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab1601d5ab5b33546a1771b01e27fce2fed0db134291ffea748466fb0566f40c"} err="failed to get container status \"ab1601d5ab5b33546a1771b01e27fce2fed0db134291ffea748466fb0566f40c\": rpc error: code = NotFound desc = could not find container \"ab1601d5ab5b33546a1771b01e27fce2fed0db134291ffea748466fb0566f40c\": container with ID starting with ab1601d5ab5b33546a1771b01e27fce2fed0db134291ffea748466fb0566f40c not found: ID does not exist" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.810892 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.817184 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.830682 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 11:03:08 crc kubenswrapper[4972]: E1121 11:03:08.831204 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214c173a-580d-4c53-b877-63bec03cb169" containerName="rabbitmq" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.831234 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="214c173a-580d-4c53-b877-63bec03cb169" containerName="rabbitmq" Nov 21 11:03:08 crc kubenswrapper[4972]: E1121 11:03:08.831270 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214c173a-580d-4c53-b877-63bec03cb169" containerName="setup-container" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.831281 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="214c173a-580d-4c53-b877-63bec03cb169" containerName="setup-container" Nov 21 11:03:08 crc kubenswrapper[4972]: E1121 11:03:08.831302 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b588a7a1-ba0c-42f3-a7e5-a446e8625180" containerName="setup-container" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.831314 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b588a7a1-ba0c-42f3-a7e5-a446e8625180" containerName="setup-container" Nov 21 11:03:08 crc kubenswrapper[4972]: E1121 11:03:08.831344 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b588a7a1-ba0c-42f3-a7e5-a446e8625180" containerName="rabbitmq" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.831354 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b588a7a1-ba0c-42f3-a7e5-a446e8625180" containerName="rabbitmq" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.831586 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="b588a7a1-ba0c-42f3-a7e5-a446e8625180" containerName="rabbitmq" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.831622 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="214c173a-580d-4c53-b877-63bec03cb169" containerName="rabbitmq" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.832953 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.837086 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.837309 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.840535 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.844085 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.844282 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.848666 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-j65c8" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.849231 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.851148 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.857284 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-2n5tj" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.857508 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.857652 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.857812 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.857974 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.876984 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.947129 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rbbg\" (UniqueName: \"kubernetes.io/projected/2f18efe1-6a41-4cbd-9ed4-889624248484-kube-api-access-9rbbg\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.947220 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.947262 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f18efe1-6a41-4cbd-9ed4-889624248484-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.947289 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.947332 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/375f2bb7-3c9f-46f2-812b-fc5325524d0b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.947350 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/375f2bb7-3c9f-46f2-812b-fc5325524d0b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.947368 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7zsq\" (UniqueName: \"kubernetes.io/projected/375f2bb7-3c9f-46f2-812b-fc5325524d0b-kube-api-access-v7zsq\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.947402 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f18efe1-6a41-4cbd-9ed4-889624248484-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.947421 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f18efe1-6a41-4cbd-9ed4-889624248484-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.947437 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f18efe1-6a41-4cbd-9ed4-889624248484-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.947453 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/375f2bb7-3c9f-46f2-812b-fc5325524d0b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.947488 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/375f2bb7-3c9f-46f2-812b-fc5325524d0b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.947515 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/375f2bb7-3c9f-46f2-812b-fc5325524d0b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.947531 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f18efe1-6a41-4cbd-9ed4-889624248484-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.947573 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f18efe1-6a41-4cbd-9ed4-889624248484-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.947590 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/375f2bb7-3c9f-46f2-812b-fc5325524d0b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.947608 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/375f2bb7-3c9f-46f2-812b-fc5325524d0b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.947665 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f18efe1-6a41-4cbd-9ed4-889624248484-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:08 crc kubenswrapper[4972]: I1121 11:03:08.996033 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-665ff86d95-s75z4" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.049050 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f18efe1-6a41-4cbd-9ed4-889624248484-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.049093 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/375f2bb7-3c9f-46f2-812b-fc5325524d0b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 
11:03:09.049139 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f18efe1-6a41-4cbd-9ed4-889624248484-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.049155 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/375f2bb7-3c9f-46f2-812b-fc5325524d0b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.049174 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/375f2bb7-3c9f-46f2-812b-fc5325524d0b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.049212 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f18efe1-6a41-4cbd-9ed4-889624248484-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.049242 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rbbg\" (UniqueName: \"kubernetes.io/projected/2f18efe1-6a41-4cbd-9ed4-889624248484-kube-api-access-9rbbg\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.049304 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.049321 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f18efe1-6a41-4cbd-9ed4-889624248484-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.049343 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.049362 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/375f2bb7-3c9f-46f2-812b-fc5325524d0b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.049379 4972 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/375f2bb7-3c9f-46f2-812b-fc5325524d0b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.049395 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7zsq\" (UniqueName: \"kubernetes.io/projected/375f2bb7-3c9f-46f2-812b-fc5325524d0b-kube-api-access-v7zsq\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.049412 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f18efe1-6a41-4cbd-9ed4-889624248484-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.049427 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f18efe1-6a41-4cbd-9ed4-889624248484-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.049443 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f18efe1-6a41-4cbd-9ed4-889624248484-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.049458 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/375f2bb7-3c9f-46f2-812b-fc5325524d0b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.049470 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/375f2bb7-3c9f-46f2-812b-fc5325524d0b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.051367 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f18efe1-6a41-4cbd-9ed4-889624248484-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.051735 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/375f2bb7-3c9f-46f2-812b-fc5325524d0b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.053470 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/375f2bb7-3c9f-46f2-812b-fc5325524d0b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.054011 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f18efe1-6a41-4cbd-9ed4-889624248484-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.055093 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f18efe1-6a41-4cbd-9ed4-889624248484-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.055403 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/375f2bb7-3c9f-46f2-812b-fc5325524d0b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.056531 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f18efe1-6a41-4cbd-9ed4-889624248484-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.056628 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/375f2bb7-3c9f-46f2-812b-fc5325524d0b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.056965 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-777df6d877-4kql8"] Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.057254 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/375f2bb7-3c9f-46f2-812b-fc5325524d0b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.057752 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/375f2bb7-3c9f-46f2-812b-fc5325524d0b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.057970 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/375f2bb7-3c9f-46f2-812b-fc5325524d0b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.058559 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-777df6d877-4kql8" podUID="a291332e-75bf-4564-bef6-9548f6fb6326" containerName="dnsmasq-dns" containerID="cri-o://ff386d85b033de110900d31b7f988b23bcaceec8ed8956939f1740a31d28445d" gracePeriod=10 
Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.059529 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f18efe1-6a41-4cbd-9ed4-889624248484-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.059842 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f18efe1-6a41-4cbd-9ed4-889624248484-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.063045 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f18efe1-6a41-4cbd-9ed4-889624248484-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.063374 4972 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.063404 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a02877631be6b2e59b565b40cbb8409747d9930538e396e0a986d5968ecc2ec3/globalmount\"" pod="openstack/rabbitmq-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.065255 4972 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.065310 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/66f7b2d6020d501055f35bd0798f7cbb44853b34e3b28470332a3d09f3e9e5ed/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.077454 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rbbg\" (UniqueName: \"kubernetes.io/projected/2f18efe1-6a41-4cbd-9ed4-889624248484-kube-api-access-9rbbg\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.077541 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7zsq\" (UniqueName: \"kubernetes.io/projected/375f2bb7-3c9f-46f2-812b-fc5325524d0b-kube-api-access-v7zsq\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.098547 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c585b0eb-a16f-4e61-8047-80a9670dc0d6\") pod \"rabbitmq-server-0\" (UID: \"2f18efe1-6a41-4cbd-9ed4-889624248484\") " pod="openstack/rabbitmq-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.102660 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf8be34-770f-4fa0-ae75-d50b7601b7b5\") pod \"rabbitmq-cell1-server-0\" (UID: \"375f2bb7-3c9f-46f2-812b-fc5325524d0b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.178141 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.201092 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.554792 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-777df6d877-4kql8" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.658668 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4zfr\" (UniqueName: \"kubernetes.io/projected/a291332e-75bf-4564-bef6-9548f6fb6326-kube-api-access-b4zfr\") pod \"a291332e-75bf-4564-bef6-9548f6fb6326\" (UID: \"a291332e-75bf-4564-bef6-9548f6fb6326\") " Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.658727 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a291332e-75bf-4564-bef6-9548f6fb6326-dns-svc\") pod \"a291332e-75bf-4564-bef6-9548f6fb6326\" (UID: \"a291332e-75bf-4564-bef6-9548f6fb6326\") " Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.658783 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a291332e-75bf-4564-bef6-9548f6fb6326-config\") pod \"a291332e-75bf-4564-bef6-9548f6fb6326\" (UID: \"a291332e-75bf-4564-bef6-9548f6fb6326\") " Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.662677 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a291332e-75bf-4564-bef6-9548f6fb6326-kube-api-access-b4zfr" (OuterVolumeSpecName: "kube-api-access-b4zfr") pod "a291332e-75bf-4564-bef6-9548f6fb6326" (UID: "a291332e-75bf-4564-bef6-9548f6fb6326"). InnerVolumeSpecName "kube-api-access-b4zfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.666462 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.693269 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a291332e-75bf-4564-bef6-9548f6fb6326-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a291332e-75bf-4564-bef6-9548f6fb6326" (UID: "a291332e-75bf-4564-bef6-9548f6fb6326"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.701221 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a291332e-75bf-4564-bef6-9548f6fb6326-config" (OuterVolumeSpecName: "config") pod "a291332e-75bf-4564-bef6-9548f6fb6326" (UID: "a291332e-75bf-4564-bef6-9548f6fb6326"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.745204 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"375f2bb7-3c9f-46f2-812b-fc5325524d0b","Type":"ContainerStarted","Data":"7c0002b442ba9306bfbf9386d9ef5376c39899ea6cf3a1425f4e8291adf8a80e"} Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.747118 4972 generic.go:334] "Generic (PLEG): container finished" podID="a291332e-75bf-4564-bef6-9548f6fb6326" containerID="ff386d85b033de110900d31b7f988b23bcaceec8ed8956939f1740a31d28445d" exitCode=0 Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.747169 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-777df6d877-4kql8" event={"ID":"a291332e-75bf-4564-bef6-9548f6fb6326","Type":"ContainerDied","Data":"ff386d85b033de110900d31b7f988b23bcaceec8ed8956939f1740a31d28445d"} Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.747186 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-777df6d877-4kql8" event={"ID":"a291332e-75bf-4564-bef6-9548f6fb6326","Type":"ContainerDied","Data":"5fa3d2b47c72efe96a17e10a4017cb59121193c9f8233c7e3ac789788d0c5280"} Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.747197 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-777df6d877-4kql8" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.747204 4972 scope.go:117] "RemoveContainer" containerID="ff386d85b033de110900d31b7f988b23bcaceec8ed8956939f1740a31d28445d" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.761365 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4zfr\" (UniqueName: \"kubernetes.io/projected/a291332e-75bf-4564-bef6-9548f6fb6326-kube-api-access-b4zfr\") on node \"crc\" DevicePath \"\"" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.761395 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a291332e-75bf-4564-bef6-9548f6fb6326-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.761410 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a291332e-75bf-4564-bef6-9548f6fb6326-config\") on node \"crc\" DevicePath \"\"" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.773303 4972 scope.go:117] "RemoveContainer" containerID="1d38b54546ea566b4c16b70e38dac0c6c76157be41acacc197316c44e3d2b68b" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.781597 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="214c173a-580d-4c53-b877-63bec03cb169" path="/var/lib/kubelet/pods/214c173a-580d-4c53-b877-63bec03cb169/volumes" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.782459 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b588a7a1-ba0c-42f3-a7e5-a446e8625180" path="/var/lib/kubelet/pods/b588a7a1-ba0c-42f3-a7e5-a446e8625180/volumes" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.787479 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-777df6d877-4kql8"] Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.791583 4972 scope.go:117] "RemoveContainer" containerID="ff386d85b033de110900d31b7f988b23bcaceec8ed8956939f1740a31d28445d" Nov 21 11:03:09 crc kubenswrapper[4972]: E1121 11:03:09.792048 4972 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"ff386d85b033de110900d31b7f988b23bcaceec8ed8956939f1740a31d28445d\": container with ID starting with ff386d85b033de110900d31b7f988b23bcaceec8ed8956939f1740a31d28445d not found: ID does not exist" containerID="ff386d85b033de110900d31b7f988b23bcaceec8ed8956939f1740a31d28445d" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.792120 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff386d85b033de110900d31b7f988b23bcaceec8ed8956939f1740a31d28445d"} err="failed to get container status \"ff386d85b033de110900d31b7f988b23bcaceec8ed8956939f1740a31d28445d\": rpc error: code = NotFound desc = could not find container \"ff386d85b033de110900d31b7f988b23bcaceec8ed8956939f1740a31d28445d\": container with ID starting with ff386d85b033de110900d31b7f988b23bcaceec8ed8956939f1740a31d28445d not found: ID does not exist" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.792157 4972 scope.go:117] "RemoveContainer" containerID="1d38b54546ea566b4c16b70e38dac0c6c76157be41acacc197316c44e3d2b68b" Nov 21 11:03:09 crc kubenswrapper[4972]: E1121 11:03:09.792465 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d38b54546ea566b4c16b70e38dac0c6c76157be41acacc197316c44e3d2b68b\": container with ID starting with 1d38b54546ea566b4c16b70e38dac0c6c76157be41acacc197316c44e3d2b68b not found: ID does not exist" containerID="1d38b54546ea566b4c16b70e38dac0c6c76157be41acacc197316c44e3d2b68b" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.792501 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d38b54546ea566b4c16b70e38dac0c6c76157be41acacc197316c44e3d2b68b"} err="failed to get container status \"1d38b54546ea566b4c16b70e38dac0c6c76157be41acacc197316c44e3d2b68b\": rpc error: code = NotFound desc = could not find container \"1d38b54546ea566b4c16b70e38dac0c6c76157be41acacc197316c44e3d2b68b\": container with ID starting with 1d38b54546ea566b4c16b70e38dac0c6c76157be41acacc197316c44e3d2b68b not found: ID does not exist" Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.793869 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-777df6d877-4kql8"] Nov 21 11:03:09 crc kubenswrapper[4972]: I1121 11:03:09.807000 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 11:03:09 crc kubenswrapper[4972]: W1121 11:03:09.810016 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f18efe1_6a41_4cbd_9ed4_889624248484.slice/crio-7ca06f638774c9fa592943e7deb81ec8e9be71618eb803b870aea338a7cadc49 WatchSource:0}: Error finding container 7ca06f638774c9fa592943e7deb81ec8e9be71618eb803b870aea338a7cadc49: Status 404 returned error can't find the container with id 7ca06f638774c9fa592943e7deb81ec8e9be71618eb803b870aea338a7cadc49 Nov 21 11:03:10 crc kubenswrapper[4972]: I1121 11:03:10.764294 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f18efe1-6a41-4cbd-9ed4-889624248484","Type":"ContainerStarted","Data":"7ca06f638774c9fa592943e7deb81ec8e9be71618eb803b870aea338a7cadc49"} Nov 21 11:03:11 crc kubenswrapper[4972]: I1121 11:03:11.773194 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a291332e-75bf-4564-bef6-9548f6fb6326" 
path="/var/lib/kubelet/pods/a291332e-75bf-4564-bef6-9548f6fb6326/volumes" Nov 21 11:03:11 crc kubenswrapper[4972]: I1121 11:03:11.774850 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f18efe1-6a41-4cbd-9ed4-889624248484","Type":"ContainerStarted","Data":"36c7274991312ed9c7a89dab0c0769aa02b3037b134068db427643a38b42b6c8"} Nov 21 11:03:11 crc kubenswrapper[4972]: I1121 11:03:11.777163 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"375f2bb7-3c9f-46f2-812b-fc5325524d0b","Type":"ContainerStarted","Data":"f4833809f1e5f088c3eb9bac217b28dced129abce3f5d4e9d756e865413f9a7d"} Nov 21 11:03:44 crc kubenswrapper[4972]: I1121 11:03:44.077478 4972 generic.go:334] "Generic (PLEG): container finished" podID="2f18efe1-6a41-4cbd-9ed4-889624248484" containerID="36c7274991312ed9c7a89dab0c0769aa02b3037b134068db427643a38b42b6c8" exitCode=0 Nov 21 11:03:44 crc kubenswrapper[4972]: I1121 11:03:44.077612 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f18efe1-6a41-4cbd-9ed4-889624248484","Type":"ContainerDied","Data":"36c7274991312ed9c7a89dab0c0769aa02b3037b134068db427643a38b42b6c8"} Nov 21 11:03:44 crc kubenswrapper[4972]: I1121 11:03:44.086392 4972 generic.go:334] "Generic (PLEG): container finished" podID="375f2bb7-3c9f-46f2-812b-fc5325524d0b" containerID="f4833809f1e5f088c3eb9bac217b28dced129abce3f5d4e9d756e865413f9a7d" exitCode=0 Nov 21 11:03:44 crc kubenswrapper[4972]: I1121 11:03:44.086468 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"375f2bb7-3c9f-46f2-812b-fc5325524d0b","Type":"ContainerDied","Data":"f4833809f1e5f088c3eb9bac217b28dced129abce3f5d4e9d756e865413f9a7d"} Nov 21 11:03:45 crc kubenswrapper[4972]: I1121 11:03:45.099062 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f18efe1-6a41-4cbd-9ed4-889624248484","Type":"ContainerStarted","Data":"e54bd5c435059660fac5b8ba2381a3c21d90776eaa4d7c9eca951ba4e89bb56b"} Nov 21 11:03:45 crc kubenswrapper[4972]: I1121 11:03:45.100773 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 21 11:03:45 crc kubenswrapper[4972]: I1121 11:03:45.101942 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"375f2bb7-3c9f-46f2-812b-fc5325524d0b","Type":"ContainerStarted","Data":"1640527c1d40c3e055404056c77781eaecd61c9ad8acaa0d9324cf8425b275ce"} Nov 21 11:03:45 crc kubenswrapper[4972]: I1121 11:03:45.102207 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:45 crc kubenswrapper[4972]: I1121 11:03:45.136671 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.136653985 podStartE2EDuration="37.136653985s" podCreationTimestamp="2025-11-21 11:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:03:45.134642262 +0000 UTC m=+4970.243784780" watchObservedRunningTime="2025-11-21 11:03:45.136653985 +0000 UTC m=+4970.245796483" Nov 21 11:03:45 crc kubenswrapper[4972]: I1121 11:03:45.154894 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.154873639 
podStartE2EDuration="37.154873639s" podCreationTimestamp="2025-11-21 11:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:03:45.152417624 +0000 UTC m=+4970.261560162" watchObservedRunningTime="2025-11-21 11:03:45.154873639 +0000 UTC m=+4970.264016137" Nov 21 11:03:56 crc kubenswrapper[4972]: I1121 11:03:56.178748 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:03:56 crc kubenswrapper[4972]: I1121 11:03:56.179405 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:03:59 crc kubenswrapper[4972]: I1121 11:03:59.182162 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 21 11:03:59 crc kubenswrapper[4972]: I1121 11:03:59.205096 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 21 11:04:07 crc kubenswrapper[4972]: I1121 11:04:07.744921 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1-default"] Nov 21 11:04:07 crc kubenswrapper[4972]: E1121 11:04:07.745882 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a291332e-75bf-4564-bef6-9548f6fb6326" containerName="init" Nov 21 11:04:07 crc kubenswrapper[4972]: I1121 11:04:07.745901 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="a291332e-75bf-4564-bef6-9548f6fb6326" containerName="init" Nov 21 11:04:07 crc kubenswrapper[4972]: E1121 11:04:07.745910 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a291332e-75bf-4564-bef6-9548f6fb6326" containerName="dnsmasq-dns" Nov 21 11:04:07 crc kubenswrapper[4972]: I1121 11:04:07.745918 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="a291332e-75bf-4564-bef6-9548f6fb6326" containerName="dnsmasq-dns" Nov 21 11:04:07 crc kubenswrapper[4972]: I1121 11:04:07.746112 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="a291332e-75bf-4564-bef6-9548f6fb6326" containerName="dnsmasq-dns" Nov 21 11:04:07 crc kubenswrapper[4972]: I1121 11:04:07.746597 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 21 11:04:07 crc kubenswrapper[4972]: I1121 11:04:07.750785 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-65tvb" Nov 21 11:04:07 crc kubenswrapper[4972]: I1121 11:04:07.774522 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 21 11:04:07 crc kubenswrapper[4972]: I1121 11:04:07.861469 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p22sg\" (UniqueName: \"kubernetes.io/projected/2f0fc899-067a-40d5-a7c9-3e50c3559198-kube-api-access-p22sg\") pod \"mariadb-client-1-default\" (UID: \"2f0fc899-067a-40d5-a7c9-3e50c3559198\") " pod="openstack/mariadb-client-1-default" Nov 21 11:04:07 crc kubenswrapper[4972]: I1121 11:04:07.964296 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p22sg\" (UniqueName: \"kubernetes.io/projected/2f0fc899-067a-40d5-a7c9-3e50c3559198-kube-api-access-p22sg\") pod \"mariadb-client-1-default\" (UID: \"2f0fc899-067a-40d5-a7c9-3e50c3559198\") " pod="openstack/mariadb-client-1-default" Nov 21 11:04:07 crc kubenswrapper[4972]: I1121 11:04:07.985818 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p22sg\" (UniqueName: \"kubernetes.io/projected/2f0fc899-067a-40d5-a7c9-3e50c3559198-kube-api-access-p22sg\") pod \"mariadb-client-1-default\" (UID: \"2f0fc899-067a-40d5-a7c9-3e50c3559198\") " pod="openstack/mariadb-client-1-default" Nov 21 11:04:08 crc kubenswrapper[4972]: I1121 11:04:08.076173 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 21 11:04:08 crc kubenswrapper[4972]: I1121 11:04:08.633210 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 11:04:08 crc kubenswrapper[4972]: I1121 11:04:08.633847 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 21 11:04:09 crc kubenswrapper[4972]: I1121 11:04:09.288050 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"2f0fc899-067a-40d5-a7c9-3e50c3559198","Type":"ContainerStarted","Data":"3501823a8472fc4ffc8881aa5cbd74b15365149015e525a56490d1d739187795"} Nov 21 11:04:13 crc kubenswrapper[4972]: I1121 11:04:13.325193 4972 generic.go:334] "Generic (PLEG): container finished" podID="2f0fc899-067a-40d5-a7c9-3e50c3559198" containerID="f4b83b42392410e2ac0a02ef58dc750a91743f9dcad538f54ed819f21b8f1000" exitCode=0 Nov 21 11:04:13 crc kubenswrapper[4972]: I1121 11:04:13.325272 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"2f0fc899-067a-40d5-a7c9-3e50c3559198","Type":"ContainerDied","Data":"f4b83b42392410e2ac0a02ef58dc750a91743f9dcad538f54ed819f21b8f1000"} Nov 21 11:04:14 crc kubenswrapper[4972]: I1121 11:04:14.774001 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 21 11:04:14 crc kubenswrapper[4972]: I1121 11:04:14.800930 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-default_2f0fc899-067a-40d5-a7c9-3e50c3559198/mariadb-client-1-default/0.log" Nov 21 11:04:14 crc kubenswrapper[4972]: I1121 11:04:14.832754 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 21 11:04:14 crc kubenswrapper[4972]: I1121 11:04:14.840467 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 21 11:04:14 crc kubenswrapper[4972]: I1121 11:04:14.880379 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p22sg\" (UniqueName: \"kubernetes.io/projected/2f0fc899-067a-40d5-a7c9-3e50c3559198-kube-api-access-p22sg\") pod \"2f0fc899-067a-40d5-a7c9-3e50c3559198\" (UID: \"2f0fc899-067a-40d5-a7c9-3e50c3559198\") " Nov 21 11:04:14 crc kubenswrapper[4972]: I1121 11:04:14.887409 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f0fc899-067a-40d5-a7c9-3e50c3559198-kube-api-access-p22sg" (OuterVolumeSpecName: "kube-api-access-p22sg") pod "2f0fc899-067a-40d5-a7c9-3e50c3559198" (UID: "2f0fc899-067a-40d5-a7c9-3e50c3559198"). InnerVolumeSpecName "kube-api-access-p22sg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:04:14 crc kubenswrapper[4972]: I1121 11:04:14.982591 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p22sg\" (UniqueName: \"kubernetes.io/projected/2f0fc899-067a-40d5-a7c9-3e50c3559198-kube-api-access-p22sg\") on node \"crc\" DevicePath \"\"" Nov 21 11:04:15 crc kubenswrapper[4972]: I1121 11:04:15.312454 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2-default"] Nov 21 11:04:15 crc kubenswrapper[4972]: E1121 11:04:15.312890 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f0fc899-067a-40d5-a7c9-3e50c3559198" containerName="mariadb-client-1-default" Nov 21 11:04:15 crc kubenswrapper[4972]: I1121 11:04:15.312912 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f0fc899-067a-40d5-a7c9-3e50c3559198" containerName="mariadb-client-1-default" Nov 21 11:04:15 crc kubenswrapper[4972]: I1121 11:04:15.313091 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f0fc899-067a-40d5-a7c9-3e50c3559198" containerName="mariadb-client-1-default" Nov 21 11:04:15 crc kubenswrapper[4972]: I1121 11:04:15.313706 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 21 11:04:15 crc kubenswrapper[4972]: I1121 11:04:15.331106 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 21 11:04:15 crc kubenswrapper[4972]: I1121 11:04:15.381555 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3501823a8472fc4ffc8881aa5cbd74b15365149015e525a56490d1d739187795" Nov 21 11:04:15 crc kubenswrapper[4972]: I1121 11:04:15.381627 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 21 11:04:15 crc kubenswrapper[4972]: I1121 11:04:15.388363 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh97h\" (UniqueName: \"kubernetes.io/projected/e5ffb800-170d-4f37-8fa4-1b0261bbbc9b-kube-api-access-bh97h\") pod \"mariadb-client-2-default\" (UID: \"e5ffb800-170d-4f37-8fa4-1b0261bbbc9b\") " pod="openstack/mariadb-client-2-default" Nov 21 11:04:15 crc kubenswrapper[4972]: I1121 11:04:15.489811 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh97h\" (UniqueName: \"kubernetes.io/projected/e5ffb800-170d-4f37-8fa4-1b0261bbbc9b-kube-api-access-bh97h\") pod \"mariadb-client-2-default\" (UID: \"e5ffb800-170d-4f37-8fa4-1b0261bbbc9b\") " pod="openstack/mariadb-client-2-default" Nov 21 11:04:15 crc kubenswrapper[4972]: I1121 11:04:15.510984 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh97h\" (UniqueName: \"kubernetes.io/projected/e5ffb800-170d-4f37-8fa4-1b0261bbbc9b-kube-api-access-bh97h\") pod \"mariadb-client-2-default\" (UID: \"e5ffb800-170d-4f37-8fa4-1b0261bbbc9b\") " pod="openstack/mariadb-client-2-default" Nov 21 11:04:15 crc kubenswrapper[4972]: I1121 11:04:15.630009 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 21 11:04:15 crc kubenswrapper[4972]: I1121 11:04:15.776623 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f0fc899-067a-40d5-a7c9-3e50c3559198" path="/var/lib/kubelet/pods/2f0fc899-067a-40d5-a7c9-3e50c3559198/volumes" Nov 21 11:04:16 crc kubenswrapper[4972]: I1121 11:04:15.963164 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 21 11:04:16 crc kubenswrapper[4972]: W1121 11:04:15.969033 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5ffb800_170d_4f37_8fa4_1b0261bbbc9b.slice/crio-d2b2b1fc251580059962637a0140939cb3f7d26e4858efba3b8c6b85f0eaa618 WatchSource:0}: Error finding container d2b2b1fc251580059962637a0140939cb3f7d26e4858efba3b8c6b85f0eaa618: Status 404 returned error can't find the container with id d2b2b1fc251580059962637a0140939cb3f7d26e4858efba3b8c6b85f0eaa618 Nov 21 11:04:16 crc kubenswrapper[4972]: I1121 11:04:16.390529 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"e5ffb800-170d-4f37-8fa4-1b0261bbbc9b","Type":"ContainerStarted","Data":"fc9944ed424b2c2fbfe33da5d710ce87d487d51306137c82df35a82b5f33a105"} Nov 21 11:04:16 crc kubenswrapper[4972]: I1121 11:04:16.390572 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"e5ffb800-170d-4f37-8fa4-1b0261bbbc9b","Type":"ContainerStarted","Data":"d2b2b1fc251580059962637a0140939cb3f7d26e4858efba3b8c6b85f0eaa618"} Nov 21 11:04:16 crc kubenswrapper[4972]: I1121 11:04:16.402996 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-2-default" podStartSLOduration=1.402936443 podStartE2EDuration="1.402936443s" podCreationTimestamp="2025-11-21 11:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:04:16.400847897 +0000 UTC m=+5001.509990415" watchObservedRunningTime="2025-11-21 11:04:16.402936443 
+0000 UTC m=+5001.512078951" Nov 21 11:04:17 crc kubenswrapper[4972]: I1121 11:04:17.399526 4972 generic.go:334] "Generic (PLEG): container finished" podID="e5ffb800-170d-4f37-8fa4-1b0261bbbc9b" containerID="fc9944ed424b2c2fbfe33da5d710ce87d487d51306137c82df35a82b5f33a105" exitCode=1 Nov 21 11:04:17 crc kubenswrapper[4972]: I1121 11:04:17.399801 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"e5ffb800-170d-4f37-8fa4-1b0261bbbc9b","Type":"ContainerDied","Data":"fc9944ed424b2c2fbfe33da5d710ce87d487d51306137c82df35a82b5f33a105"} Nov 21 11:04:18 crc kubenswrapper[4972]: I1121 11:04:18.807458 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 21 11:04:18 crc kubenswrapper[4972]: I1121 11:04:18.842980 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 21 11:04:18 crc kubenswrapper[4972]: I1121 11:04:18.851527 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 21 11:04:18 crc kubenswrapper[4972]: I1121 11:04:18.887419 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh97h\" (UniqueName: \"kubernetes.io/projected/e5ffb800-170d-4f37-8fa4-1b0261bbbc9b-kube-api-access-bh97h\") pod \"e5ffb800-170d-4f37-8fa4-1b0261bbbc9b\" (UID: \"e5ffb800-170d-4f37-8fa4-1b0261bbbc9b\") " Nov 21 11:04:18 crc kubenswrapper[4972]: I1121 11:04:18.894315 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5ffb800-170d-4f37-8fa4-1b0261bbbc9b-kube-api-access-bh97h" (OuterVolumeSpecName: "kube-api-access-bh97h") pod "e5ffb800-170d-4f37-8fa4-1b0261bbbc9b" (UID: "e5ffb800-170d-4f37-8fa4-1b0261bbbc9b"). InnerVolumeSpecName "kube-api-access-bh97h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:04:18 crc kubenswrapper[4972]: I1121 11:04:18.991907 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh97h\" (UniqueName: \"kubernetes.io/projected/e5ffb800-170d-4f37-8fa4-1b0261bbbc9b-kube-api-access-bh97h\") on node \"crc\" DevicePath \"\"" Nov 21 11:04:19 crc kubenswrapper[4972]: I1121 11:04:19.263176 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1"] Nov 21 11:04:19 crc kubenswrapper[4972]: E1121 11:04:19.264337 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ffb800-170d-4f37-8fa4-1b0261bbbc9b" containerName="mariadb-client-2-default" Nov 21 11:04:19 crc kubenswrapper[4972]: I1121 11:04:19.264396 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ffb800-170d-4f37-8fa4-1b0261bbbc9b" containerName="mariadb-client-2-default" Nov 21 11:04:19 crc kubenswrapper[4972]: I1121 11:04:19.264803 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5ffb800-170d-4f37-8fa4-1b0261bbbc9b" containerName="mariadb-client-2-default" Nov 21 11:04:19 crc kubenswrapper[4972]: I1121 11:04:19.265810 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Nov 21 11:04:19 crc kubenswrapper[4972]: I1121 11:04:19.277528 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Nov 21 11:04:19 crc kubenswrapper[4972]: I1121 11:04:19.401328 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhg8k\" (UniqueName: \"kubernetes.io/projected/4d94034e-79a2-4c3f-91a7-976ad175ef1c-kube-api-access-lhg8k\") pod \"mariadb-client-1\" (UID: \"4d94034e-79a2-4c3f-91a7-976ad175ef1c\") " pod="openstack/mariadb-client-1" Nov 21 11:04:19 crc kubenswrapper[4972]: I1121 11:04:19.430597 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2b2b1fc251580059962637a0140939cb3f7d26e4858efba3b8c6b85f0eaa618" Nov 21 11:04:19 crc kubenswrapper[4972]: I1121 11:04:19.430723 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 21 11:04:19 crc kubenswrapper[4972]: I1121 11:04:19.503175 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhg8k\" (UniqueName: \"kubernetes.io/projected/4d94034e-79a2-4c3f-91a7-976ad175ef1c-kube-api-access-lhg8k\") pod \"mariadb-client-1\" (UID: \"4d94034e-79a2-4c3f-91a7-976ad175ef1c\") " pod="openstack/mariadb-client-1" Nov 21 11:04:19 crc kubenswrapper[4972]: I1121 11:04:19.527314 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhg8k\" (UniqueName: \"kubernetes.io/projected/4d94034e-79a2-4c3f-91a7-976ad175ef1c-kube-api-access-lhg8k\") pod \"mariadb-client-1\" (UID: \"4d94034e-79a2-4c3f-91a7-976ad175ef1c\") " pod="openstack/mariadb-client-1" Nov 21 11:04:19 crc kubenswrapper[4972]: I1121 11:04:19.601408 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Nov 21 11:04:19 crc kubenswrapper[4972]: I1121 11:04:19.774823 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5ffb800-170d-4f37-8fa4-1b0261bbbc9b" path="/var/lib/kubelet/pods/e5ffb800-170d-4f37-8fa4-1b0261bbbc9b/volumes" Nov 21 11:04:20 crc kubenswrapper[4972]: I1121 11:04:20.192000 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Nov 21 11:04:20 crc kubenswrapper[4972]: W1121 11:04:20.196036 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d94034e_79a2_4c3f_91a7_976ad175ef1c.slice/crio-a9b076e7aa189b1b03b6987c5b3615c966acb267a18fe34cd451e9086414a344 WatchSource:0}: Error finding container a9b076e7aa189b1b03b6987c5b3615c966acb267a18fe34cd451e9086414a344: Status 404 returned error can't find the container with id a9b076e7aa189b1b03b6987c5b3615c966acb267a18fe34cd451e9086414a344 Nov 21 11:04:20 crc kubenswrapper[4972]: I1121 11:04:20.441145 4972 generic.go:334] "Generic (PLEG): container finished" podID="4d94034e-79a2-4c3f-91a7-976ad175ef1c" containerID="38c98a3c28fc550a5a24a58b6c64818b67e901d3933fb2cf52788d3665e7983f" exitCode=0 Nov 21 11:04:20 crc kubenswrapper[4972]: I1121 11:04:20.441220 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"4d94034e-79a2-4c3f-91a7-976ad175ef1c","Type":"ContainerDied","Data":"38c98a3c28fc550a5a24a58b6c64818b67e901d3933fb2cf52788d3665e7983f"} Nov 21 11:04:20 crc kubenswrapper[4972]: I1121 11:04:20.441256 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"4d94034e-79a2-4c3f-91a7-976ad175ef1c","Type":"ContainerStarted","Data":"a9b076e7aa189b1b03b6987c5b3615c966acb267a18fe34cd451e9086414a344"} Nov 21 11:04:21 crc kubenswrapper[4972]: I1121 11:04:21.920108 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Nov 21 11:04:21 crc kubenswrapper[4972]: I1121 11:04:21.938107 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1_4d94034e-79a2-4c3f-91a7-976ad175ef1c/mariadb-client-1/0.log" Nov 21 11:04:21 crc kubenswrapper[4972]: I1121 11:04:21.968165 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1"] Nov 21 11:04:21 crc kubenswrapper[4972]: I1121 11:04:21.977558 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1"] Nov 21 11:04:22 crc kubenswrapper[4972]: I1121 11:04:22.058908 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhg8k\" (UniqueName: \"kubernetes.io/projected/4d94034e-79a2-4c3f-91a7-976ad175ef1c-kube-api-access-lhg8k\") pod \"4d94034e-79a2-4c3f-91a7-976ad175ef1c\" (UID: \"4d94034e-79a2-4c3f-91a7-976ad175ef1c\") " Nov 21 11:04:22 crc kubenswrapper[4972]: I1121 11:04:22.067641 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d94034e-79a2-4c3f-91a7-976ad175ef1c-kube-api-access-lhg8k" (OuterVolumeSpecName: "kube-api-access-lhg8k") pod "4d94034e-79a2-4c3f-91a7-976ad175ef1c" (UID: "4d94034e-79a2-4c3f-91a7-976ad175ef1c"). InnerVolumeSpecName "kube-api-access-lhg8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:04:22 crc kubenswrapper[4972]: I1121 11:04:22.160712 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhg8k\" (UniqueName: \"kubernetes.io/projected/4d94034e-79a2-4c3f-91a7-976ad175ef1c-kube-api-access-lhg8k\") on node \"crc\" DevicePath \"\"" Nov 21 11:04:22 crc kubenswrapper[4972]: I1121 11:04:22.407382 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-default"] Nov 21 11:04:22 crc kubenswrapper[4972]: E1121 11:04:22.408910 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d94034e-79a2-4c3f-91a7-976ad175ef1c" containerName="mariadb-client-1" Nov 21 11:04:22 crc kubenswrapper[4972]: I1121 11:04:22.408945 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d94034e-79a2-4c3f-91a7-976ad175ef1c" containerName="mariadb-client-1" Nov 21 11:04:22 crc kubenswrapper[4972]: I1121 11:04:22.409627 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d94034e-79a2-4c3f-91a7-976ad175ef1c" containerName="mariadb-client-1" Nov 21 11:04:22 crc kubenswrapper[4972]: I1121 11:04:22.411010 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 21 11:04:22 crc kubenswrapper[4972]: I1121 11:04:22.437168 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 21 11:04:22 crc kubenswrapper[4972]: I1121 11:04:22.460177 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9b076e7aa189b1b03b6987c5b3615c966acb267a18fe34cd451e9086414a344" Nov 21 11:04:22 crc kubenswrapper[4972]: I1121 11:04:22.460265 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Nov 21 11:04:22 crc kubenswrapper[4972]: I1121 11:04:22.566061 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbvpk\" (UniqueName: \"kubernetes.io/projected/604cb205-4186-4eea-a83f-3961336e50d4-kube-api-access-dbvpk\") pod \"mariadb-client-4-default\" (UID: \"604cb205-4186-4eea-a83f-3961336e50d4\") " pod="openstack/mariadb-client-4-default" Nov 21 11:04:22 crc kubenswrapper[4972]: I1121 11:04:22.668031 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbvpk\" (UniqueName: \"kubernetes.io/projected/604cb205-4186-4eea-a83f-3961336e50d4-kube-api-access-dbvpk\") pod \"mariadb-client-4-default\" (UID: \"604cb205-4186-4eea-a83f-3961336e50d4\") " pod="openstack/mariadb-client-4-default" Nov 21 11:04:22 crc kubenswrapper[4972]: I1121 11:04:22.701455 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbvpk\" (UniqueName: \"kubernetes.io/projected/604cb205-4186-4eea-a83f-3961336e50d4-kube-api-access-dbvpk\") pod \"mariadb-client-4-default\" (UID: \"604cb205-4186-4eea-a83f-3961336e50d4\") " pod="openstack/mariadb-client-4-default" Nov 21 11:04:22 crc kubenswrapper[4972]: I1121 11:04:22.757926 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 21 11:04:23 crc kubenswrapper[4972]: I1121 11:04:23.422882 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 21 11:04:23 crc kubenswrapper[4972]: I1121 11:04:23.468994 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"604cb205-4186-4eea-a83f-3961336e50d4","Type":"ContainerStarted","Data":"4466b85f3cfd06bcfbe384c55d5610d84ba5efdaf5c5d7a3548aba4a7aa4134f"} Nov 21 11:04:23 crc kubenswrapper[4972]: I1121 11:04:23.768805 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d94034e-79a2-4c3f-91a7-976ad175ef1c" path="/var/lib/kubelet/pods/4d94034e-79a2-4c3f-91a7-976ad175ef1c/volumes" Nov 21 11:04:24 crc kubenswrapper[4972]: I1121 11:04:24.479594 4972 generic.go:334] "Generic (PLEG): container finished" podID="604cb205-4186-4eea-a83f-3961336e50d4" containerID="effeae86efcef0ae4e1df3cef6bbca9cca7451ef9d763437136ab366664dc128" exitCode=0 Nov 21 11:04:24 crc kubenswrapper[4972]: I1121 11:04:24.479652 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"604cb205-4186-4eea-a83f-3961336e50d4","Type":"ContainerDied","Data":"effeae86efcef0ae4e1df3cef6bbca9cca7451ef9d763437136ab366664dc128"} Nov 21 11:04:25 crc kubenswrapper[4972]: I1121 11:04:25.880869 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 21 11:04:25 crc kubenswrapper[4972]: I1121 11:04:25.900445 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-default_604cb205-4186-4eea-a83f-3961336e50d4/mariadb-client-4-default/0.log" Nov 21 11:04:25 crc kubenswrapper[4972]: I1121 11:04:25.924790 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 21 11:04:25 crc kubenswrapper[4972]: I1121 11:04:25.930205 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 21 11:04:26 crc kubenswrapper[4972]: I1121 11:04:26.027420 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbvpk\" (UniqueName: \"kubernetes.io/projected/604cb205-4186-4eea-a83f-3961336e50d4-kube-api-access-dbvpk\") pod \"604cb205-4186-4eea-a83f-3961336e50d4\" (UID: \"604cb205-4186-4eea-a83f-3961336e50d4\") " Nov 21 11:04:26 crc kubenswrapper[4972]: I1121 11:04:26.079233 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/604cb205-4186-4eea-a83f-3961336e50d4-kube-api-access-dbvpk" (OuterVolumeSpecName: "kube-api-access-dbvpk") pod "604cb205-4186-4eea-a83f-3961336e50d4" (UID: "604cb205-4186-4eea-a83f-3961336e50d4"). InnerVolumeSpecName "kube-api-access-dbvpk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:04:26 crc kubenswrapper[4972]: I1121 11:04:26.129542 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbvpk\" (UniqueName: \"kubernetes.io/projected/604cb205-4186-4eea-a83f-3961336e50d4-kube-api-access-dbvpk\") on node \"crc\" DevicePath \"\"" Nov 21 11:04:26 crc kubenswrapper[4972]: I1121 11:04:26.178892 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:04:26 crc kubenswrapper[4972]: I1121 11:04:26.178973 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:04:26 crc kubenswrapper[4972]: I1121 11:04:26.505462 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4466b85f3cfd06bcfbe384c55d5610d84ba5efdaf5c5d7a3548aba4a7aa4134f" Nov 21 11:04:26 crc kubenswrapper[4972]: I1121 11:04:26.505573 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 21 11:04:27 crc kubenswrapper[4972]: I1121 11:04:27.776881 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="604cb205-4186-4eea-a83f-3961336e50d4" path="/var/lib/kubelet/pods/604cb205-4186-4eea-a83f-3961336e50d4/volumes" Nov 21 11:04:30 crc kubenswrapper[4972]: I1121 11:04:30.157917 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-default"] Nov 21 11:04:30 crc kubenswrapper[4972]: E1121 11:04:30.158755 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604cb205-4186-4eea-a83f-3961336e50d4" containerName="mariadb-client-4-default" Nov 21 11:04:30 crc kubenswrapper[4972]: I1121 11:04:30.158776 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="604cb205-4186-4eea-a83f-3961336e50d4" containerName="mariadb-client-4-default" Nov 21 11:04:30 crc kubenswrapper[4972]: I1121 11:04:30.159168 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="604cb205-4186-4eea-a83f-3961336e50d4" containerName="mariadb-client-4-default" Nov 21 11:04:30 crc kubenswrapper[4972]: I1121 11:04:30.159960 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 21 11:04:30 crc kubenswrapper[4972]: I1121 11:04:30.171510 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-65tvb" Nov 21 11:04:30 crc kubenswrapper[4972]: I1121 11:04:30.172948 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 21 11:04:30 crc kubenswrapper[4972]: I1121 11:04:30.299252 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdpp6\" (UniqueName: \"kubernetes.io/projected/9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917-kube-api-access-gdpp6\") pod \"mariadb-client-5-default\" (UID: \"9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917\") " pod="openstack/mariadb-client-5-default" Nov 21 11:04:30 crc kubenswrapper[4972]: I1121 11:04:30.400288 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdpp6\" (UniqueName: \"kubernetes.io/projected/9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917-kube-api-access-gdpp6\") pod \"mariadb-client-5-default\" (UID: \"9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917\") " pod="openstack/mariadb-client-5-default" Nov 21 11:04:30 crc kubenswrapper[4972]: I1121 11:04:30.419889 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdpp6\" (UniqueName: \"kubernetes.io/projected/9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917-kube-api-access-gdpp6\") pod \"mariadb-client-5-default\" (UID: \"9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917\") " pod="openstack/mariadb-client-5-default" Nov 21 11:04:30 crc kubenswrapper[4972]: I1121 11:04:30.496246 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 21 11:04:30 crc kubenswrapper[4972]: I1121 11:04:30.866486 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 21 11:04:31 crc kubenswrapper[4972]: I1121 11:04:31.573958 4972 generic.go:334] "Generic (PLEG): container finished" podID="9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917" containerID="7f34cab9a2afef988a944172a47899318a93ee59f9196f5e8ffc3044e747762f" exitCode=0 Nov 21 11:04:31 crc kubenswrapper[4972]: I1121 11:04:31.574115 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917","Type":"ContainerDied","Data":"7f34cab9a2afef988a944172a47899318a93ee59f9196f5e8ffc3044e747762f"} Nov 21 11:04:31 crc kubenswrapper[4972]: I1121 11:04:31.574427 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917","Type":"ContainerStarted","Data":"1581dae754e9a463f23f5976c772d7636fc37adf7eb7f5967a099d28b4d921dd"} Nov 21 11:04:32 crc kubenswrapper[4972]: I1121 11:04:32.979796 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 21 11:04:32 crc kubenswrapper[4972]: I1121 11:04:32.998057 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-default_9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917/mariadb-client-5-default/0.log" Nov 21 11:04:33 crc kubenswrapper[4972]: I1121 11:04:33.024544 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 21 11:04:33 crc kubenswrapper[4972]: I1121 11:04:33.029318 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 21 11:04:33 crc kubenswrapper[4972]: I1121 11:04:33.145935 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdpp6\" (UniqueName: \"kubernetes.io/projected/9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917-kube-api-access-gdpp6\") pod \"9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917\" (UID: \"9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917\") " Nov 21 11:04:33 crc kubenswrapper[4972]: I1121 11:04:33.153162 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917-kube-api-access-gdpp6" (OuterVolumeSpecName: "kube-api-access-gdpp6") pod "9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917" (UID: "9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917"). InnerVolumeSpecName "kube-api-access-gdpp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:04:33 crc kubenswrapper[4972]: I1121 11:04:33.196112 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-default"] Nov 21 11:04:33 crc kubenswrapper[4972]: E1121 11:04:33.196440 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917" containerName="mariadb-client-5-default" Nov 21 11:04:33 crc kubenswrapper[4972]: I1121 11:04:33.196457 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917" containerName="mariadb-client-5-default" Nov 21 11:04:33 crc kubenswrapper[4972]: I1121 11:04:33.196609 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917" containerName="mariadb-client-5-default" Nov 21 11:04:33 crc kubenswrapper[4972]: I1121 11:04:33.197126 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 21 11:04:33 crc kubenswrapper[4972]: I1121 11:04:33.204541 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 21 11:04:33 crc kubenswrapper[4972]: I1121 11:04:33.247569 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdpp6\" (UniqueName: \"kubernetes.io/projected/9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917-kube-api-access-gdpp6\") on node \"crc\" DevicePath \"\"" Nov 21 11:04:33 crc kubenswrapper[4972]: I1121 11:04:33.348604 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnwvr\" (UniqueName: \"kubernetes.io/projected/23c9bdd7-e376-4fa4-9c6c-36efab5646bd-kube-api-access-dnwvr\") pod \"mariadb-client-6-default\" (UID: \"23c9bdd7-e376-4fa4-9c6c-36efab5646bd\") " pod="openstack/mariadb-client-6-default" Nov 21 11:04:33 crc kubenswrapper[4972]: I1121 11:04:33.450036 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnwvr\" (UniqueName: \"kubernetes.io/projected/23c9bdd7-e376-4fa4-9c6c-36efab5646bd-kube-api-access-dnwvr\") pod \"mariadb-client-6-default\" (UID: \"23c9bdd7-e376-4fa4-9c6c-36efab5646bd\") " pod="openstack/mariadb-client-6-default" Nov 21 11:04:33 crc kubenswrapper[4972]: I1121 11:04:33.478786 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnwvr\" (UniqueName: \"kubernetes.io/projected/23c9bdd7-e376-4fa4-9c6c-36efab5646bd-kube-api-access-dnwvr\") pod \"mariadb-client-6-default\" (UID: \"23c9bdd7-e376-4fa4-9c6c-36efab5646bd\") " pod="openstack/mariadb-client-6-default" Nov 21 11:04:33 crc kubenswrapper[4972]: I1121 11:04:33.518807 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 21 11:04:33 crc kubenswrapper[4972]: I1121 11:04:33.595194 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1581dae754e9a463f23f5976c772d7636fc37adf7eb7f5967a099d28b4d921dd" Nov 21 11:04:33 crc kubenswrapper[4972]: I1121 11:04:33.595299 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 21 11:04:33 crc kubenswrapper[4972]: I1121 11:04:33.774747 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917" path="/var/lib/kubelet/pods/9a4867ee-cca1-44a1-a6a9-1eb9cc7a8917/volumes" Nov 21 11:04:34 crc kubenswrapper[4972]: I1121 11:04:34.082091 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 21 11:04:34 crc kubenswrapper[4972]: I1121 11:04:34.620479 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"23c9bdd7-e376-4fa4-9c6c-36efab5646bd","Type":"ContainerStarted","Data":"5f995e592040a5cfe6bee650f679ce02b698749e5c06feb964e2f83d010b7b51"} Nov 21 11:04:34 crc kubenswrapper[4972]: I1121 11:04:34.621066 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"23c9bdd7-e376-4fa4-9c6c-36efab5646bd","Type":"ContainerStarted","Data":"ef4a36f18afaf77218eba07eb9e030f142e9c371c7d7fce7d7e5b1e7b24b083d"} Nov 21 11:04:34 crc kubenswrapper[4972]: I1121 11:04:34.644721 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-6-default" podStartSLOduration=1.644499375 podStartE2EDuration="1.644499375s" podCreationTimestamp="2025-11-21 11:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:04:34.640044776 +0000 UTC m=+5019.749187314" watchObservedRunningTime="2025-11-21 11:04:34.644499375 +0000 UTC m=+5019.753641903" Nov 21 11:04:35 crc kubenswrapper[4972]: I1121 11:04:35.632965 4972 generic.go:334] "Generic (PLEG): container finished" podID="23c9bdd7-e376-4fa4-9c6c-36efab5646bd" containerID="5f995e592040a5cfe6bee650f679ce02b698749e5c06feb964e2f83d010b7b51" exitCode=1 Nov 21 11:04:35 crc kubenswrapper[4972]: I1121 11:04:35.633082 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"23c9bdd7-e376-4fa4-9c6c-36efab5646bd","Type":"ContainerDied","Data":"5f995e592040a5cfe6bee650f679ce02b698749e5c06feb964e2f83d010b7b51"} Nov 21 11:04:37 crc kubenswrapper[4972]: I1121 11:04:37.096944 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 21 11:04:37 crc kubenswrapper[4972]: I1121 11:04:37.146596 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 21 11:04:37 crc kubenswrapper[4972]: I1121 11:04:37.155684 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 21 11:04:37 crc kubenswrapper[4972]: I1121 11:04:37.226904 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnwvr\" (UniqueName: \"kubernetes.io/projected/23c9bdd7-e376-4fa4-9c6c-36efab5646bd-kube-api-access-dnwvr\") pod \"23c9bdd7-e376-4fa4-9c6c-36efab5646bd\" (UID: \"23c9bdd7-e376-4fa4-9c6c-36efab5646bd\") " Nov 21 11:04:37 crc kubenswrapper[4972]: I1121 11:04:37.235956 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23c9bdd7-e376-4fa4-9c6c-36efab5646bd-kube-api-access-dnwvr" (OuterVolumeSpecName: "kube-api-access-dnwvr") pod "23c9bdd7-e376-4fa4-9c6c-36efab5646bd" (UID: "23c9bdd7-e376-4fa4-9c6c-36efab5646bd"). InnerVolumeSpecName "kube-api-access-dnwvr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:04:37 crc kubenswrapper[4972]: I1121 11:04:37.295123 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-default"] Nov 21 11:04:37 crc kubenswrapper[4972]: E1121 11:04:37.295562 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c9bdd7-e376-4fa4-9c6c-36efab5646bd" containerName="mariadb-client-6-default" Nov 21 11:04:37 crc kubenswrapper[4972]: I1121 11:04:37.295591 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="23c9bdd7-e376-4fa4-9c6c-36efab5646bd" containerName="mariadb-client-6-default" Nov 21 11:04:37 crc kubenswrapper[4972]: I1121 11:04:37.296096 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="23c9bdd7-e376-4fa4-9c6c-36efab5646bd" containerName="mariadb-client-6-default" Nov 21 11:04:37 crc kubenswrapper[4972]: I1121 11:04:37.297200 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 21 11:04:37 crc kubenswrapper[4972]: I1121 11:04:37.311420 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 21 11:04:37 crc kubenswrapper[4972]: I1121 11:04:37.329350 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnwvr\" (UniqueName: \"kubernetes.io/projected/23c9bdd7-e376-4fa4-9c6c-36efab5646bd-kube-api-access-dnwvr\") on node \"crc\" DevicePath \"\"" Nov 21 11:04:37 crc kubenswrapper[4972]: I1121 11:04:37.430702 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4w4l\" (UniqueName: \"kubernetes.io/projected/92ef0d20-6660-4e7a-aed9-8f876939e7ec-kube-api-access-j4w4l\") pod \"mariadb-client-7-default\" (UID: \"92ef0d20-6660-4e7a-aed9-8f876939e7ec\") " pod="openstack/mariadb-client-7-default" Nov 21 11:04:37 crc kubenswrapper[4972]: I1121 11:04:37.531732 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4w4l\" (UniqueName: \"kubernetes.io/projected/92ef0d20-6660-4e7a-aed9-8f876939e7ec-kube-api-access-j4w4l\") pod \"mariadb-client-7-default\" (UID: \"92ef0d20-6660-4e7a-aed9-8f876939e7ec\") " pod="openstack/mariadb-client-7-default" Nov 21 11:04:37 crc kubenswrapper[4972]: I1121 11:04:37.556614 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4w4l\" (UniqueName: \"kubernetes.io/projected/92ef0d20-6660-4e7a-aed9-8f876939e7ec-kube-api-access-j4w4l\") pod \"mariadb-client-7-default\" (UID: \"92ef0d20-6660-4e7a-aed9-8f876939e7ec\") " pod="openstack/mariadb-client-7-default" Nov 21 11:04:37 crc kubenswrapper[4972]: I1121 11:04:37.632193 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 21 11:04:37 crc kubenswrapper[4972]: I1121 11:04:37.666771 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef4a36f18afaf77218eba07eb9e030f142e9c371c7d7fce7d7e5b1e7b24b083d" Nov 21 11:04:37 crc kubenswrapper[4972]: I1121 11:04:37.666844 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 21 11:04:37 crc kubenswrapper[4972]: I1121 11:04:37.772084 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23c9bdd7-e376-4fa4-9c6c-36efab5646bd" path="/var/lib/kubelet/pods/23c9bdd7-e376-4fa4-9c6c-36efab5646bd/volumes" Nov 21 11:04:38 crc kubenswrapper[4972]: I1121 11:04:38.156800 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 21 11:04:38 crc kubenswrapper[4972]: W1121 11:04:38.166030 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92ef0d20_6660_4e7a_aed9_8f876939e7ec.slice/crio-eb666fbd1195061e9ae345c6955a8bc13ea066443f33a3393fc1478ddc874ea1 WatchSource:0}: Error finding container eb666fbd1195061e9ae345c6955a8bc13ea066443f33a3393fc1478ddc874ea1: Status 404 returned error can't find the container with id eb666fbd1195061e9ae345c6955a8bc13ea066443f33a3393fc1478ddc874ea1 Nov 21 11:04:38 crc kubenswrapper[4972]: I1121 11:04:38.683327 4972 generic.go:334] "Generic (PLEG): container finished" podID="92ef0d20-6660-4e7a-aed9-8f876939e7ec" containerID="957756c855e8a87fa6b6fe32a4042ce95378fdec89b516cce16c801c3cbf039a" exitCode=0 Nov 21 11:04:38 crc kubenswrapper[4972]: I1121 11:04:38.683381 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"92ef0d20-6660-4e7a-aed9-8f876939e7ec","Type":"ContainerDied","Data":"957756c855e8a87fa6b6fe32a4042ce95378fdec89b516cce16c801c3cbf039a"} Nov 21 11:04:38 crc kubenswrapper[4972]: I1121 11:04:38.683791 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"92ef0d20-6660-4e7a-aed9-8f876939e7ec","Type":"ContainerStarted","Data":"eb666fbd1195061e9ae345c6955a8bc13ea066443f33a3393fc1478ddc874ea1"} Nov 21 11:04:40 crc kubenswrapper[4972]: I1121 11:04:40.239173 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 21 11:04:40 crc kubenswrapper[4972]: I1121 11:04:40.263421 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-default_92ef0d20-6660-4e7a-aed9-8f876939e7ec/mariadb-client-7-default/0.log" Nov 21 11:04:40 crc kubenswrapper[4972]: I1121 11:04:40.276708 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4w4l\" (UniqueName: \"kubernetes.io/projected/92ef0d20-6660-4e7a-aed9-8f876939e7ec-kube-api-access-j4w4l\") pod \"92ef0d20-6660-4e7a-aed9-8f876939e7ec\" (UID: \"92ef0d20-6660-4e7a-aed9-8f876939e7ec\") " Nov 21 11:04:40 crc kubenswrapper[4972]: I1121 11:04:40.293449 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92ef0d20-6660-4e7a-aed9-8f876939e7ec-kube-api-access-j4w4l" (OuterVolumeSpecName: "kube-api-access-j4w4l") pod "92ef0d20-6660-4e7a-aed9-8f876939e7ec" (UID: "92ef0d20-6660-4e7a-aed9-8f876939e7ec"). InnerVolumeSpecName "kube-api-access-j4w4l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:04:40 crc kubenswrapper[4972]: I1121 11:04:40.295341 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 21 11:04:40 crc kubenswrapper[4972]: I1121 11:04:40.306975 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 21 11:04:40 crc kubenswrapper[4972]: I1121 11:04:40.378634 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4w4l\" (UniqueName: \"kubernetes.io/projected/92ef0d20-6660-4e7a-aed9-8f876939e7ec-kube-api-access-j4w4l\") on node \"crc\" DevicePath \"\"" Nov 21 11:04:40 crc kubenswrapper[4972]: I1121 11:04:40.457673 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"] Nov 21 11:04:40 crc kubenswrapper[4972]: E1121 11:04:40.458221 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92ef0d20-6660-4e7a-aed9-8f876939e7ec" containerName="mariadb-client-7-default" Nov 21 11:04:40 crc kubenswrapper[4972]: I1121 11:04:40.458253 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="92ef0d20-6660-4e7a-aed9-8f876939e7ec" containerName="mariadb-client-7-default" Nov 21 11:04:40 crc kubenswrapper[4972]: I1121 11:04:40.458765 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="92ef0d20-6660-4e7a-aed9-8f876939e7ec" containerName="mariadb-client-7-default" Nov 21 11:04:40 crc kubenswrapper[4972]: I1121 11:04:40.461417 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Nov 21 11:04:40 crc kubenswrapper[4972]: I1121 11:04:40.465176 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Nov 21 11:04:40 crc kubenswrapper[4972]: I1121 11:04:40.479341 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcpxf\" (UniqueName: \"kubernetes.io/projected/72abd9df-d91d-4fe0-8c6d-586607576f3d-kube-api-access-wcpxf\") pod \"mariadb-client-2\" (UID: \"72abd9df-d91d-4fe0-8c6d-586607576f3d\") " pod="openstack/mariadb-client-2" Nov 21 11:04:40 crc kubenswrapper[4972]: I1121 11:04:40.582009 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcpxf\" (UniqueName: \"kubernetes.io/projected/72abd9df-d91d-4fe0-8c6d-586607576f3d-kube-api-access-wcpxf\") pod \"mariadb-client-2\" (UID: \"72abd9df-d91d-4fe0-8c6d-586607576f3d\") " pod="openstack/mariadb-client-2" Nov 21 11:04:40 crc kubenswrapper[4972]: I1121 11:04:40.615884 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcpxf\" (UniqueName: \"kubernetes.io/projected/72abd9df-d91d-4fe0-8c6d-586607576f3d-kube-api-access-wcpxf\") pod \"mariadb-client-2\" (UID: \"72abd9df-d91d-4fe0-8c6d-586607576f3d\") " pod="openstack/mariadb-client-2" Nov 21 11:04:40 crc kubenswrapper[4972]: I1121 11:04:40.704623 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb666fbd1195061e9ae345c6955a8bc13ea066443f33a3393fc1478ddc874ea1" Nov 21 11:04:40 crc kubenswrapper[4972]: I1121 11:04:40.704713 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 21 11:04:40 crc kubenswrapper[4972]: I1121 11:04:40.792491 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2" Nov 21 11:04:41 crc kubenswrapper[4972]: I1121 11:04:41.135465 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Nov 21 11:04:41 crc kubenswrapper[4972]: W1121 11:04:41.137078 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72abd9df_d91d_4fe0_8c6d_586607576f3d.slice/crio-a753b7ca6a9345701a0cfde482352073bb8994f903071aa3a2a1ca85784ad07d WatchSource:0}: Error finding container a753b7ca6a9345701a0cfde482352073bb8994f903071aa3a2a1ca85784ad07d: Status 404 returned error can't find the container with id a753b7ca6a9345701a0cfde482352073bb8994f903071aa3a2a1ca85784ad07d Nov 21 11:04:41 crc kubenswrapper[4972]: I1121 11:04:41.717016 4972 generic.go:334] "Generic (PLEG): container finished" podID="72abd9df-d91d-4fe0-8c6d-586607576f3d" containerID="1101eb487b54b716e141b32ba24e14e83fbd631b144261a239f896a232f017c1" exitCode=0 Nov 21 11:04:41 crc kubenswrapper[4972]: I1121 11:04:41.717130 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"72abd9df-d91d-4fe0-8c6d-586607576f3d","Type":"ContainerDied","Data":"1101eb487b54b716e141b32ba24e14e83fbd631b144261a239f896a232f017c1"} Nov 21 11:04:41 crc kubenswrapper[4972]: I1121 11:04:41.717417 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"72abd9df-d91d-4fe0-8c6d-586607576f3d","Type":"ContainerStarted","Data":"a753b7ca6a9345701a0cfde482352073bb8994f903071aa3a2a1ca85784ad07d"} Nov 21 11:04:41 crc kubenswrapper[4972]: I1121 11:04:41.775915 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92ef0d20-6660-4e7a-aed9-8f876939e7ec" path="/var/lib/kubelet/pods/92ef0d20-6660-4e7a-aed9-8f876939e7ec/volumes" Nov 21 11:04:43 crc kubenswrapper[4972]: I1121 11:04:43.255537 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Nov 21 11:04:43 crc kubenswrapper[4972]: I1121 11:04:43.278573 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_72abd9df-d91d-4fe0-8c6d-586607576f3d/mariadb-client-2/0.log" Nov 21 11:04:43 crc kubenswrapper[4972]: I1121 11:04:43.309742 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"] Nov 21 11:04:43 crc kubenswrapper[4972]: I1121 11:04:43.318753 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"] Nov 21 11:04:43 crc kubenswrapper[4972]: I1121 11:04:43.432726 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcpxf\" (UniqueName: \"kubernetes.io/projected/72abd9df-d91d-4fe0-8c6d-586607576f3d-kube-api-access-wcpxf\") pod \"72abd9df-d91d-4fe0-8c6d-586607576f3d\" (UID: \"72abd9df-d91d-4fe0-8c6d-586607576f3d\") " Nov 21 11:04:43 crc kubenswrapper[4972]: I1121 11:04:43.440334 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72abd9df-d91d-4fe0-8c6d-586607576f3d-kube-api-access-wcpxf" (OuterVolumeSpecName: "kube-api-access-wcpxf") pod "72abd9df-d91d-4fe0-8c6d-586607576f3d" (UID: "72abd9df-d91d-4fe0-8c6d-586607576f3d"). InnerVolumeSpecName "kube-api-access-wcpxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:04:43 crc kubenswrapper[4972]: I1121 11:04:43.536226 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcpxf\" (UniqueName: \"kubernetes.io/projected/72abd9df-d91d-4fe0-8c6d-586607576f3d-kube-api-access-wcpxf\") on node \"crc\" DevicePath \"\"" Nov 21 11:04:43 crc kubenswrapper[4972]: I1121 11:04:43.741640 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a753b7ca6a9345701a0cfde482352073bb8994f903071aa3a2a1ca85784ad07d" Nov 21 11:04:43 crc kubenswrapper[4972]: I1121 11:04:43.741723 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Nov 21 11:04:43 crc kubenswrapper[4972]: I1121 11:04:43.777517 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72abd9df-d91d-4fe0-8c6d-586607576f3d" path="/var/lib/kubelet/pods/72abd9df-d91d-4fe0-8c6d-586607576f3d/volumes" Nov 21 11:04:56 crc kubenswrapper[4972]: I1121 11:04:56.179065 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:04:56 crc kubenswrapper[4972]: I1121 11:04:56.179821 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:04:56 crc kubenswrapper[4972]: I1121 11:04:56.179901 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 11:04:56 crc kubenswrapper[4972]: I1121 11:04:56.184524 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 11:04:56 crc kubenswrapper[4972]: I1121 11:04:56.184925 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" gracePeriod=600 Nov 21 11:04:56 crc kubenswrapper[4972]: E1121 11:04:56.321273 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:04:56 crc kubenswrapper[4972]: I1121 11:04:56.877922 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" exitCode=0 Nov 21 11:04:56 crc kubenswrapper[4972]: 
I1121 11:04:56.878066 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916"} Nov 21 11:04:56 crc kubenswrapper[4972]: I1121 11:04:56.878282 4972 scope.go:117] "RemoveContainer" containerID="d09766e11e3fabe4af926f8addbec82b361431494255cdf37952ea1f017d3953" Nov 21 11:04:56 crc kubenswrapper[4972]: I1121 11:04:56.878939 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:04:56 crc kubenswrapper[4972]: E1121 11:04:56.879177 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:05:09 crc kubenswrapper[4972]: I1121 11:05:09.760378 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:05:09 crc kubenswrapper[4972]: E1121 11:05:09.761445 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:05:22 crc kubenswrapper[4972]: I1121 11:05:22.760056 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:05:22 crc kubenswrapper[4972]: E1121 11:05:22.761141 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:05:25 crc kubenswrapper[4972]: I1121 11:05:25.944789 4972 scope.go:117] "RemoveContainer" containerID="3661174fdd6297aad99b56d3700d92aed8a09d4687f520614d9bb9a5566ef1a8" Nov 21 11:05:37 crc kubenswrapper[4972]: I1121 11:05:37.760162 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:05:37 crc kubenswrapper[4972]: E1121 11:05:37.761013 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:05:48 crc kubenswrapper[4972]: I1121 11:05:48.759739 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:05:48 crc 
kubenswrapper[4972]: E1121 11:05:48.760760 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:06:03 crc kubenswrapper[4972]: I1121 11:06:03.760006 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:06:03 crc kubenswrapper[4972]: E1121 11:06:03.760942 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:06:18 crc kubenswrapper[4972]: I1121 11:06:18.759424 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:06:18 crc kubenswrapper[4972]: E1121 11:06:18.760333 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:06:26 crc kubenswrapper[4972]: I1121 11:06:26.158667 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5jndd"] Nov 21 11:06:26 crc kubenswrapper[4972]: E1121 11:06:26.160380 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72abd9df-d91d-4fe0-8c6d-586607576f3d" containerName="mariadb-client-2" Nov 21 11:06:26 crc kubenswrapper[4972]: I1121 11:06:26.160414 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="72abd9df-d91d-4fe0-8c6d-586607576f3d" containerName="mariadb-client-2" Nov 21 11:06:26 crc kubenswrapper[4972]: I1121 11:06:26.160864 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="72abd9df-d91d-4fe0-8c6d-586607576f3d" containerName="mariadb-client-2" Nov 21 11:06:26 crc kubenswrapper[4972]: I1121 11:06:26.163117 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5jndd" Nov 21 11:06:26 crc kubenswrapper[4972]: I1121 11:06:26.188411 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5jndd"] Nov 21 11:06:26 crc kubenswrapper[4972]: I1121 11:06:26.280189 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7141e518-859b-4204-b438-d69482a2e562-utilities\") pod \"redhat-operators-5jndd\" (UID: \"7141e518-859b-4204-b438-d69482a2e562\") " pod="openshift-marketplace/redhat-operators-5jndd" Nov 21 11:06:26 crc kubenswrapper[4972]: I1121 11:06:26.280302 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7141e518-859b-4204-b438-d69482a2e562-catalog-content\") pod \"redhat-operators-5jndd\" (UID: \"7141e518-859b-4204-b438-d69482a2e562\") " pod="openshift-marketplace/redhat-operators-5jndd" Nov 21 11:06:26 crc kubenswrapper[4972]: I1121 11:06:26.280365 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc77k\" (UniqueName: \"kubernetes.io/projected/7141e518-859b-4204-b438-d69482a2e562-kube-api-access-kc77k\") pod \"redhat-operators-5jndd\" (UID: \"7141e518-859b-4204-b438-d69482a2e562\") " pod="openshift-marketplace/redhat-operators-5jndd" Nov 21 11:06:26 crc kubenswrapper[4972]: I1121 11:06:26.382368 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc77k\" (UniqueName: \"kubernetes.io/projected/7141e518-859b-4204-b438-d69482a2e562-kube-api-access-kc77k\") pod \"redhat-operators-5jndd\" (UID: \"7141e518-859b-4204-b438-d69482a2e562\") " pod="openshift-marketplace/redhat-operators-5jndd" Nov 21 11:06:26 crc kubenswrapper[4972]: I1121 11:06:26.382474 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7141e518-859b-4204-b438-d69482a2e562-utilities\") pod \"redhat-operators-5jndd\" (UID: \"7141e518-859b-4204-b438-d69482a2e562\") " pod="openshift-marketplace/redhat-operators-5jndd" Nov 21 11:06:26 crc kubenswrapper[4972]: I1121 11:06:26.382512 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7141e518-859b-4204-b438-d69482a2e562-catalog-content\") pod \"redhat-operators-5jndd\" (UID: \"7141e518-859b-4204-b438-d69482a2e562\") " pod="openshift-marketplace/redhat-operators-5jndd" Nov 21 11:06:26 crc kubenswrapper[4972]: I1121 11:06:26.382982 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7141e518-859b-4204-b438-d69482a2e562-catalog-content\") pod \"redhat-operators-5jndd\" (UID: \"7141e518-859b-4204-b438-d69482a2e562\") " pod="openshift-marketplace/redhat-operators-5jndd" Nov 21 11:06:26 crc kubenswrapper[4972]: I1121 11:06:26.383158 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7141e518-859b-4204-b438-d69482a2e562-utilities\") pod \"redhat-operators-5jndd\" (UID: \"7141e518-859b-4204-b438-d69482a2e562\") " pod="openshift-marketplace/redhat-operators-5jndd" Nov 21 11:06:26 crc kubenswrapper[4972]: I1121 11:06:26.407394 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kc77k\" (UniqueName: \"kubernetes.io/projected/7141e518-859b-4204-b438-d69482a2e562-kube-api-access-kc77k\") pod \"redhat-operators-5jndd\" (UID: \"7141e518-859b-4204-b438-d69482a2e562\") " pod="openshift-marketplace/redhat-operators-5jndd" Nov 21 11:06:26 crc kubenswrapper[4972]: I1121 11:06:26.506777 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5jndd" Nov 21 11:06:26 crc kubenswrapper[4972]: I1121 11:06:26.959168 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5jndd"] Nov 21 11:06:27 crc kubenswrapper[4972]: I1121 11:06:27.843891 4972 generic.go:334] "Generic (PLEG): container finished" podID="7141e518-859b-4204-b438-d69482a2e562" containerID="fe00d566d26fe900a4ed7ab10295b08b2e6e45ef1c44970f0b8d84b58654c4ec" exitCode=0 Nov 21 11:06:27 crc kubenswrapper[4972]: I1121 11:06:27.843940 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jndd" event={"ID":"7141e518-859b-4204-b438-d69482a2e562","Type":"ContainerDied","Data":"fe00d566d26fe900a4ed7ab10295b08b2e6e45ef1c44970f0b8d84b58654c4ec"} Nov 21 11:06:27 crc kubenswrapper[4972]: I1121 11:06:27.843966 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jndd" event={"ID":"7141e518-859b-4204-b438-d69482a2e562","Type":"ContainerStarted","Data":"63589c06990d1cf90a94727ea8fb5932c58ed1fc8da33a0af38716a61094c243"} Nov 21 11:06:28 crc kubenswrapper[4972]: I1121 11:06:28.854082 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jndd" event={"ID":"7141e518-859b-4204-b438-d69482a2e562","Type":"ContainerStarted","Data":"6fa4b7c3f42cb990d2b5a79524653f08d03e4864e743c5a385fc5ba8dff9c74e"} Nov 21 11:06:29 crc kubenswrapper[4972]: I1121 11:06:29.864388 4972 generic.go:334] "Generic (PLEG): container finished" podID="7141e518-859b-4204-b438-d69482a2e562" containerID="6fa4b7c3f42cb990d2b5a79524653f08d03e4864e743c5a385fc5ba8dff9c74e" exitCode=0 Nov 21 11:06:29 crc kubenswrapper[4972]: I1121 11:06:29.864489 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jndd" event={"ID":"7141e518-859b-4204-b438-d69482a2e562","Type":"ContainerDied","Data":"6fa4b7c3f42cb990d2b5a79524653f08d03e4864e743c5a385fc5ba8dff9c74e"} Nov 21 11:06:30 crc kubenswrapper[4972]: I1121 11:06:30.874618 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jndd" event={"ID":"7141e518-859b-4204-b438-d69482a2e562","Type":"ContainerStarted","Data":"a2cde4400e036c958dd959d7a7d6d140403fdc489e9177affdfc457bbfcfd710"} Nov 21 11:06:30 crc kubenswrapper[4972]: I1121 11:06:30.903843 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5jndd" podStartSLOduration=2.46953609 podStartE2EDuration="4.90380266s" podCreationTimestamp="2025-11-21 11:06:26 +0000 UTC" firstStartedPulling="2025-11-21 11:06:27.84648551 +0000 UTC m=+5132.955628048" lastFinishedPulling="2025-11-21 11:06:30.28075208 +0000 UTC m=+5135.389894618" observedRunningTime="2025-11-21 11:06:30.894323428 +0000 UTC m=+5136.003465936" watchObservedRunningTime="2025-11-21 11:06:30.90380266 +0000 UTC m=+5136.012945168" Nov 21 11:06:32 crc kubenswrapper[4972]: I1121 11:06:32.759769 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:06:32 
crc kubenswrapper[4972]: E1121 11:06:32.761025 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:06:36 crc kubenswrapper[4972]: I1121 11:06:36.507282 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5jndd" Nov 21 11:06:36 crc kubenswrapper[4972]: I1121 11:06:36.507571 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5jndd" Nov 21 11:06:37 crc kubenswrapper[4972]: I1121 11:06:37.571838 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5jndd" podUID="7141e518-859b-4204-b438-d69482a2e562" containerName="registry-server" probeResult="failure" output=< Nov 21 11:06:37 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 11:06:37 crc kubenswrapper[4972]: > Nov 21 11:06:43 crc kubenswrapper[4972]: I1121 11:06:43.759764 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:06:43 crc kubenswrapper[4972]: E1121 11:06:43.760699 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:06:46 crc kubenswrapper[4972]: I1121 11:06:46.580374 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5jndd" Nov 21 11:06:46 crc kubenswrapper[4972]: I1121 11:06:46.645024 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5jndd" Nov 21 11:06:46 crc kubenswrapper[4972]: I1121 11:06:46.826072 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5jndd"] Nov 21 11:06:48 crc kubenswrapper[4972]: I1121 11:06:48.023167 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5jndd" podUID="7141e518-859b-4204-b438-d69482a2e562" containerName="registry-server" containerID="cri-o://a2cde4400e036c958dd959d7a7d6d140403fdc489e9177affdfc457bbfcfd710" gracePeriod=2 Nov 21 11:06:48 crc kubenswrapper[4972]: I1121 11:06:48.587716 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5jndd" Nov 21 11:06:48 crc kubenswrapper[4972]: I1121 11:06:48.662132 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7141e518-859b-4204-b438-d69482a2e562-utilities" (OuterVolumeSpecName: "utilities") pod "7141e518-859b-4204-b438-d69482a2e562" (UID: "7141e518-859b-4204-b438-d69482a2e562"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:06:48 crc kubenswrapper[4972]: I1121 11:06:48.660718 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7141e518-859b-4204-b438-d69482a2e562-utilities\") pod \"7141e518-859b-4204-b438-d69482a2e562\" (UID: \"7141e518-859b-4204-b438-d69482a2e562\") " Nov 21 11:06:48 crc kubenswrapper[4972]: I1121 11:06:48.662247 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc77k\" (UniqueName: \"kubernetes.io/projected/7141e518-859b-4204-b438-d69482a2e562-kube-api-access-kc77k\") pod \"7141e518-859b-4204-b438-d69482a2e562\" (UID: \"7141e518-859b-4204-b438-d69482a2e562\") " Nov 21 11:06:48 crc kubenswrapper[4972]: I1121 11:06:48.662323 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7141e518-859b-4204-b438-d69482a2e562-catalog-content\") pod \"7141e518-859b-4204-b438-d69482a2e562\" (UID: \"7141e518-859b-4204-b438-d69482a2e562\") " Nov 21 11:06:48 crc kubenswrapper[4972]: I1121 11:06:48.662727 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7141e518-859b-4204-b438-d69482a2e562-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 11:06:48 crc kubenswrapper[4972]: I1121 11:06:48.670384 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7141e518-859b-4204-b438-d69482a2e562-kube-api-access-kc77k" (OuterVolumeSpecName: "kube-api-access-kc77k") pod "7141e518-859b-4204-b438-d69482a2e562" (UID: "7141e518-859b-4204-b438-d69482a2e562"). InnerVolumeSpecName "kube-api-access-kc77k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:06:48 crc kubenswrapper[4972]: I1121 11:06:48.762806 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7141e518-859b-4204-b438-d69482a2e562-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7141e518-859b-4204-b438-d69482a2e562" (UID: "7141e518-859b-4204-b438-d69482a2e562"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:06:48 crc kubenswrapper[4972]: I1121 11:06:48.763797 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc77k\" (UniqueName: \"kubernetes.io/projected/7141e518-859b-4204-b438-d69482a2e562-kube-api-access-kc77k\") on node \"crc\" DevicePath \"\"" Nov 21 11:06:48 crc kubenswrapper[4972]: I1121 11:06:48.763820 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7141e518-859b-4204-b438-d69482a2e562-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 11:06:49 crc kubenswrapper[4972]: I1121 11:06:49.037307 4972 generic.go:334] "Generic (PLEG): container finished" podID="7141e518-859b-4204-b438-d69482a2e562" containerID="a2cde4400e036c958dd959d7a7d6d140403fdc489e9177affdfc457bbfcfd710" exitCode=0 Nov 21 11:06:49 crc kubenswrapper[4972]: I1121 11:06:49.037386 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jndd" event={"ID":"7141e518-859b-4204-b438-d69482a2e562","Type":"ContainerDied","Data":"a2cde4400e036c958dd959d7a7d6d140403fdc489e9177affdfc457bbfcfd710"} Nov 21 11:06:49 crc kubenswrapper[4972]: I1121 11:06:49.037467 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jndd" event={"ID":"7141e518-859b-4204-b438-d69482a2e562","Type":"ContainerDied","Data":"63589c06990d1cf90a94727ea8fb5932c58ed1fc8da33a0af38716a61094c243"} Nov 21 11:06:49 crc kubenswrapper[4972]: I1121 11:06:49.037506 4972 scope.go:117] "RemoveContainer" containerID="a2cde4400e036c958dd959d7a7d6d140403fdc489e9177affdfc457bbfcfd710" Nov 21 11:06:49 crc kubenswrapper[4972]: I1121 11:06:49.037513 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5jndd" Nov 21 11:06:49 crc kubenswrapper[4972]: I1121 11:06:49.069980 4972 scope.go:117] "RemoveContainer" containerID="6fa4b7c3f42cb990d2b5a79524653f08d03e4864e743c5a385fc5ba8dff9c74e" Nov 21 11:06:49 crc kubenswrapper[4972]: I1121 11:06:49.085117 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5jndd"] Nov 21 11:06:49 crc kubenswrapper[4972]: I1121 11:06:49.090369 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5jndd"] Nov 21 11:06:49 crc kubenswrapper[4972]: I1121 11:06:49.112066 4972 scope.go:117] "RemoveContainer" containerID="fe00d566d26fe900a4ed7ab10295b08b2e6e45ef1c44970f0b8d84b58654c4ec" Nov 21 11:06:49 crc kubenswrapper[4972]: I1121 11:06:49.150628 4972 scope.go:117] "RemoveContainer" containerID="a2cde4400e036c958dd959d7a7d6d140403fdc489e9177affdfc457bbfcfd710" Nov 21 11:06:49 crc kubenswrapper[4972]: E1121 11:06:49.151792 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2cde4400e036c958dd959d7a7d6d140403fdc489e9177affdfc457bbfcfd710\": container with ID starting with a2cde4400e036c958dd959d7a7d6d140403fdc489e9177affdfc457bbfcfd710 not found: ID does not exist" containerID="a2cde4400e036c958dd959d7a7d6d140403fdc489e9177affdfc457bbfcfd710" Nov 21 11:06:49 crc kubenswrapper[4972]: I1121 11:06:49.151878 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2cde4400e036c958dd959d7a7d6d140403fdc489e9177affdfc457bbfcfd710"} err="failed to get container status \"a2cde4400e036c958dd959d7a7d6d140403fdc489e9177affdfc457bbfcfd710\": rpc error: code = NotFound desc = could not find container \"a2cde4400e036c958dd959d7a7d6d140403fdc489e9177affdfc457bbfcfd710\": container with ID starting with a2cde4400e036c958dd959d7a7d6d140403fdc489e9177affdfc457bbfcfd710 not found: ID does not exist" Nov 21 11:06:49 crc kubenswrapper[4972]: I1121 11:06:49.151917 4972 scope.go:117] "RemoveContainer" containerID="6fa4b7c3f42cb990d2b5a79524653f08d03e4864e743c5a385fc5ba8dff9c74e" Nov 21 11:06:49 crc kubenswrapper[4972]: E1121 11:06:49.152284 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fa4b7c3f42cb990d2b5a79524653f08d03e4864e743c5a385fc5ba8dff9c74e\": container with ID starting with 6fa4b7c3f42cb990d2b5a79524653f08d03e4864e743c5a385fc5ba8dff9c74e not found: ID does not exist" containerID="6fa4b7c3f42cb990d2b5a79524653f08d03e4864e743c5a385fc5ba8dff9c74e" Nov 21 11:06:49 crc kubenswrapper[4972]: I1121 11:06:49.152342 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fa4b7c3f42cb990d2b5a79524653f08d03e4864e743c5a385fc5ba8dff9c74e"} err="failed to get container status \"6fa4b7c3f42cb990d2b5a79524653f08d03e4864e743c5a385fc5ba8dff9c74e\": rpc error: code = NotFound desc = could not find container \"6fa4b7c3f42cb990d2b5a79524653f08d03e4864e743c5a385fc5ba8dff9c74e\": container with ID starting with 6fa4b7c3f42cb990d2b5a79524653f08d03e4864e743c5a385fc5ba8dff9c74e not found: ID does not exist" Nov 21 11:06:49 crc kubenswrapper[4972]: I1121 11:06:49.152385 4972 scope.go:117] "RemoveContainer" containerID="fe00d566d26fe900a4ed7ab10295b08b2e6e45ef1c44970f0b8d84b58654c4ec" Nov 21 11:06:49 crc kubenswrapper[4972]: E1121 11:06:49.152767 4972 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"fe00d566d26fe900a4ed7ab10295b08b2e6e45ef1c44970f0b8d84b58654c4ec\": container with ID starting with fe00d566d26fe900a4ed7ab10295b08b2e6e45ef1c44970f0b8d84b58654c4ec not found: ID does not exist" containerID="fe00d566d26fe900a4ed7ab10295b08b2e6e45ef1c44970f0b8d84b58654c4ec" Nov 21 11:06:49 crc kubenswrapper[4972]: I1121 11:06:49.152865 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe00d566d26fe900a4ed7ab10295b08b2e6e45ef1c44970f0b8d84b58654c4ec"} err="failed to get container status \"fe00d566d26fe900a4ed7ab10295b08b2e6e45ef1c44970f0b8d84b58654c4ec\": rpc error: code = NotFound desc = could not find container \"fe00d566d26fe900a4ed7ab10295b08b2e6e45ef1c44970f0b8d84b58654c4ec\": container with ID starting with fe00d566d26fe900a4ed7ab10295b08b2e6e45ef1c44970f0b8d84b58654c4ec not found: ID does not exist" Nov 21 11:06:49 crc kubenswrapper[4972]: I1121 11:06:49.778615 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7141e518-859b-4204-b438-d69482a2e562" path="/var/lib/kubelet/pods/7141e518-859b-4204-b438-d69482a2e562/volumes" Nov 21 11:06:52 crc kubenswrapper[4972]: I1121 11:06:52.625058 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gskzt"] Nov 21 11:06:52 crc kubenswrapper[4972]: E1121 11:06:52.628210 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7141e518-859b-4204-b438-d69482a2e562" containerName="extract-utilities" Nov 21 11:06:52 crc kubenswrapper[4972]: I1121 11:06:52.628243 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="7141e518-859b-4204-b438-d69482a2e562" containerName="extract-utilities" Nov 21 11:06:52 crc kubenswrapper[4972]: E1121 11:06:52.628275 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7141e518-859b-4204-b438-d69482a2e562" containerName="registry-server" Nov 21 11:06:52 crc kubenswrapper[4972]: I1121 11:06:52.628284 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="7141e518-859b-4204-b438-d69482a2e562" containerName="registry-server" Nov 21 11:06:52 crc kubenswrapper[4972]: E1121 11:06:52.628303 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7141e518-859b-4204-b438-d69482a2e562" containerName="extract-content" Nov 21 11:06:52 crc kubenswrapper[4972]: I1121 11:06:52.628311 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="7141e518-859b-4204-b438-d69482a2e562" containerName="extract-content" Nov 21 11:06:52 crc kubenswrapper[4972]: I1121 11:06:52.628515 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="7141e518-859b-4204-b438-d69482a2e562" containerName="registry-server" Nov 21 11:06:52 crc kubenswrapper[4972]: I1121 11:06:52.631318 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gskzt" Nov 21 11:06:52 crc kubenswrapper[4972]: I1121 11:06:52.632306 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gskzt"] Nov 21 11:06:52 crc kubenswrapper[4972]: I1121 11:06:52.748346 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0472ee0e-775b-4811-b6c4-b8fec0706108-catalog-content\") pod \"redhat-marketplace-gskzt\" (UID: \"0472ee0e-775b-4811-b6c4-b8fec0706108\") " pod="openshift-marketplace/redhat-marketplace-gskzt" Nov 21 11:06:52 crc kubenswrapper[4972]: I1121 11:06:52.748629 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0472ee0e-775b-4811-b6c4-b8fec0706108-utilities\") pod \"redhat-marketplace-gskzt\" (UID: \"0472ee0e-775b-4811-b6c4-b8fec0706108\") " pod="openshift-marketplace/redhat-marketplace-gskzt" Nov 21 11:06:52 crc kubenswrapper[4972]: I1121 11:06:52.748895 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vfxp\" (UniqueName: \"kubernetes.io/projected/0472ee0e-775b-4811-b6c4-b8fec0706108-kube-api-access-2vfxp\") pod \"redhat-marketplace-gskzt\" (UID: \"0472ee0e-775b-4811-b6c4-b8fec0706108\") " pod="openshift-marketplace/redhat-marketplace-gskzt" Nov 21 11:06:52 crc kubenswrapper[4972]: I1121 11:06:52.850339 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vfxp\" (UniqueName: \"kubernetes.io/projected/0472ee0e-775b-4811-b6c4-b8fec0706108-kube-api-access-2vfxp\") pod \"redhat-marketplace-gskzt\" (UID: \"0472ee0e-775b-4811-b6c4-b8fec0706108\") " pod="openshift-marketplace/redhat-marketplace-gskzt" Nov 21 11:06:52 crc kubenswrapper[4972]: I1121 11:06:52.850472 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0472ee0e-775b-4811-b6c4-b8fec0706108-catalog-content\") pod \"redhat-marketplace-gskzt\" (UID: \"0472ee0e-775b-4811-b6c4-b8fec0706108\") " pod="openshift-marketplace/redhat-marketplace-gskzt" Nov 21 11:06:52 crc kubenswrapper[4972]: I1121 11:06:52.850573 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0472ee0e-775b-4811-b6c4-b8fec0706108-utilities\") pod \"redhat-marketplace-gskzt\" (UID: \"0472ee0e-775b-4811-b6c4-b8fec0706108\") " pod="openshift-marketplace/redhat-marketplace-gskzt" Nov 21 11:06:52 crc kubenswrapper[4972]: I1121 11:06:52.851555 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0472ee0e-775b-4811-b6c4-b8fec0706108-catalog-content\") pod \"redhat-marketplace-gskzt\" (UID: \"0472ee0e-775b-4811-b6c4-b8fec0706108\") " pod="openshift-marketplace/redhat-marketplace-gskzt" Nov 21 11:06:52 crc kubenswrapper[4972]: I1121 11:06:52.851790 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0472ee0e-775b-4811-b6c4-b8fec0706108-utilities\") pod \"redhat-marketplace-gskzt\" (UID: \"0472ee0e-775b-4811-b6c4-b8fec0706108\") " pod="openshift-marketplace/redhat-marketplace-gskzt" Nov 21 11:06:52 crc kubenswrapper[4972]: I1121 11:06:52.869914 4972 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2vfxp\" (UniqueName: \"kubernetes.io/projected/0472ee0e-775b-4811-b6c4-b8fec0706108-kube-api-access-2vfxp\") pod \"redhat-marketplace-gskzt\" (UID: \"0472ee0e-775b-4811-b6c4-b8fec0706108\") " pod="openshift-marketplace/redhat-marketplace-gskzt" Nov 21 11:06:52 crc kubenswrapper[4972]: I1121 11:06:52.961327 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gskzt" Nov 21 11:06:53 crc kubenswrapper[4972]: I1121 11:06:53.399613 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gskzt"] Nov 21 11:06:54 crc kubenswrapper[4972]: I1121 11:06:54.102000 4972 generic.go:334] "Generic (PLEG): container finished" podID="0472ee0e-775b-4811-b6c4-b8fec0706108" containerID="60de23c60a8d8ec688aaf74754cab65e15fa29a2b86daa4abeece119e2f6c171" exitCode=0 Nov 21 11:06:54 crc kubenswrapper[4972]: I1121 11:06:54.102060 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gskzt" event={"ID":"0472ee0e-775b-4811-b6c4-b8fec0706108","Type":"ContainerDied","Data":"60de23c60a8d8ec688aaf74754cab65e15fa29a2b86daa4abeece119e2f6c171"} Nov 21 11:06:54 crc kubenswrapper[4972]: I1121 11:06:54.102098 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gskzt" event={"ID":"0472ee0e-775b-4811-b6c4-b8fec0706108","Type":"ContainerStarted","Data":"a09caddba2e8e221cbc7ca679223da0dc332c7589381b7b3e4196f254983aca4"} Nov 21 11:06:55 crc kubenswrapper[4972]: I1121 11:06:55.113527 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gskzt" event={"ID":"0472ee0e-775b-4811-b6c4-b8fec0706108","Type":"ContainerStarted","Data":"95345f1876cbebd0f8041eb2a54ef9ba7f9630325de1e31e431a0042a5324817"} Nov 21 11:06:56 crc kubenswrapper[4972]: I1121 11:06:56.122903 4972 generic.go:334] "Generic (PLEG): container finished" podID="0472ee0e-775b-4811-b6c4-b8fec0706108" containerID="95345f1876cbebd0f8041eb2a54ef9ba7f9630325de1e31e431a0042a5324817" exitCode=0 Nov 21 11:06:56 crc kubenswrapper[4972]: I1121 11:06:56.123008 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gskzt" event={"ID":"0472ee0e-775b-4811-b6c4-b8fec0706108","Type":"ContainerDied","Data":"95345f1876cbebd0f8041eb2a54ef9ba7f9630325de1e31e431a0042a5324817"} Nov 21 11:06:57 crc kubenswrapper[4972]: I1121 11:06:57.136939 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gskzt" event={"ID":"0472ee0e-775b-4811-b6c4-b8fec0706108","Type":"ContainerStarted","Data":"888af8f9792819f46db99782eef6bd5332026f8d47a10f13d66577a5d166f8a8"} Nov 21 11:06:57 crc kubenswrapper[4972]: I1121 11:06:57.159455 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gskzt" podStartSLOduration=2.68325347 podStartE2EDuration="5.159429933s" podCreationTimestamp="2025-11-21 11:06:52 +0000 UTC" firstStartedPulling="2025-11-21 11:06:54.104027334 +0000 UTC m=+5159.213169832" lastFinishedPulling="2025-11-21 11:06:56.580203787 +0000 UTC m=+5161.689346295" observedRunningTime="2025-11-21 11:06:57.156115665 +0000 UTC m=+5162.265258183" watchObservedRunningTime="2025-11-21 11:06:57.159429933 +0000 UTC m=+5162.268572451" Nov 21 11:06:57 crc kubenswrapper[4972]: I1121 11:06:57.760444 4972 scope.go:117] "RemoveContainer" 
containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:06:57 crc kubenswrapper[4972]: E1121 11:06:57.760651 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:07:02 crc kubenswrapper[4972]: I1121 11:07:02.961450 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gskzt" Nov 21 11:07:02 crc kubenswrapper[4972]: I1121 11:07:02.964012 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gskzt" Nov 21 11:07:03 crc kubenswrapper[4972]: I1121 11:07:03.041148 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gskzt" Nov 21 11:07:03 crc kubenswrapper[4972]: I1121 11:07:03.265746 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gskzt" Nov 21 11:07:03 crc kubenswrapper[4972]: I1121 11:07:03.335371 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gskzt"] Nov 21 11:07:05 crc kubenswrapper[4972]: I1121 11:07:05.214970 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gskzt" podUID="0472ee0e-775b-4811-b6c4-b8fec0706108" containerName="registry-server" containerID="cri-o://888af8f9792819f46db99782eef6bd5332026f8d47a10f13d66577a5d166f8a8" gracePeriod=2 Nov 21 11:07:05 crc kubenswrapper[4972]: I1121 11:07:05.625990 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gskzt" Nov 21 11:07:05 crc kubenswrapper[4972]: I1121 11:07:05.715017 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0472ee0e-775b-4811-b6c4-b8fec0706108-catalog-content\") pod \"0472ee0e-775b-4811-b6c4-b8fec0706108\" (UID: \"0472ee0e-775b-4811-b6c4-b8fec0706108\") " Nov 21 11:07:05 crc kubenswrapper[4972]: I1121 11:07:05.715067 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0472ee0e-775b-4811-b6c4-b8fec0706108-utilities\") pod \"0472ee0e-775b-4811-b6c4-b8fec0706108\" (UID: \"0472ee0e-775b-4811-b6c4-b8fec0706108\") " Nov 21 11:07:05 crc kubenswrapper[4972]: I1121 11:07:05.715123 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vfxp\" (UniqueName: \"kubernetes.io/projected/0472ee0e-775b-4811-b6c4-b8fec0706108-kube-api-access-2vfxp\") pod \"0472ee0e-775b-4811-b6c4-b8fec0706108\" (UID: \"0472ee0e-775b-4811-b6c4-b8fec0706108\") " Nov 21 11:07:05 crc kubenswrapper[4972]: I1121 11:07:05.716960 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0472ee0e-775b-4811-b6c4-b8fec0706108-utilities" (OuterVolumeSpecName: "utilities") pod "0472ee0e-775b-4811-b6c4-b8fec0706108" (UID: "0472ee0e-775b-4811-b6c4-b8fec0706108"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:07:05 crc kubenswrapper[4972]: I1121 11:07:05.726253 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0472ee0e-775b-4811-b6c4-b8fec0706108-kube-api-access-2vfxp" (OuterVolumeSpecName: "kube-api-access-2vfxp") pod "0472ee0e-775b-4811-b6c4-b8fec0706108" (UID: "0472ee0e-775b-4811-b6c4-b8fec0706108"). InnerVolumeSpecName "kube-api-access-2vfxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:07:05 crc kubenswrapper[4972]: I1121 11:07:05.743327 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0472ee0e-775b-4811-b6c4-b8fec0706108-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0472ee0e-775b-4811-b6c4-b8fec0706108" (UID: "0472ee0e-775b-4811-b6c4-b8fec0706108"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:07:05 crc kubenswrapper[4972]: I1121 11:07:05.816623 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0472ee0e-775b-4811-b6c4-b8fec0706108-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 11:07:05 crc kubenswrapper[4972]: I1121 11:07:05.816666 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0472ee0e-775b-4811-b6c4-b8fec0706108-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 11:07:05 crc kubenswrapper[4972]: I1121 11:07:05.816679 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vfxp\" (UniqueName: \"kubernetes.io/projected/0472ee0e-775b-4811-b6c4-b8fec0706108-kube-api-access-2vfxp\") on node \"crc\" DevicePath \"\"" Nov 21 11:07:06 crc kubenswrapper[4972]: I1121 11:07:06.228014 4972 generic.go:334] "Generic (PLEG): container finished" podID="0472ee0e-775b-4811-b6c4-b8fec0706108" containerID="888af8f9792819f46db99782eef6bd5332026f8d47a10f13d66577a5d166f8a8" exitCode=0 Nov 21 11:07:06 crc kubenswrapper[4972]: I1121 11:07:06.228099 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gskzt" event={"ID":"0472ee0e-775b-4811-b6c4-b8fec0706108","Type":"ContainerDied","Data":"888af8f9792819f46db99782eef6bd5332026f8d47a10f13d66577a5d166f8a8"} Nov 21 11:07:06 crc kubenswrapper[4972]: I1121 11:07:06.228144 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gskzt" Nov 21 11:07:06 crc kubenswrapper[4972]: I1121 11:07:06.228178 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gskzt" event={"ID":"0472ee0e-775b-4811-b6c4-b8fec0706108","Type":"ContainerDied","Data":"a09caddba2e8e221cbc7ca679223da0dc332c7589381b7b3e4196f254983aca4"} Nov 21 11:07:06 crc kubenswrapper[4972]: I1121 11:07:06.228202 4972 scope.go:117] "RemoveContainer" containerID="888af8f9792819f46db99782eef6bd5332026f8d47a10f13d66577a5d166f8a8" Nov 21 11:07:06 crc kubenswrapper[4972]: I1121 11:07:06.252236 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gskzt"] Nov 21 11:07:06 crc kubenswrapper[4972]: I1121 11:07:06.259760 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gskzt"] Nov 21 11:07:06 crc kubenswrapper[4972]: I1121 11:07:06.263176 4972 scope.go:117] "RemoveContainer" containerID="95345f1876cbebd0f8041eb2a54ef9ba7f9630325de1e31e431a0042a5324817" Nov 21 11:07:06 crc kubenswrapper[4972]: I1121 11:07:06.296584 4972 scope.go:117] "RemoveContainer" containerID="60de23c60a8d8ec688aaf74754cab65e15fa29a2b86daa4abeece119e2f6c171" Nov 21 11:07:06 crc kubenswrapper[4972]: I1121 11:07:06.321659 4972 scope.go:117] "RemoveContainer" containerID="888af8f9792819f46db99782eef6bd5332026f8d47a10f13d66577a5d166f8a8" Nov 21 11:07:06 crc kubenswrapper[4972]: E1121 11:07:06.322391 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"888af8f9792819f46db99782eef6bd5332026f8d47a10f13d66577a5d166f8a8\": container with ID starting with 888af8f9792819f46db99782eef6bd5332026f8d47a10f13d66577a5d166f8a8 not found: ID does not exist" containerID="888af8f9792819f46db99782eef6bd5332026f8d47a10f13d66577a5d166f8a8" Nov 21 11:07:06 crc kubenswrapper[4972]: I1121 11:07:06.322422 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"888af8f9792819f46db99782eef6bd5332026f8d47a10f13d66577a5d166f8a8"} err="failed to get container status \"888af8f9792819f46db99782eef6bd5332026f8d47a10f13d66577a5d166f8a8\": rpc error: code = NotFound desc = could not find container \"888af8f9792819f46db99782eef6bd5332026f8d47a10f13d66577a5d166f8a8\": container with ID starting with 888af8f9792819f46db99782eef6bd5332026f8d47a10f13d66577a5d166f8a8 not found: ID does not exist" Nov 21 11:07:06 crc kubenswrapper[4972]: I1121 11:07:06.322444 4972 scope.go:117] "RemoveContainer" containerID="95345f1876cbebd0f8041eb2a54ef9ba7f9630325de1e31e431a0042a5324817" Nov 21 11:07:06 crc kubenswrapper[4972]: E1121 11:07:06.322827 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95345f1876cbebd0f8041eb2a54ef9ba7f9630325de1e31e431a0042a5324817\": container with ID starting with 95345f1876cbebd0f8041eb2a54ef9ba7f9630325de1e31e431a0042a5324817 not found: ID does not exist" containerID="95345f1876cbebd0f8041eb2a54ef9ba7f9630325de1e31e431a0042a5324817" Nov 21 11:07:06 crc kubenswrapper[4972]: I1121 11:07:06.323020 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95345f1876cbebd0f8041eb2a54ef9ba7f9630325de1e31e431a0042a5324817"} err="failed to get container status \"95345f1876cbebd0f8041eb2a54ef9ba7f9630325de1e31e431a0042a5324817\": rpc error: code = NotFound desc = could not find 
container \"95345f1876cbebd0f8041eb2a54ef9ba7f9630325de1e31e431a0042a5324817\": container with ID starting with 95345f1876cbebd0f8041eb2a54ef9ba7f9630325de1e31e431a0042a5324817 not found: ID does not exist" Nov 21 11:07:06 crc kubenswrapper[4972]: I1121 11:07:06.323229 4972 scope.go:117] "RemoveContainer" containerID="60de23c60a8d8ec688aaf74754cab65e15fa29a2b86daa4abeece119e2f6c171" Nov 21 11:07:06 crc kubenswrapper[4972]: E1121 11:07:06.323795 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60de23c60a8d8ec688aaf74754cab65e15fa29a2b86daa4abeece119e2f6c171\": container with ID starting with 60de23c60a8d8ec688aaf74754cab65e15fa29a2b86daa4abeece119e2f6c171 not found: ID does not exist" containerID="60de23c60a8d8ec688aaf74754cab65e15fa29a2b86daa4abeece119e2f6c171" Nov 21 11:07:06 crc kubenswrapper[4972]: I1121 11:07:06.323823 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60de23c60a8d8ec688aaf74754cab65e15fa29a2b86daa4abeece119e2f6c171"} err="failed to get container status \"60de23c60a8d8ec688aaf74754cab65e15fa29a2b86daa4abeece119e2f6c171\": rpc error: code = NotFound desc = could not find container \"60de23c60a8d8ec688aaf74754cab65e15fa29a2b86daa4abeece119e2f6c171\": container with ID starting with 60de23c60a8d8ec688aaf74754cab65e15fa29a2b86daa4abeece119e2f6c171 not found: ID does not exist" Nov 21 11:07:07 crc kubenswrapper[4972]: I1121 11:07:07.772511 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0472ee0e-775b-4811-b6c4-b8fec0706108" path="/var/lib/kubelet/pods/0472ee0e-775b-4811-b6c4-b8fec0706108/volumes" Nov 21 11:07:09 crc kubenswrapper[4972]: I1121 11:07:09.760312 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:07:09 crc kubenswrapper[4972]: E1121 11:07:09.761236 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:07:21 crc kubenswrapper[4972]: I1121 11:07:21.736456 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mzdzm"] Nov 21 11:07:21 crc kubenswrapper[4972]: E1121 11:07:21.737692 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0472ee0e-775b-4811-b6c4-b8fec0706108" containerName="extract-content" Nov 21 11:07:21 crc kubenswrapper[4972]: I1121 11:07:21.737710 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0472ee0e-775b-4811-b6c4-b8fec0706108" containerName="extract-content" Nov 21 11:07:21 crc kubenswrapper[4972]: E1121 11:07:21.737750 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0472ee0e-775b-4811-b6c4-b8fec0706108" containerName="registry-server" Nov 21 11:07:21 crc kubenswrapper[4972]: I1121 11:07:21.737759 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0472ee0e-775b-4811-b6c4-b8fec0706108" containerName="registry-server" Nov 21 11:07:21 crc kubenswrapper[4972]: E1121 11:07:21.737770 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0472ee0e-775b-4811-b6c4-b8fec0706108" containerName="extract-utilities" Nov 21 11:07:21 
crc kubenswrapper[4972]: I1121 11:07:21.737779 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0472ee0e-775b-4811-b6c4-b8fec0706108" containerName="extract-utilities" Nov 21 11:07:21 crc kubenswrapper[4972]: I1121 11:07:21.737970 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="0472ee0e-775b-4811-b6c4-b8fec0706108" containerName="registry-server" Nov 21 11:07:21 crc kubenswrapper[4972]: I1121 11:07:21.739237 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzdzm" Nov 21 11:07:21 crc kubenswrapper[4972]: I1121 11:07:21.746870 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzdzm"] Nov 21 11:07:21 crc kubenswrapper[4972]: I1121 11:07:21.813242 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f82de57e-cc13-44fb-b393-74e3cab92a01-utilities\") pod \"certified-operators-mzdzm\" (UID: \"f82de57e-cc13-44fb-b393-74e3cab92a01\") " pod="openshift-marketplace/certified-operators-mzdzm" Nov 21 11:07:21 crc kubenswrapper[4972]: I1121 11:07:21.813399 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f82de57e-cc13-44fb-b393-74e3cab92a01-catalog-content\") pod \"certified-operators-mzdzm\" (UID: \"f82de57e-cc13-44fb-b393-74e3cab92a01\") " pod="openshift-marketplace/certified-operators-mzdzm" Nov 21 11:07:21 crc kubenswrapper[4972]: I1121 11:07:21.813436 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb7tv\" (UniqueName: \"kubernetes.io/projected/f82de57e-cc13-44fb-b393-74e3cab92a01-kube-api-access-kb7tv\") pod \"certified-operators-mzdzm\" (UID: \"f82de57e-cc13-44fb-b393-74e3cab92a01\") " pod="openshift-marketplace/certified-operators-mzdzm" Nov 21 11:07:21 crc kubenswrapper[4972]: I1121 11:07:21.915051 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f82de57e-cc13-44fb-b393-74e3cab92a01-utilities\") pod \"certified-operators-mzdzm\" (UID: \"f82de57e-cc13-44fb-b393-74e3cab92a01\") " pod="openshift-marketplace/certified-operators-mzdzm" Nov 21 11:07:21 crc kubenswrapper[4972]: I1121 11:07:21.915544 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f82de57e-cc13-44fb-b393-74e3cab92a01-catalog-content\") pod \"certified-operators-mzdzm\" (UID: \"f82de57e-cc13-44fb-b393-74e3cab92a01\") " pod="openshift-marketplace/certified-operators-mzdzm" Nov 21 11:07:21 crc kubenswrapper[4972]: I1121 11:07:21.915588 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb7tv\" (UniqueName: \"kubernetes.io/projected/f82de57e-cc13-44fb-b393-74e3cab92a01-kube-api-access-kb7tv\") pod \"certified-operators-mzdzm\" (UID: \"f82de57e-cc13-44fb-b393-74e3cab92a01\") " pod="openshift-marketplace/certified-operators-mzdzm" Nov 21 11:07:21 crc kubenswrapper[4972]: I1121 11:07:21.916022 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f82de57e-cc13-44fb-b393-74e3cab92a01-utilities\") pod \"certified-operators-mzdzm\" (UID: \"f82de57e-cc13-44fb-b393-74e3cab92a01\") " 
pod="openshift-marketplace/certified-operators-mzdzm" Nov 21 11:07:21 crc kubenswrapper[4972]: I1121 11:07:21.916079 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f82de57e-cc13-44fb-b393-74e3cab92a01-catalog-content\") pod \"certified-operators-mzdzm\" (UID: \"f82de57e-cc13-44fb-b393-74e3cab92a01\") " pod="openshift-marketplace/certified-operators-mzdzm" Nov 21 11:07:21 crc kubenswrapper[4972]: I1121 11:07:21.938130 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb7tv\" (UniqueName: \"kubernetes.io/projected/f82de57e-cc13-44fb-b393-74e3cab92a01-kube-api-access-kb7tv\") pod \"certified-operators-mzdzm\" (UID: \"f82de57e-cc13-44fb-b393-74e3cab92a01\") " pod="openshift-marketplace/certified-operators-mzdzm" Nov 21 11:07:22 crc kubenswrapper[4972]: I1121 11:07:22.067570 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzdzm" Nov 21 11:07:22 crc kubenswrapper[4972]: I1121 11:07:22.575558 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzdzm"] Nov 21 11:07:23 crc kubenswrapper[4972]: I1121 11:07:23.416430 4972 generic.go:334] "Generic (PLEG): container finished" podID="f82de57e-cc13-44fb-b393-74e3cab92a01" containerID="1a844b8d09573d9c09a454ca32b28371a4c0f499579d65b64bd8f5448a089b42" exitCode=0 Nov 21 11:07:23 crc kubenswrapper[4972]: I1121 11:07:23.416671 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzdzm" event={"ID":"f82de57e-cc13-44fb-b393-74e3cab92a01","Type":"ContainerDied","Data":"1a844b8d09573d9c09a454ca32b28371a4c0f499579d65b64bd8f5448a089b42"} Nov 21 11:07:23 crc kubenswrapper[4972]: I1121 11:07:23.420072 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzdzm" event={"ID":"f82de57e-cc13-44fb-b393-74e3cab92a01","Type":"ContainerStarted","Data":"135658449fa6c67dcb0e7a3007d90a20c98258c3625dbd3bf58b44398bf85ade"} Nov 21 11:07:24 crc kubenswrapper[4972]: I1121 11:07:24.430779 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzdzm" event={"ID":"f82de57e-cc13-44fb-b393-74e3cab92a01","Type":"ContainerStarted","Data":"f3113c6c862b6eed0449fe7519479cc9a53915cba48d10642554e2e8a5d51a22"} Nov 21 11:07:24 crc kubenswrapper[4972]: I1121 11:07:24.760287 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:07:24 crc kubenswrapper[4972]: E1121 11:07:24.760795 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:07:25 crc kubenswrapper[4972]: I1121 11:07:25.439913 4972 generic.go:334] "Generic (PLEG): container finished" podID="f82de57e-cc13-44fb-b393-74e3cab92a01" containerID="f3113c6c862b6eed0449fe7519479cc9a53915cba48d10642554e2e8a5d51a22" exitCode=0 Nov 21 11:07:25 crc kubenswrapper[4972]: I1121 11:07:25.439962 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzdzm" 
event={"ID":"f82de57e-cc13-44fb-b393-74e3cab92a01","Type":"ContainerDied","Data":"f3113c6c862b6eed0449fe7519479cc9a53915cba48d10642554e2e8a5d51a22"} Nov 21 11:07:27 crc kubenswrapper[4972]: I1121 11:07:27.572363 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzdzm" event={"ID":"f82de57e-cc13-44fb-b393-74e3cab92a01","Type":"ContainerStarted","Data":"8f52e0889341e84165c5bb996d3c4968e5e9fdf9f66fec131e9811e949815244"} Nov 21 11:07:27 crc kubenswrapper[4972]: I1121 11:07:27.598374 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mzdzm" podStartSLOduration=3.956490312 podStartE2EDuration="6.598357013s" podCreationTimestamp="2025-11-21 11:07:21 +0000 UTC" firstStartedPulling="2025-11-21 11:07:23.420063725 +0000 UTC m=+5188.529206273" lastFinishedPulling="2025-11-21 11:07:26.061930476 +0000 UTC m=+5191.171072974" observedRunningTime="2025-11-21 11:07:27.596808211 +0000 UTC m=+5192.705950729" watchObservedRunningTime="2025-11-21 11:07:27.598357013 +0000 UTC m=+5192.707499511" Nov 21 11:07:32 crc kubenswrapper[4972]: I1121 11:07:32.067947 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mzdzm" Nov 21 11:07:32 crc kubenswrapper[4972]: I1121 11:07:32.068582 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mzdzm" Nov 21 11:07:32 crc kubenswrapper[4972]: I1121 11:07:32.138407 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mzdzm" Nov 21 11:07:32 crc kubenswrapper[4972]: I1121 11:07:32.711128 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mzdzm" Nov 21 11:07:32 crc kubenswrapper[4972]: I1121 11:07:32.768638 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mzdzm"] Nov 21 11:07:34 crc kubenswrapper[4972]: I1121 11:07:34.655506 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mzdzm" podUID="f82de57e-cc13-44fb-b393-74e3cab92a01" containerName="registry-server" containerID="cri-o://8f52e0889341e84165c5bb996d3c4968e5e9fdf9f66fec131e9811e949815244" gracePeriod=2 Nov 21 11:07:35 crc kubenswrapper[4972]: I1121 11:07:35.665629 4972 generic.go:334] "Generic (PLEG): container finished" podID="f82de57e-cc13-44fb-b393-74e3cab92a01" containerID="8f52e0889341e84165c5bb996d3c4968e5e9fdf9f66fec131e9811e949815244" exitCode=0 Nov 21 11:07:35 crc kubenswrapper[4972]: I1121 11:07:35.665807 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzdzm" event={"ID":"f82de57e-cc13-44fb-b393-74e3cab92a01","Type":"ContainerDied","Data":"8f52e0889341e84165c5bb996d3c4968e5e9fdf9f66fec131e9811e949815244"} Nov 21 11:07:35 crc kubenswrapper[4972]: I1121 11:07:35.666138 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzdzm" event={"ID":"f82de57e-cc13-44fb-b393-74e3cab92a01","Type":"ContainerDied","Data":"135658449fa6c67dcb0e7a3007d90a20c98258c3625dbd3bf58b44398bf85ade"} Nov 21 11:07:35 crc kubenswrapper[4972]: I1121 11:07:35.666161 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="135658449fa6c67dcb0e7a3007d90a20c98258c3625dbd3bf58b44398bf85ade" Nov 21 11:07:35 crc 
kubenswrapper[4972]: I1121 11:07:35.737326 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzdzm" Nov 21 11:07:35 crc kubenswrapper[4972]: I1121 11:07:35.760724 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:07:35 crc kubenswrapper[4972]: E1121 11:07:35.761032 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:07:35 crc kubenswrapper[4972]: I1121 11:07:35.872849 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f82de57e-cc13-44fb-b393-74e3cab92a01-catalog-content\") pod \"f82de57e-cc13-44fb-b393-74e3cab92a01\" (UID: \"f82de57e-cc13-44fb-b393-74e3cab92a01\") " Nov 21 11:07:35 crc kubenswrapper[4972]: I1121 11:07:35.872997 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb7tv\" (UniqueName: \"kubernetes.io/projected/f82de57e-cc13-44fb-b393-74e3cab92a01-kube-api-access-kb7tv\") pod \"f82de57e-cc13-44fb-b393-74e3cab92a01\" (UID: \"f82de57e-cc13-44fb-b393-74e3cab92a01\") " Nov 21 11:07:35 crc kubenswrapper[4972]: I1121 11:07:35.873085 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f82de57e-cc13-44fb-b393-74e3cab92a01-utilities\") pod \"f82de57e-cc13-44fb-b393-74e3cab92a01\" (UID: \"f82de57e-cc13-44fb-b393-74e3cab92a01\") " Nov 21 11:07:35 crc kubenswrapper[4972]: I1121 11:07:35.874478 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f82de57e-cc13-44fb-b393-74e3cab92a01-utilities" (OuterVolumeSpecName: "utilities") pod "f82de57e-cc13-44fb-b393-74e3cab92a01" (UID: "f82de57e-cc13-44fb-b393-74e3cab92a01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:07:35 crc kubenswrapper[4972]: I1121 11:07:35.878349 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f82de57e-cc13-44fb-b393-74e3cab92a01-kube-api-access-kb7tv" (OuterVolumeSpecName: "kube-api-access-kb7tv") pod "f82de57e-cc13-44fb-b393-74e3cab92a01" (UID: "f82de57e-cc13-44fb-b393-74e3cab92a01"). InnerVolumeSpecName "kube-api-access-kb7tv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:07:35 crc kubenswrapper[4972]: I1121 11:07:35.921529 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f82de57e-cc13-44fb-b393-74e3cab92a01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f82de57e-cc13-44fb-b393-74e3cab92a01" (UID: "f82de57e-cc13-44fb-b393-74e3cab92a01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:07:35 crc kubenswrapper[4972]: I1121 11:07:35.975235 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f82de57e-cc13-44fb-b393-74e3cab92a01-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 11:07:35 crc kubenswrapper[4972]: I1121 11:07:35.975276 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f82de57e-cc13-44fb-b393-74e3cab92a01-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 11:07:35 crc kubenswrapper[4972]: I1121 11:07:35.975403 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb7tv\" (UniqueName: \"kubernetes.io/projected/f82de57e-cc13-44fb-b393-74e3cab92a01-kube-api-access-kb7tv\") on node \"crc\" DevicePath \"\"" Nov 21 11:07:36 crc kubenswrapper[4972]: I1121 11:07:36.676712 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzdzm" Nov 21 11:07:36 crc kubenswrapper[4972]: I1121 11:07:36.729786 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mzdzm"] Nov 21 11:07:36 crc kubenswrapper[4972]: I1121 11:07:36.738745 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mzdzm"] Nov 21 11:07:37 crc kubenswrapper[4972]: I1121 11:07:37.771896 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f82de57e-cc13-44fb-b393-74e3cab92a01" path="/var/lib/kubelet/pods/f82de57e-cc13-44fb-b393-74e3cab92a01/volumes" Nov 21 11:07:46 crc kubenswrapper[4972]: I1121 11:07:46.759718 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:07:46 crc kubenswrapper[4972]: E1121 11:07:46.760765 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:07:58 crc kubenswrapper[4972]: I1121 11:07:58.759754 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:07:58 crc kubenswrapper[4972]: E1121 11:07:58.760688 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:08:09 crc kubenswrapper[4972]: I1121 11:08:09.759239 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:08:09 crc kubenswrapper[4972]: E1121 11:08:09.759888 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:08:23 crc kubenswrapper[4972]: I1121 11:08:23.761216 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:08:23 crc kubenswrapper[4972]: E1121 11:08:23.762347 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:08:26 crc kubenswrapper[4972]: I1121 11:08:26.118306 4972 scope.go:117] "RemoveContainer" containerID="0b203a41d0e248541371ea3557ea64ac272e62c5d997d40e77a4041303b8e8ac" Nov 21 11:08:38 crc kubenswrapper[4972]: I1121 11:08:38.759871 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:08:38 crc kubenswrapper[4972]: E1121 11:08:38.760524 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:08:52 crc kubenswrapper[4972]: I1121 11:08:52.759931 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:08:52 crc kubenswrapper[4972]: E1121 11:08:52.761063 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:08:53 crc kubenswrapper[4972]: I1121 11:08:53.261111 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Nov 21 11:08:53 crc kubenswrapper[4972]: E1121 11:08:53.261672 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f82de57e-cc13-44fb-b393-74e3cab92a01" containerName="extract-utilities" Nov 21 11:08:53 crc kubenswrapper[4972]: I1121 11:08:53.261706 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f82de57e-cc13-44fb-b393-74e3cab92a01" containerName="extract-utilities" Nov 21 11:08:53 crc kubenswrapper[4972]: E1121 11:08:53.261784 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f82de57e-cc13-44fb-b393-74e3cab92a01" containerName="registry-server" Nov 21 11:08:53 crc kubenswrapper[4972]: I1121 11:08:53.261802 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f82de57e-cc13-44fb-b393-74e3cab92a01" containerName="registry-server" Nov 21 11:08:53 crc kubenswrapper[4972]: E1121 11:08:53.261871 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f82de57e-cc13-44fb-b393-74e3cab92a01" 
containerName="extract-content" Nov 21 11:08:53 crc kubenswrapper[4972]: I1121 11:08:53.261892 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f82de57e-cc13-44fb-b393-74e3cab92a01" containerName="extract-content" Nov 21 11:08:53 crc kubenswrapper[4972]: I1121 11:08:53.262205 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f82de57e-cc13-44fb-b393-74e3cab92a01" containerName="registry-server" Nov 21 11:08:53 crc kubenswrapper[4972]: I1121 11:08:53.263253 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Nov 21 11:08:53 crc kubenswrapper[4972]: I1121 11:08:53.265347 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-65tvb" Nov 21 11:08:53 crc kubenswrapper[4972]: I1121 11:08:53.280686 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Nov 21 11:08:53 crc kubenswrapper[4972]: I1121 11:08:53.299938 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-34cf9df7-7f22-40b8-8798-ce85bee6cad5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34cf9df7-7f22-40b8-8798-ce85bee6cad5\") pod \"mariadb-copy-data\" (UID: \"9c79ff96-e437-4d94-8749-a3c53ff6a366\") " pod="openstack/mariadb-copy-data" Nov 21 11:08:53 crc kubenswrapper[4972]: I1121 11:08:53.300221 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv8lk\" (UniqueName: \"kubernetes.io/projected/9c79ff96-e437-4d94-8749-a3c53ff6a366-kube-api-access-rv8lk\") pod \"mariadb-copy-data\" (UID: \"9c79ff96-e437-4d94-8749-a3c53ff6a366\") " pod="openstack/mariadb-copy-data" Nov 21 11:08:53 crc kubenswrapper[4972]: I1121 11:08:53.401548 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-34cf9df7-7f22-40b8-8798-ce85bee6cad5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34cf9df7-7f22-40b8-8798-ce85bee6cad5\") pod \"mariadb-copy-data\" (UID: \"9c79ff96-e437-4d94-8749-a3c53ff6a366\") " pod="openstack/mariadb-copy-data" Nov 21 11:08:53 crc kubenswrapper[4972]: I1121 11:08:53.401655 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv8lk\" (UniqueName: \"kubernetes.io/projected/9c79ff96-e437-4d94-8749-a3c53ff6a366-kube-api-access-rv8lk\") pod \"mariadb-copy-data\" (UID: \"9c79ff96-e437-4d94-8749-a3c53ff6a366\") " pod="openstack/mariadb-copy-data" Nov 21 11:08:53 crc kubenswrapper[4972]: I1121 11:08:53.406074 4972 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 21 11:08:53 crc kubenswrapper[4972]: I1121 11:08:53.406513 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-34cf9df7-7f22-40b8-8798-ce85bee6cad5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34cf9df7-7f22-40b8-8798-ce85bee6cad5\") pod \"mariadb-copy-data\" (UID: \"9c79ff96-e437-4d94-8749-a3c53ff6a366\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/841ab492425fcfaa2b1b988e9b2b579fe4b37dc61e1f175af08e5b52b6055859/globalmount\"" pod="openstack/mariadb-copy-data" Nov 21 11:08:53 crc kubenswrapper[4972]: I1121 11:08:53.440678 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv8lk\" (UniqueName: \"kubernetes.io/projected/9c79ff96-e437-4d94-8749-a3c53ff6a366-kube-api-access-rv8lk\") pod \"mariadb-copy-data\" (UID: \"9c79ff96-e437-4d94-8749-a3c53ff6a366\") " pod="openstack/mariadb-copy-data" Nov 21 11:08:53 crc kubenswrapper[4972]: I1121 11:08:53.456679 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-34cf9df7-7f22-40b8-8798-ce85bee6cad5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34cf9df7-7f22-40b8-8798-ce85bee6cad5\") pod \"mariadb-copy-data\" (UID: \"9c79ff96-e437-4d94-8749-a3c53ff6a366\") " pod="openstack/mariadb-copy-data" Nov 21 11:08:53 crc kubenswrapper[4972]: I1121 11:08:53.588201 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Nov 21 11:08:53 crc kubenswrapper[4972]: I1121 11:08:53.917294 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Nov 21 11:08:54 crc kubenswrapper[4972]: I1121 11:08:54.443524 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"9c79ff96-e437-4d94-8749-a3c53ff6a366","Type":"ContainerStarted","Data":"413698195f11e5a19af6004a36e80b89339171157e08d0a1d6efa7dff196dfd1"} Nov 21 11:08:54 crc kubenswrapper[4972]: I1121 11:08:54.443786 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"9c79ff96-e437-4d94-8749-a3c53ff6a366","Type":"ContainerStarted","Data":"307cf43032bdad3742e1a1b0a7788155645fb5984f5fb99da2276e6e1eff7dce"} Nov 21 11:08:54 crc kubenswrapper[4972]: I1121 11:08:54.466074 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=2.466048298 podStartE2EDuration="2.466048298s" podCreationTimestamp="2025-11-21 11:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:08:54.464481207 +0000 UTC m=+5279.573623725" watchObservedRunningTime="2025-11-21 11:08:54.466048298 +0000 UTC m=+5279.575190846" Nov 21 11:08:57 crc kubenswrapper[4972]: I1121 11:08:57.703593 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Nov 21 11:08:57 crc kubenswrapper[4972]: I1121 11:08:57.706017 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 21 11:08:57 crc kubenswrapper[4972]: I1121 11:08:57.712402 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 21 11:08:57 crc kubenswrapper[4972]: I1121 11:08:57.874008 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llrt5\" (UniqueName: \"kubernetes.io/projected/cb148b3b-2d21-45ff-9f22-7d472b7c6e08-kube-api-access-llrt5\") pod \"mariadb-client\" (UID: \"cb148b3b-2d21-45ff-9f22-7d472b7c6e08\") " pod="openstack/mariadb-client" Nov 21 11:08:57 crc kubenswrapper[4972]: I1121 11:08:57.977046 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llrt5\" (UniqueName: \"kubernetes.io/projected/cb148b3b-2d21-45ff-9f22-7d472b7c6e08-kube-api-access-llrt5\") pod \"mariadb-client\" (UID: \"cb148b3b-2d21-45ff-9f22-7d472b7c6e08\") " pod="openstack/mariadb-client" Nov 21 11:08:58 crc kubenswrapper[4972]: I1121 11:08:58.013691 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llrt5\" (UniqueName: \"kubernetes.io/projected/cb148b3b-2d21-45ff-9f22-7d472b7c6e08-kube-api-access-llrt5\") pod \"mariadb-client\" (UID: \"cb148b3b-2d21-45ff-9f22-7d472b7c6e08\") " pod="openstack/mariadb-client" Nov 21 11:08:58 crc kubenswrapper[4972]: I1121 11:08:58.022930 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 21 11:08:58 crc kubenswrapper[4972]: I1121 11:08:58.445012 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 21 11:08:58 crc kubenswrapper[4972]: I1121 11:08:58.484677 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"cb148b3b-2d21-45ff-9f22-7d472b7c6e08","Type":"ContainerStarted","Data":"a6b36b0258d9c928505a7ed6ecdef7c71b4ba1073f33473df7216bca06283592"} Nov 21 11:08:59 crc kubenswrapper[4972]: I1121 11:08:59.493894 4972 generic.go:334] "Generic (PLEG): container finished" podID="cb148b3b-2d21-45ff-9f22-7d472b7c6e08" containerID="654a2b3aef63ad326a51c2e672e35464d88ffd577b778285db493f7d532fdddb" exitCode=0 Nov 21 11:08:59 crc kubenswrapper[4972]: I1121 11:08:59.494005 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"cb148b3b-2d21-45ff-9f22-7d472b7c6e08","Type":"ContainerDied","Data":"654a2b3aef63ad326a51c2e672e35464d88ffd577b778285db493f7d532fdddb"} Nov 21 11:09:00 crc kubenswrapper[4972]: I1121 11:09:00.250120 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-psg8b"] Nov 21 11:09:00 crc kubenswrapper[4972]: I1121 11:09:00.253100 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-psg8b" Nov 21 11:09:00 crc kubenswrapper[4972]: I1121 11:09:00.264919 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-psg8b"] Nov 21 11:09:00 crc kubenswrapper[4972]: I1121 11:09:00.431208 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac3dbca7-7ad9-4058-8250-85c6185e5f0b-catalog-content\") pod \"community-operators-psg8b\" (UID: \"ac3dbca7-7ad9-4058-8250-85c6185e5f0b\") " pod="openshift-marketplace/community-operators-psg8b" Nov 21 11:09:00 crc kubenswrapper[4972]: I1121 11:09:00.431349 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac3dbca7-7ad9-4058-8250-85c6185e5f0b-utilities\") pod \"community-operators-psg8b\" (UID: \"ac3dbca7-7ad9-4058-8250-85c6185e5f0b\") " pod="openshift-marketplace/community-operators-psg8b" Nov 21 11:09:00 crc kubenswrapper[4972]: I1121 11:09:00.431402 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n8s4\" (UniqueName: \"kubernetes.io/projected/ac3dbca7-7ad9-4058-8250-85c6185e5f0b-kube-api-access-7n8s4\") pod \"community-operators-psg8b\" (UID: \"ac3dbca7-7ad9-4058-8250-85c6185e5f0b\") " pod="openshift-marketplace/community-operators-psg8b" Nov 21 11:09:00 crc kubenswrapper[4972]: I1121 11:09:00.532321 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac3dbca7-7ad9-4058-8250-85c6185e5f0b-catalog-content\") pod \"community-operators-psg8b\" (UID: \"ac3dbca7-7ad9-4058-8250-85c6185e5f0b\") " pod="openshift-marketplace/community-operators-psg8b" Nov 21 11:09:00 crc kubenswrapper[4972]: I1121 11:09:00.532392 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac3dbca7-7ad9-4058-8250-85c6185e5f0b-utilities\") pod \"community-operators-psg8b\" (UID: \"ac3dbca7-7ad9-4058-8250-85c6185e5f0b\") " pod="openshift-marketplace/community-operators-psg8b" Nov 21 11:09:00 crc kubenswrapper[4972]: I1121 11:09:00.532417 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n8s4\" (UniqueName: \"kubernetes.io/projected/ac3dbca7-7ad9-4058-8250-85c6185e5f0b-kube-api-access-7n8s4\") pod \"community-operators-psg8b\" (UID: \"ac3dbca7-7ad9-4058-8250-85c6185e5f0b\") " pod="openshift-marketplace/community-operators-psg8b" Nov 21 11:09:00 crc kubenswrapper[4972]: I1121 11:09:00.533377 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac3dbca7-7ad9-4058-8250-85c6185e5f0b-catalog-content\") pod \"community-operators-psg8b\" (UID: \"ac3dbca7-7ad9-4058-8250-85c6185e5f0b\") " pod="openshift-marketplace/community-operators-psg8b" Nov 21 11:09:00 crc kubenswrapper[4972]: I1121 11:09:00.533604 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac3dbca7-7ad9-4058-8250-85c6185e5f0b-utilities\") pod \"community-operators-psg8b\" (UID: \"ac3dbca7-7ad9-4058-8250-85c6185e5f0b\") " pod="openshift-marketplace/community-operators-psg8b" Nov 21 11:09:00 crc kubenswrapper[4972]: I1121 11:09:00.561247 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7n8s4\" (UniqueName: \"kubernetes.io/projected/ac3dbca7-7ad9-4058-8250-85c6185e5f0b-kube-api-access-7n8s4\") pod \"community-operators-psg8b\" (UID: \"ac3dbca7-7ad9-4058-8250-85c6185e5f0b\") " pod="openshift-marketplace/community-operators-psg8b" Nov 21 11:09:00 crc kubenswrapper[4972]: I1121 11:09:00.591807 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-psg8b" Nov 21 11:09:00 crc kubenswrapper[4972]: I1121 11:09:00.943080 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 21 11:09:00 crc kubenswrapper[4972]: I1121 11:09:00.963084 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_cb148b3b-2d21-45ff-9f22-7d472b7c6e08/mariadb-client/0.log" Nov 21 11:09:00 crc kubenswrapper[4972]: I1121 11:09:00.988236 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Nov 21 11:09:00 crc kubenswrapper[4972]: I1121 11:09:00.993277 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Nov 21 11:09:01 crc kubenswrapper[4972]: I1121 11:09:01.040373 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llrt5\" (UniqueName: \"kubernetes.io/projected/cb148b3b-2d21-45ff-9f22-7d472b7c6e08-kube-api-access-llrt5\") pod \"cb148b3b-2d21-45ff-9f22-7d472b7c6e08\" (UID: \"cb148b3b-2d21-45ff-9f22-7d472b7c6e08\") " Nov 21 11:09:01 crc kubenswrapper[4972]: I1121 11:09:01.047538 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb148b3b-2d21-45ff-9f22-7d472b7c6e08-kube-api-access-llrt5" (OuterVolumeSpecName: "kube-api-access-llrt5") pod "cb148b3b-2d21-45ff-9f22-7d472b7c6e08" (UID: "cb148b3b-2d21-45ff-9f22-7d472b7c6e08"). InnerVolumeSpecName "kube-api-access-llrt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:09:01 crc kubenswrapper[4972]: I1121 11:09:01.123562 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Nov 21 11:09:01 crc kubenswrapper[4972]: E1121 11:09:01.123885 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb148b3b-2d21-45ff-9f22-7d472b7c6e08" containerName="mariadb-client" Nov 21 11:09:01 crc kubenswrapper[4972]: I1121 11:09:01.123903 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb148b3b-2d21-45ff-9f22-7d472b7c6e08" containerName="mariadb-client" Nov 21 11:09:01 crc kubenswrapper[4972]: I1121 11:09:01.124086 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb148b3b-2d21-45ff-9f22-7d472b7c6e08" containerName="mariadb-client" Nov 21 11:09:01 crc kubenswrapper[4972]: I1121 11:09:01.124593 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 21 11:09:01 crc kubenswrapper[4972]: I1121 11:09:01.133130 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 21 11:09:01 crc kubenswrapper[4972]: I1121 11:09:01.141875 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llrt5\" (UniqueName: \"kubernetes.io/projected/cb148b3b-2d21-45ff-9f22-7d472b7c6e08-kube-api-access-llrt5\") on node \"crc\" DevicePath \"\"" Nov 21 11:09:01 crc kubenswrapper[4972]: I1121 11:09:01.194897 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-psg8b"] Nov 21 11:09:01 crc kubenswrapper[4972]: I1121 11:09:01.242907 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fnqf\" (UniqueName: \"kubernetes.io/projected/d0e04be9-fd09-4328-82e3-09f281044ed5-kube-api-access-8fnqf\") pod \"mariadb-client\" (UID: \"d0e04be9-fd09-4328-82e3-09f281044ed5\") " pod="openstack/mariadb-client" Nov 21 11:09:01 crc kubenswrapper[4972]: I1121 11:09:01.344179 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fnqf\" (UniqueName: \"kubernetes.io/projected/d0e04be9-fd09-4328-82e3-09f281044ed5-kube-api-access-8fnqf\") pod \"mariadb-client\" (UID: \"d0e04be9-fd09-4328-82e3-09f281044ed5\") " pod="openstack/mariadb-client" Nov 21 11:09:01 crc kubenswrapper[4972]: I1121 11:09:01.367895 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fnqf\" (UniqueName: \"kubernetes.io/projected/d0e04be9-fd09-4328-82e3-09f281044ed5-kube-api-access-8fnqf\") pod \"mariadb-client\" (UID: \"d0e04be9-fd09-4328-82e3-09f281044ed5\") " pod="openstack/mariadb-client" Nov 21 11:09:01 crc kubenswrapper[4972]: I1121 11:09:01.443428 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 21 11:09:01 crc kubenswrapper[4972]: I1121 11:09:01.517508 4972 generic.go:334] "Generic (PLEG): container finished" podID="ac3dbca7-7ad9-4058-8250-85c6185e5f0b" containerID="24525964afe455031a3d8ec14eb2829e524375958e58fd47a74c1a6f08e6de3a" exitCode=0 Nov 21 11:09:01 crc kubenswrapper[4972]: I1121 11:09:01.517608 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psg8b" event={"ID":"ac3dbca7-7ad9-4058-8250-85c6185e5f0b","Type":"ContainerDied","Data":"24525964afe455031a3d8ec14eb2829e524375958e58fd47a74c1a6f08e6de3a"} Nov 21 11:09:01 crc kubenswrapper[4972]: I1121 11:09:01.517679 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psg8b" event={"ID":"ac3dbca7-7ad9-4058-8250-85c6185e5f0b","Type":"ContainerStarted","Data":"d10a7145bfeb729aa7eff0c66a1a8269b9df72d387bfb053e44771d970063865"} Nov 21 11:09:01 crc kubenswrapper[4972]: I1121 11:09:01.520554 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6b36b0258d9c928505a7ed6ecdef7c71b4ba1073f33473df7216bca06283592" Nov 21 11:09:01 crc kubenswrapper[4972]: I1121 11:09:01.520741 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 21 11:09:01 crc kubenswrapper[4972]: I1121 11:09:01.558316 4972 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="cb148b3b-2d21-45ff-9f22-7d472b7c6e08" podUID="d0e04be9-fd09-4328-82e3-09f281044ed5" Nov 21 11:09:01 crc kubenswrapper[4972]: I1121 11:09:01.775697 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb148b3b-2d21-45ff-9f22-7d472b7c6e08" path="/var/lib/kubelet/pods/cb148b3b-2d21-45ff-9f22-7d472b7c6e08/volumes" Nov 21 11:09:01 crc kubenswrapper[4972]: W1121 11:09:01.978261 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0e04be9_fd09_4328_82e3_09f281044ed5.slice/crio-f4d804a6a8be1c9af5bf8a8cefd425539dc155a95fa925f32057f1443f34fa87 WatchSource:0}: Error finding container f4d804a6a8be1c9af5bf8a8cefd425539dc155a95fa925f32057f1443f34fa87: Status 404 returned error can't find the container with id f4d804a6a8be1c9af5bf8a8cefd425539dc155a95fa925f32057f1443f34fa87 Nov 21 11:09:01 crc kubenswrapper[4972]: I1121 11:09:01.980119 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 21 11:09:02 crc kubenswrapper[4972]: I1121 11:09:02.532646 4972 generic.go:334] "Generic (PLEG): container finished" podID="d0e04be9-fd09-4328-82e3-09f281044ed5" containerID="ea97997b580b03c35c3ce2afba3df5d22a4afa054b6c2c2dfff85743f7607c82" exitCode=0 Nov 21 11:09:02 crc kubenswrapper[4972]: I1121 11:09:02.532705 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"d0e04be9-fd09-4328-82e3-09f281044ed5","Type":"ContainerDied","Data":"ea97997b580b03c35c3ce2afba3df5d22a4afa054b6c2c2dfff85743f7607c82"} Nov 21 11:09:02 crc kubenswrapper[4972]: I1121 11:09:02.533097 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"d0e04be9-fd09-4328-82e3-09f281044ed5","Type":"ContainerStarted","Data":"f4d804a6a8be1c9af5bf8a8cefd425539dc155a95fa925f32057f1443f34fa87"} Nov 21 11:09:02 crc kubenswrapper[4972]: I1121 11:09:02.535950 4972 generic.go:334] "Generic (PLEG): container finished" podID="ac3dbca7-7ad9-4058-8250-85c6185e5f0b" containerID="165783e9c80249a1a55d3dc63164158671a2d127ce9422ba56a8ee475bb1a4ee" exitCode=0 Nov 21 11:09:02 crc kubenswrapper[4972]: I1121 11:09:02.536010 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psg8b" event={"ID":"ac3dbca7-7ad9-4058-8250-85c6185e5f0b","Type":"ContainerDied","Data":"165783e9c80249a1a55d3dc63164158671a2d127ce9422ba56a8ee475bb1a4ee"} Nov 21 11:09:03 crc kubenswrapper[4972]: I1121 11:09:03.551042 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psg8b" event={"ID":"ac3dbca7-7ad9-4058-8250-85c6185e5f0b","Type":"ContainerStarted","Data":"a2274b77878047215e1e35fb7971208d515c221e1b4669ea9a8e263f3639f0a9"} Nov 21 11:09:03 crc kubenswrapper[4972]: I1121 11:09:03.578564 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-psg8b" podStartSLOduration=1.927550134 podStartE2EDuration="3.578541623s" podCreationTimestamp="2025-11-21 11:09:00 +0000 UTC" firstStartedPulling="2025-11-21 11:09:01.522062147 +0000 UTC m=+5286.631204675" lastFinishedPulling="2025-11-21 11:09:03.173053636 +0000 UTC m=+5288.282196164" observedRunningTime="2025-11-21 11:09:03.576126179 
+0000 UTC m=+5288.685268727" watchObservedRunningTime="2025-11-21 11:09:03.578541623 +0000 UTC m=+5288.687684131" Nov 21 11:09:03 crc kubenswrapper[4972]: I1121 11:09:03.991056 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 21 11:09:04 crc kubenswrapper[4972]: I1121 11:09:04.004722 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fnqf\" (UniqueName: \"kubernetes.io/projected/d0e04be9-fd09-4328-82e3-09f281044ed5-kube-api-access-8fnqf\") pod \"d0e04be9-fd09-4328-82e3-09f281044ed5\" (UID: \"d0e04be9-fd09-4328-82e3-09f281044ed5\") " Nov 21 11:09:04 crc kubenswrapper[4972]: I1121 11:09:04.011803 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_d0e04be9-fd09-4328-82e3-09f281044ed5/mariadb-client/0.log" Nov 21 11:09:04 crc kubenswrapper[4972]: I1121 11:09:04.012552 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e04be9-fd09-4328-82e3-09f281044ed5-kube-api-access-8fnqf" (OuterVolumeSpecName: "kube-api-access-8fnqf") pod "d0e04be9-fd09-4328-82e3-09f281044ed5" (UID: "d0e04be9-fd09-4328-82e3-09f281044ed5"). InnerVolumeSpecName "kube-api-access-8fnqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:09:04 crc kubenswrapper[4972]: I1121 11:09:04.035533 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Nov 21 11:09:04 crc kubenswrapper[4972]: I1121 11:09:04.039922 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Nov 21 11:09:04 crc kubenswrapper[4972]: I1121 11:09:04.106668 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fnqf\" (UniqueName: \"kubernetes.io/projected/d0e04be9-fd09-4328-82e3-09f281044ed5-kube-api-access-8fnqf\") on node \"crc\" DevicePath \"\"" Nov 21 11:09:04 crc kubenswrapper[4972]: I1121 11:09:04.563382 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4d804a6a8be1c9af5bf8a8cefd425539dc155a95fa925f32057f1443f34fa87" Nov 21 11:09:04 crc kubenswrapper[4972]: I1121 11:09:04.563465 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 21 11:09:05 crc kubenswrapper[4972]: I1121 11:09:05.771306 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0e04be9-fd09-4328-82e3-09f281044ed5" path="/var/lib/kubelet/pods/d0e04be9-fd09-4328-82e3-09f281044ed5/volumes" Nov 21 11:09:07 crc kubenswrapper[4972]: I1121 11:09:07.759636 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:09:07 crc kubenswrapper[4972]: E1121 11:09:07.760145 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:09:10 crc kubenswrapper[4972]: I1121 11:09:10.593022 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-psg8b" Nov 21 11:09:10 crc kubenswrapper[4972]: I1121 11:09:10.593433 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-psg8b" Nov 21 11:09:10 crc kubenswrapper[4972]: I1121 11:09:10.668369 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-psg8b" Nov 21 11:09:10 crc kubenswrapper[4972]: I1121 11:09:10.737633 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-psg8b" Nov 21 11:09:10 crc kubenswrapper[4972]: I1121 11:09:10.919167 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-psg8b"] Nov 21 11:09:12 crc kubenswrapper[4972]: I1121 11:09:12.638786 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-psg8b" podUID="ac3dbca7-7ad9-4058-8250-85c6185e5f0b" containerName="registry-server" containerID="cri-o://a2274b77878047215e1e35fb7971208d515c221e1b4669ea9a8e263f3639f0a9" gracePeriod=2 Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.169270 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-psg8b" Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.369162 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n8s4\" (UniqueName: \"kubernetes.io/projected/ac3dbca7-7ad9-4058-8250-85c6185e5f0b-kube-api-access-7n8s4\") pod \"ac3dbca7-7ad9-4058-8250-85c6185e5f0b\" (UID: \"ac3dbca7-7ad9-4058-8250-85c6185e5f0b\") " Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.369338 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac3dbca7-7ad9-4058-8250-85c6185e5f0b-catalog-content\") pod \"ac3dbca7-7ad9-4058-8250-85c6185e5f0b\" (UID: \"ac3dbca7-7ad9-4058-8250-85c6185e5f0b\") " Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.369508 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac3dbca7-7ad9-4058-8250-85c6185e5f0b-utilities\") pod \"ac3dbca7-7ad9-4058-8250-85c6185e5f0b\" (UID: \"ac3dbca7-7ad9-4058-8250-85c6185e5f0b\") " Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.371295 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac3dbca7-7ad9-4058-8250-85c6185e5f0b-utilities" (OuterVolumeSpecName: "utilities") pod "ac3dbca7-7ad9-4058-8250-85c6185e5f0b" (UID: "ac3dbca7-7ad9-4058-8250-85c6185e5f0b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.379334 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac3dbca7-7ad9-4058-8250-85c6185e5f0b-kube-api-access-7n8s4" (OuterVolumeSpecName: "kube-api-access-7n8s4") pod "ac3dbca7-7ad9-4058-8250-85c6185e5f0b" (UID: "ac3dbca7-7ad9-4058-8250-85c6185e5f0b"). InnerVolumeSpecName "kube-api-access-7n8s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.438148 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac3dbca7-7ad9-4058-8250-85c6185e5f0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac3dbca7-7ad9-4058-8250-85c6185e5f0b" (UID: "ac3dbca7-7ad9-4058-8250-85c6185e5f0b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.472151 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac3dbca7-7ad9-4058-8250-85c6185e5f0b-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.472212 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n8s4\" (UniqueName: \"kubernetes.io/projected/ac3dbca7-7ad9-4058-8250-85c6185e5f0b-kube-api-access-7n8s4\") on node \"crc\" DevicePath \"\"" Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.472236 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac3dbca7-7ad9-4058-8250-85c6185e5f0b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.654639 4972 generic.go:334] "Generic (PLEG): container finished" podID="ac3dbca7-7ad9-4058-8250-85c6185e5f0b" containerID="a2274b77878047215e1e35fb7971208d515c221e1b4669ea9a8e263f3639f0a9" exitCode=0 Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.654705 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psg8b" event={"ID":"ac3dbca7-7ad9-4058-8250-85c6185e5f0b","Type":"ContainerDied","Data":"a2274b77878047215e1e35fb7971208d515c221e1b4669ea9a8e263f3639f0a9"} Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.654741 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-psg8b" Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.654771 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psg8b" event={"ID":"ac3dbca7-7ad9-4058-8250-85c6185e5f0b","Type":"ContainerDied","Data":"d10a7145bfeb729aa7eff0c66a1a8269b9df72d387bfb053e44771d970063865"} Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.654814 4972 scope.go:117] "RemoveContainer" containerID="a2274b77878047215e1e35fb7971208d515c221e1b4669ea9a8e263f3639f0a9" Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.695616 4972 scope.go:117] "RemoveContainer" containerID="165783e9c80249a1a55d3dc63164158671a2d127ce9422ba56a8ee475bb1a4ee" Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.721517 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-psg8b"] Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.731214 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-psg8b"] Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.747201 4972 scope.go:117] "RemoveContainer" containerID="24525964afe455031a3d8ec14eb2829e524375958e58fd47a74c1a6f08e6de3a" Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.776420 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac3dbca7-7ad9-4058-8250-85c6185e5f0b" path="/var/lib/kubelet/pods/ac3dbca7-7ad9-4058-8250-85c6185e5f0b/volumes" Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.787141 4972 scope.go:117] "RemoveContainer" containerID="a2274b77878047215e1e35fb7971208d515c221e1b4669ea9a8e263f3639f0a9" Nov 21 11:09:13 crc kubenswrapper[4972]: E1121 11:09:13.788130 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2274b77878047215e1e35fb7971208d515c221e1b4669ea9a8e263f3639f0a9\": container with ID 
starting with a2274b77878047215e1e35fb7971208d515c221e1b4669ea9a8e263f3639f0a9 not found: ID does not exist" containerID="a2274b77878047215e1e35fb7971208d515c221e1b4669ea9a8e263f3639f0a9" Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.788205 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2274b77878047215e1e35fb7971208d515c221e1b4669ea9a8e263f3639f0a9"} err="failed to get container status \"a2274b77878047215e1e35fb7971208d515c221e1b4669ea9a8e263f3639f0a9\": rpc error: code = NotFound desc = could not find container \"a2274b77878047215e1e35fb7971208d515c221e1b4669ea9a8e263f3639f0a9\": container with ID starting with a2274b77878047215e1e35fb7971208d515c221e1b4669ea9a8e263f3639f0a9 not found: ID does not exist" Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.788250 4972 scope.go:117] "RemoveContainer" containerID="165783e9c80249a1a55d3dc63164158671a2d127ce9422ba56a8ee475bb1a4ee" Nov 21 11:09:13 crc kubenswrapper[4972]: E1121 11:09:13.788895 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"165783e9c80249a1a55d3dc63164158671a2d127ce9422ba56a8ee475bb1a4ee\": container with ID starting with 165783e9c80249a1a55d3dc63164158671a2d127ce9422ba56a8ee475bb1a4ee not found: ID does not exist" containerID="165783e9c80249a1a55d3dc63164158671a2d127ce9422ba56a8ee475bb1a4ee" Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.788985 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"165783e9c80249a1a55d3dc63164158671a2d127ce9422ba56a8ee475bb1a4ee"} err="failed to get container status \"165783e9c80249a1a55d3dc63164158671a2d127ce9422ba56a8ee475bb1a4ee\": rpc error: code = NotFound desc = could not find container \"165783e9c80249a1a55d3dc63164158671a2d127ce9422ba56a8ee475bb1a4ee\": container with ID starting with 165783e9c80249a1a55d3dc63164158671a2d127ce9422ba56a8ee475bb1a4ee not found: ID does not exist" Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.789062 4972 scope.go:117] "RemoveContainer" containerID="24525964afe455031a3d8ec14eb2829e524375958e58fd47a74c1a6f08e6de3a" Nov 21 11:09:13 crc kubenswrapper[4972]: E1121 11:09:13.789726 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24525964afe455031a3d8ec14eb2829e524375958e58fd47a74c1a6f08e6de3a\": container with ID starting with 24525964afe455031a3d8ec14eb2829e524375958e58fd47a74c1a6f08e6de3a not found: ID does not exist" containerID="24525964afe455031a3d8ec14eb2829e524375958e58fd47a74c1a6f08e6de3a" Nov 21 11:09:13 crc kubenswrapper[4972]: I1121 11:09:13.789791 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24525964afe455031a3d8ec14eb2829e524375958e58fd47a74c1a6f08e6de3a"} err="failed to get container status \"24525964afe455031a3d8ec14eb2829e524375958e58fd47a74c1a6f08e6de3a\": rpc error: code = NotFound desc = could not find container \"24525964afe455031a3d8ec14eb2829e524375958e58fd47a74c1a6f08e6de3a\": container with ID starting with 24525964afe455031a3d8ec14eb2829e524375958e58fd47a74c1a6f08e6de3a not found: ID does not exist" Nov 21 11:09:22 crc kubenswrapper[4972]: I1121 11:09:22.760244 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:09:22 crc kubenswrapper[4972]: E1121 11:09:22.761555 4972 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:09:26 crc kubenswrapper[4972]: I1121 11:09:26.200599 4972 scope.go:117] "RemoveContainer" containerID="8b8dc9ad424bdcd83869725ac7fa68fbbfef9537a427cec407b28c0b1f1700cb" Nov 21 11:09:34 crc kubenswrapper[4972]: I1121 11:09:34.759594 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:09:34 crc kubenswrapper[4972]: E1121 11:09:34.760497 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.867893 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 21 11:09:42 crc kubenswrapper[4972]: E1121 11:09:42.869210 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3dbca7-7ad9-4058-8250-85c6185e5f0b" containerName="registry-server" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.869238 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3dbca7-7ad9-4058-8250-85c6185e5f0b" containerName="registry-server" Nov 21 11:09:42 crc kubenswrapper[4972]: E1121 11:09:42.869265 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3dbca7-7ad9-4058-8250-85c6185e5f0b" containerName="extract-utilities" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.869280 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3dbca7-7ad9-4058-8250-85c6185e5f0b" containerName="extract-utilities" Nov 21 11:09:42 crc kubenswrapper[4972]: E1121 11:09:42.869312 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3dbca7-7ad9-4058-8250-85c6185e5f0b" containerName="extract-content" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.869328 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3dbca7-7ad9-4058-8250-85c6185e5f0b" containerName="extract-content" Nov 21 11:09:42 crc kubenswrapper[4972]: E1121 11:09:42.869382 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e04be9-fd09-4328-82e3-09f281044ed5" containerName="mariadb-client" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.869398 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e04be9-fd09-4328-82e3-09f281044ed5" containerName="mariadb-client" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.869668 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac3dbca7-7ad9-4058-8250-85c6185e5f0b" containerName="registry-server" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.869694 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e04be9-fd09-4328-82e3-09f281044ed5" containerName="mariadb-client" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.871306 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.874767 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-xvrjn" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.875961 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.880056 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.888033 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.890897 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.899062 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.901335 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.908958 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.917815 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.954981 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.996519 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f1fd91b3-54a9-4ec1-8ed4-ff412b3aeec2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1fd91b3-54a9-4ec1-8ed4-ff412b3aeec2\") pod \"ovsdbserver-nb-0\" (UID: \"3f1b59de-109b-4f8b-9104-6b93a7beea77\") " pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.996585 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bfc218b3-99e0-4b47-b6ca-1bd32fbc898d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfc218b3-99e0-4b47-b6ca-1bd32fbc898d\") pod \"ovsdbserver-nb-2\" (UID: \"2419184d-1b5f-43ea-8184-5856c198c4fa\") " pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.996614 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cd85acf3-8c24-4056-a908-2430f3f12bb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd85acf3-8c24-4056-a908-2430f3f12bb4\") pod \"ovsdbserver-nb-1\" (UID: \"11962c16-54cd-4cd8-8794-49e5e30520c8\") " pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.996638 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2419184d-1b5f-43ea-8184-5856c198c4fa-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"2419184d-1b5f-43ea-8184-5856c198c4fa\") " pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.996664 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/11962c16-54cd-4cd8-8794-49e5e30520c8-config\") pod \"ovsdbserver-nb-1\" (UID: \"11962c16-54cd-4cd8-8794-49e5e30520c8\") " pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.996692 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/11962c16-54cd-4cd8-8794-49e5e30520c8-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"11962c16-54cd-4cd8-8794-49e5e30520c8\") " pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.996718 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3f1b59de-109b-4f8b-9104-6b93a7beea77-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3f1b59de-109b-4f8b-9104-6b93a7beea77\") " pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.996745 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2419184d-1b5f-43ea-8184-5856c198c4fa-config\") pod \"ovsdbserver-nb-2\" (UID: \"2419184d-1b5f-43ea-8184-5856c198c4fa\") " pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.996766 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2419184d-1b5f-43ea-8184-5856c198c4fa-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"2419184d-1b5f-43ea-8184-5856c198c4fa\") " pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.996797 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f1b59de-109b-4f8b-9104-6b93a7beea77-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3f1b59de-109b-4f8b-9104-6b93a7beea77\") " pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.996820 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qr6l\" (UniqueName: \"kubernetes.io/projected/11962c16-54cd-4cd8-8794-49e5e30520c8-kube-api-access-2qr6l\") pod \"ovsdbserver-nb-1\" (UID: \"11962c16-54cd-4cd8-8794-49e5e30520c8\") " pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.996861 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2419184d-1b5f-43ea-8184-5856c198c4fa-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"2419184d-1b5f-43ea-8184-5856c198c4fa\") " pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.996914 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5tsn\" (UniqueName: \"kubernetes.io/projected/3f1b59de-109b-4f8b-9104-6b93a7beea77-kube-api-access-c5tsn\") pod \"ovsdbserver-nb-0\" (UID: \"3f1b59de-109b-4f8b-9104-6b93a7beea77\") " pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.996979 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl92k\" (UniqueName: \"kubernetes.io/projected/2419184d-1b5f-43ea-8184-5856c198c4fa-kube-api-access-kl92k\") pod \"ovsdbserver-nb-2\" (UID: 
\"2419184d-1b5f-43ea-8184-5856c198c4fa\") " pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.997028 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1b59de-109b-4f8b-9104-6b93a7beea77-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3f1b59de-109b-4f8b-9104-6b93a7beea77\") " pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.997051 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11962c16-54cd-4cd8-8794-49e5e30520c8-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"11962c16-54cd-4cd8-8794-49e5e30520c8\") " pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.997281 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11962c16-54cd-4cd8-8794-49e5e30520c8-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"11962c16-54cd-4cd8-8794-49e5e30520c8\") " pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:42 crc kubenswrapper[4972]: I1121 11:09:42.997345 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f1b59de-109b-4f8b-9104-6b93a7beea77-config\") pod \"ovsdbserver-nb-0\" (UID: \"3f1b59de-109b-4f8b-9104-6b93a7beea77\") " pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.056534 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.058618 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.062650 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.065742 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.065927 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.066142 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-79phf" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.098820 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.099391 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11962c16-54cd-4cd8-8794-49e5e30520c8-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"11962c16-54cd-4cd8-8794-49e5e30520c8\") " pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.099461 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f1b59de-109b-4f8b-9104-6b93a7beea77-config\") pod \"ovsdbserver-nb-0\" (UID: \"3f1b59de-109b-4f8b-9104-6b93a7beea77\") " pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.099512 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f1fd91b3-54a9-4ec1-8ed4-ff412b3aeec2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1fd91b3-54a9-4ec1-8ed4-ff412b3aeec2\") pod \"ovsdbserver-nb-0\" (UID: \"3f1b59de-109b-4f8b-9104-6b93a7beea77\") " pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.099607 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ddd35b90-b72d-4b9c-8380-71c6b39a8a75-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ddd35b90-b72d-4b9c-8380-71c6b39a8a75\") " pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.099721 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bfc218b3-99e0-4b47-b6ca-1bd32fbc898d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfc218b3-99e0-4b47-b6ca-1bd32fbc898d\") pod \"ovsdbserver-nb-2\" (UID: \"2419184d-1b5f-43ea-8184-5856c198c4fa\") " pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.099765 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cd85acf3-8c24-4056-a908-2430f3f12bb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd85acf3-8c24-4056-a908-2430f3f12bb4\") pod \"ovsdbserver-nb-1\" (UID: \"11962c16-54cd-4cd8-8794-49e5e30520c8\") " pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.099804 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2419184d-1b5f-43ea-8184-5856c198c4fa-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"2419184d-1b5f-43ea-8184-5856c198c4fa\") " 
pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.099866 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11962c16-54cd-4cd8-8794-49e5e30520c8-config\") pod \"ovsdbserver-nb-1\" (UID: \"11962c16-54cd-4cd8-8794-49e5e30520c8\") " pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.099909 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/11962c16-54cd-4cd8-8794-49e5e30520c8-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"11962c16-54cd-4cd8-8794-49e5e30520c8\") " pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.099947 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3f1b59de-109b-4f8b-9104-6b93a7beea77-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3f1b59de-109b-4f8b-9104-6b93a7beea77\") " pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.099983 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddd35b90-b72d-4b9c-8380-71c6b39a8a75-config\") pod \"ovsdbserver-sb-0\" (UID: \"ddd35b90-b72d-4b9c-8380-71c6b39a8a75\") " pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.100027 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2419184d-1b5f-43ea-8184-5856c198c4fa-config\") pod \"ovsdbserver-nb-2\" (UID: \"2419184d-1b5f-43ea-8184-5856c198c4fa\") " pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.100061 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2419184d-1b5f-43ea-8184-5856c198c4fa-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"2419184d-1b5f-43ea-8184-5856c198c4fa\") " pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.100090 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd35b90-b72d-4b9c-8380-71c6b39a8a75-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ddd35b90-b72d-4b9c-8380-71c6b39a8a75\") " pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.100124 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng4wk\" (UniqueName: \"kubernetes.io/projected/ddd35b90-b72d-4b9c-8380-71c6b39a8a75-kube-api-access-ng4wk\") pod \"ovsdbserver-sb-0\" (UID: \"ddd35b90-b72d-4b9c-8380-71c6b39a8a75\") " pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.100172 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f1b59de-109b-4f8b-9104-6b93a7beea77-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3f1b59de-109b-4f8b-9104-6b93a7beea77\") " pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.100205 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qr6l\" (UniqueName: 
\"kubernetes.io/projected/11962c16-54cd-4cd8-8794-49e5e30520c8-kube-api-access-2qr6l\") pod \"ovsdbserver-nb-1\" (UID: \"11962c16-54cd-4cd8-8794-49e5e30520c8\") " pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.100248 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2419184d-1b5f-43ea-8184-5856c198c4fa-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"2419184d-1b5f-43ea-8184-5856c198c4fa\") " pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.100329 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5tsn\" (UniqueName: \"kubernetes.io/projected/3f1b59de-109b-4f8b-9104-6b93a7beea77-kube-api-access-c5tsn\") pod \"ovsdbserver-nb-0\" (UID: \"3f1b59de-109b-4f8b-9104-6b93a7beea77\") " pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.100367 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl92k\" (UniqueName: \"kubernetes.io/projected/2419184d-1b5f-43ea-8184-5856c198c4fa-kube-api-access-kl92k\") pod \"ovsdbserver-nb-2\" (UID: \"2419184d-1b5f-43ea-8184-5856c198c4fa\") " pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.100413 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4b2d0590-53ce-4b37-b34a-3ec5ea684b11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b2d0590-53ce-4b37-b34a-3ec5ea684b11\") pod \"ovsdbserver-sb-0\" (UID: \"ddd35b90-b72d-4b9c-8380-71c6b39a8a75\") " pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.100446 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1b59de-109b-4f8b-9104-6b93a7beea77-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3f1b59de-109b-4f8b-9104-6b93a7beea77\") " pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.100477 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11962c16-54cd-4cd8-8794-49e5e30520c8-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"11962c16-54cd-4cd8-8794-49e5e30520c8\") " pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.100521 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ddd35b90-b72d-4b9c-8380-71c6b39a8a75-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ddd35b90-b72d-4b9c-8380-71c6b39a8a75\") " pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.100664 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f1b59de-109b-4f8b-9104-6b93a7beea77-config\") pod \"ovsdbserver-nb-0\" (UID: \"3f1b59de-109b-4f8b-9104-6b93a7beea77\") " pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.100757 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11962c16-54cd-4cd8-8794-49e5e30520c8-config\") pod \"ovsdbserver-nb-1\" (UID: \"11962c16-54cd-4cd8-8794-49e5e30520c8\") " pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: 
I1121 11:09:43.101227 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2419184d-1b5f-43ea-8184-5856c198c4fa-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"2419184d-1b5f-43ea-8184-5856c198c4fa\") " pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.101301 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/11962c16-54cd-4cd8-8794-49e5e30520c8-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"11962c16-54cd-4cd8-8794-49e5e30520c8\") " pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.101766 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3f1b59de-109b-4f8b-9104-6b93a7beea77-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3f1b59de-109b-4f8b-9104-6b93a7beea77\") " pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.101880 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f1b59de-109b-4f8b-9104-6b93a7beea77-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3f1b59de-109b-4f8b-9104-6b93a7beea77\") " pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.102340 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2419184d-1b5f-43ea-8184-5856c198c4fa-config\") pod \"ovsdbserver-nb-2\" (UID: \"2419184d-1b5f-43ea-8184-5856c198c4fa\") " pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.102523 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2419184d-1b5f-43ea-8184-5856c198c4fa-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"2419184d-1b5f-43ea-8184-5856c198c4fa\") " pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.103991 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.108644 4972 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.108692 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f1fd91b3-54a9-4ec1-8ed4-ff412b3aeec2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1fd91b3-54a9-4ec1-8ed4-ff412b3aeec2\") pod \"ovsdbserver-nb-0\" (UID: \"3f1b59de-109b-4f8b-9104-6b93a7beea77\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ba92fa53b4284b00461baa33ab3f3a97c769248293a39d05f11e45f414574ffd/globalmount\"" pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.112270 4972 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.112329 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cd85acf3-8c24-4056-a908-2430f3f12bb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd85acf3-8c24-4056-a908-2430f3f12bb4\") pod \"ovsdbserver-nb-1\" (UID: \"11962c16-54cd-4cd8-8794-49e5e30520c8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ec67a973a54a1d7360ec8247386a4466303a19511b57da3d1d88afb6eac42a3b/globalmount\"" pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.112274 4972 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.112449 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bfc218b3-99e0-4b47-b6ca-1bd32fbc898d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfc218b3-99e0-4b47-b6ca-1bd32fbc898d\") pod \"ovsdbserver-nb-2\" (UID: \"2419184d-1b5f-43ea-8184-5856c198c4fa\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0d0f4c0076f74a5776ce1ecf17d441da97f37c19ee0a41af80d8ca7d89d09d63/globalmount\"" pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.114378 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2419184d-1b5f-43ea-8184-5856c198c4fa-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"2419184d-1b5f-43ea-8184-5856c198c4fa\") " pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.115136 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11962c16-54cd-4cd8-8794-49e5e30520c8-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"11962c16-54cd-4cd8-8794-49e5e30520c8\") " pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.122062 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1b59de-109b-4f8b-9104-6b93a7beea77-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3f1b59de-109b-4f8b-9104-6b93a7beea77\") " pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.122130 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11962c16-54cd-4cd8-8794-49e5e30520c8-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"11962c16-54cd-4cd8-8794-49e5e30520c8\") " pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.122272 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.124495 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.124998 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl92k\" (UniqueName: \"kubernetes.io/projected/2419184d-1b5f-43ea-8184-5856c198c4fa-kube-api-access-kl92k\") pod \"ovsdbserver-nb-2\" (UID: \"2419184d-1b5f-43ea-8184-5856c198c4fa\") " pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.128298 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5tsn\" (UniqueName: \"kubernetes.io/projected/3f1b59de-109b-4f8b-9104-6b93a7beea77-kube-api-access-c5tsn\") pod \"ovsdbserver-nb-0\" (UID: \"3f1b59de-109b-4f8b-9104-6b93a7beea77\") " pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.137994 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qr6l\" (UniqueName: \"kubernetes.io/projected/11962c16-54cd-4cd8-8794-49e5e30520c8-kube-api-access-2qr6l\") pod \"ovsdbserver-nb-1\" (UID: \"11962c16-54cd-4cd8-8794-49e5e30520c8\") " pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.147562 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.187936 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.192256 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cd85acf3-8c24-4056-a908-2430f3f12bb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd85acf3-8c24-4056-a908-2430f3f12bb4\") pod \"ovsdbserver-nb-1\" (UID: \"11962c16-54cd-4cd8-8794-49e5e30520c8\") " pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.192779 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bfc218b3-99e0-4b47-b6ca-1bd32fbc898d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfc218b3-99e0-4b47-b6ca-1bd32fbc898d\") pod \"ovsdbserver-nb-2\" (UID: \"2419184d-1b5f-43ea-8184-5856c198c4fa\") " pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.197415 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f1fd91b3-54a9-4ec1-8ed4-ff412b3aeec2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1fd91b3-54a9-4ec1-8ed4-ff412b3aeec2\") pod \"ovsdbserver-nb-0\" (UID: \"3f1b59de-109b-4f8b-9104-6b93a7beea77\") " pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.201414 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ddd35b90-b72d-4b9c-8380-71c6b39a8a75-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ddd35b90-b72d-4b9c-8380-71c6b39a8a75\") " pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.201482 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddd35b90-b72d-4b9c-8380-71c6b39a8a75-config\") pod \"ovsdbserver-sb-0\" (UID: \"ddd35b90-b72d-4b9c-8380-71c6b39a8a75\") " pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.201516 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ddd35b90-b72d-4b9c-8380-71c6b39a8a75-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ddd35b90-b72d-4b9c-8380-71c6b39a8a75\") " pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.201540 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng4wk\" (UniqueName: \"kubernetes.io/projected/ddd35b90-b72d-4b9c-8380-71c6b39a8a75-kube-api-access-ng4wk\") pod \"ovsdbserver-sb-0\" (UID: \"ddd35b90-b72d-4b9c-8380-71c6b39a8a75\") " pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.202323 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4b2d0590-53ce-4b37-b34a-3ec5ea684b11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b2d0590-53ce-4b37-b34a-3ec5ea684b11\") pod \"ovsdbserver-sb-0\" (UID: \"ddd35b90-b72d-4b9c-8380-71c6b39a8a75\") " pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.202379 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ddd35b90-b72d-4b9c-8380-71c6b39a8a75-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ddd35b90-b72d-4b9c-8380-71c6b39a8a75\") " pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.202522 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddd35b90-b72d-4b9c-8380-71c6b39a8a75-config\") pod \"ovsdbserver-sb-0\" (UID: \"ddd35b90-b72d-4b9c-8380-71c6b39a8a75\") " pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.202689 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ddd35b90-b72d-4b9c-8380-71c6b39a8a75-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ddd35b90-b72d-4b9c-8380-71c6b39a8a75\") " pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.204032 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ddd35b90-b72d-4b9c-8380-71c6b39a8a75-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ddd35b90-b72d-4b9c-8380-71c6b39a8a75\") " pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.204673 4972 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.204700 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4b2d0590-53ce-4b37-b34a-3ec5ea684b11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b2d0590-53ce-4b37-b34a-3ec5ea684b11\") pod \"ovsdbserver-sb-0\" (UID: \"ddd35b90-b72d-4b9c-8380-71c6b39a8a75\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fa18ddeb5865a2460d3212ed9b37935bec67d6b7d8935829922dd27ae7745a9d/globalmount\"" pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.207930 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd35b90-b72d-4b9c-8380-71c6b39a8a75-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ddd35b90-b72d-4b9c-8380-71c6b39a8a75\") " pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.211537 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.216019 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng4wk\" (UniqueName: \"kubernetes.io/projected/ddd35b90-b72d-4b9c-8380-71c6b39a8a75-kube-api-access-ng4wk\") pod \"ovsdbserver-sb-0\" (UID: \"ddd35b90-b72d-4b9c-8380-71c6b39a8a75\") " pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.230050 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4b2d0590-53ce-4b37-b34a-3ec5ea684b11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b2d0590-53ce-4b37-b34a-3ec5ea684b11\") pod \"ovsdbserver-sb-0\" (UID: \"ddd35b90-b72d-4b9c-8380-71c6b39a8a75\") " pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.237780 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.261072 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.303555 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfa07300-1e16-406a-9376-362cf3324e4d-config\") pod \"ovsdbserver-sb-2\" (UID: \"cfa07300-1e16-406a-9376-362cf3324e4d\") " pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.303883 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/799f30f3-66a3-440a-891c-fb28258284f1-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"799f30f3-66a3-440a-891c-fb28258284f1\") " pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.303908 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/799f30f3-66a3-440a-891c-fb28258284f1-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"799f30f3-66a3-440a-891c-fb28258284f1\") " pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.303932 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-06fa9b8f-933c-486b-ac0f-a181b9c328f9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-06fa9b8f-933c-486b-ac0f-a181b9c328f9\") pod \"ovsdbserver-sb-2\" (UID: \"cfa07300-1e16-406a-9376-362cf3324e4d\") " pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.303974 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cfa07300-1e16-406a-9376-362cf3324e4d-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"cfa07300-1e16-406a-9376-362cf3324e4d\") " pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.304031 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nwld\" (UniqueName: \"kubernetes.io/projected/cfa07300-1e16-406a-9376-362cf3324e4d-kube-api-access-5nwld\") pod \"ovsdbserver-sb-2\" (UID: \"cfa07300-1e16-406a-9376-362cf3324e4d\") " pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.304051 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/799f30f3-66a3-440a-891c-fb28258284f1-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"799f30f3-66a3-440a-891c-fb28258284f1\") " pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.304074 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1492ac6f-5801-4b0e-a5b1-7252c72e27d2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1492ac6f-5801-4b0e-a5b1-7252c72e27d2\") pod \"ovsdbserver-sb-1\" (UID: \"799f30f3-66a3-440a-891c-fb28258284f1\") " pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.304090 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2sbr\" (UniqueName: \"kubernetes.io/projected/799f30f3-66a3-440a-891c-fb28258284f1-kube-api-access-j2sbr\") pod \"ovsdbserver-sb-1\" (UID: \"799f30f3-66a3-440a-891c-fb28258284f1\") " 
pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.304105 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cfa07300-1e16-406a-9376-362cf3324e4d-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"cfa07300-1e16-406a-9376-362cf3324e4d\") " pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.304125 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfa07300-1e16-406a-9376-362cf3324e4d-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"cfa07300-1e16-406a-9376-362cf3324e4d\") " pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.304159 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799f30f3-66a3-440a-891c-fb28258284f1-config\") pod \"ovsdbserver-sb-1\" (UID: \"799f30f3-66a3-440a-891c-fb28258284f1\") " pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.381854 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.406112 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/799f30f3-66a3-440a-891c-fb28258284f1-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"799f30f3-66a3-440a-891c-fb28258284f1\") " pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.406159 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/799f30f3-66a3-440a-891c-fb28258284f1-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"799f30f3-66a3-440a-891c-fb28258284f1\") " pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.406183 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-06fa9b8f-933c-486b-ac0f-a181b9c328f9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-06fa9b8f-933c-486b-ac0f-a181b9c328f9\") pod \"ovsdbserver-sb-2\" (UID: \"cfa07300-1e16-406a-9376-362cf3324e4d\") " pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.406235 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cfa07300-1e16-406a-9376-362cf3324e4d-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"cfa07300-1e16-406a-9376-362cf3324e4d\") " pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.406268 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nwld\" (UniqueName: \"kubernetes.io/projected/cfa07300-1e16-406a-9376-362cf3324e4d-kube-api-access-5nwld\") pod \"ovsdbserver-sb-2\" (UID: \"cfa07300-1e16-406a-9376-362cf3324e4d\") " pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.406285 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/799f30f3-66a3-440a-891c-fb28258284f1-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"799f30f3-66a3-440a-891c-fb28258284f1\") " 
pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.406321 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1492ac6f-5801-4b0e-a5b1-7252c72e27d2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1492ac6f-5801-4b0e-a5b1-7252c72e27d2\") pod \"ovsdbserver-sb-1\" (UID: \"799f30f3-66a3-440a-891c-fb28258284f1\") " pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.406340 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2sbr\" (UniqueName: \"kubernetes.io/projected/799f30f3-66a3-440a-891c-fb28258284f1-kube-api-access-j2sbr\") pod \"ovsdbserver-sb-1\" (UID: \"799f30f3-66a3-440a-891c-fb28258284f1\") " pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.406354 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cfa07300-1e16-406a-9376-362cf3324e4d-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"cfa07300-1e16-406a-9376-362cf3324e4d\") " pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.407091 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/799f30f3-66a3-440a-891c-fb28258284f1-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"799f30f3-66a3-440a-891c-fb28258284f1\") " pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.408608 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/799f30f3-66a3-440a-891c-fb28258284f1-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"799f30f3-66a3-440a-891c-fb28258284f1\") " pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.410008 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cfa07300-1e16-406a-9376-362cf3324e4d-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"cfa07300-1e16-406a-9376-362cf3324e4d\") " pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.410501 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfa07300-1e16-406a-9376-362cf3324e4d-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"cfa07300-1e16-406a-9376-362cf3324e4d\") " pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.410725 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799f30f3-66a3-440a-891c-fb28258284f1-config\") pod \"ovsdbserver-sb-1\" (UID: \"799f30f3-66a3-440a-891c-fb28258284f1\") " pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.411710 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799f30f3-66a3-440a-891c-fb28258284f1-config\") pod \"ovsdbserver-sb-1\" (UID: \"799f30f3-66a3-440a-891c-fb28258284f1\") " pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.411866 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfa07300-1e16-406a-9376-362cf3324e4d-config\") pod \"ovsdbserver-sb-2\" 
(UID: \"cfa07300-1e16-406a-9376-362cf3324e4d\") " pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.413796 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/799f30f3-66a3-440a-891c-fb28258284f1-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"799f30f3-66a3-440a-891c-fb28258284f1\") " pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.414999 4972 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.415067 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1492ac6f-5801-4b0e-a5b1-7252c72e27d2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1492ac6f-5801-4b0e-a5b1-7252c72e27d2\") pod \"ovsdbserver-sb-1\" (UID: \"799f30f3-66a3-440a-891c-fb28258284f1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/47898e9d8e4626aa3dde87cf24df855b5c9d8d6b478f7808a67c6a7ff16c67ef/globalmount\"" pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.415184 4972 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.415217 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-06fa9b8f-933c-486b-ac0f-a181b9c328f9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-06fa9b8f-933c-486b-ac0f-a181b9c328f9\") pod \"ovsdbserver-sb-2\" (UID: \"cfa07300-1e16-406a-9376-362cf3324e4d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/48bd7dc8fc41ed5dae30009dd82716d607fe889265dcbc030c7681dd447de758/globalmount\"" pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.420305 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cfa07300-1e16-406a-9376-362cf3324e4d-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"cfa07300-1e16-406a-9376-362cf3324e4d\") " pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.421722 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfa07300-1e16-406a-9376-362cf3324e4d-config\") pod \"ovsdbserver-sb-2\" (UID: \"cfa07300-1e16-406a-9376-362cf3324e4d\") " pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.423812 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfa07300-1e16-406a-9376-362cf3324e4d-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"cfa07300-1e16-406a-9376-362cf3324e4d\") " pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.426465 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nwld\" (UniqueName: \"kubernetes.io/projected/cfa07300-1e16-406a-9376-362cf3324e4d-kube-api-access-5nwld\") pod \"ovsdbserver-sb-2\" (UID: \"cfa07300-1e16-406a-9376-362cf3324e4d\") " pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.426903 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-j2sbr\" (UniqueName: \"kubernetes.io/projected/799f30f3-66a3-440a-891c-fb28258284f1-kube-api-access-j2sbr\") pod \"ovsdbserver-sb-1\" (UID: \"799f30f3-66a3-440a-891c-fb28258284f1\") " pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.454731 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1492ac6f-5801-4b0e-a5b1-7252c72e27d2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1492ac6f-5801-4b0e-a5b1-7252c72e27d2\") pod \"ovsdbserver-sb-1\" (UID: \"799f30f3-66a3-440a-891c-fb28258284f1\") " pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.460423 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-06fa9b8f-933c-486b-ac0f-a181b9c328f9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-06fa9b8f-933c-486b-ac0f-a181b9c328f9\") pod \"ovsdbserver-sb-2\" (UID: \"cfa07300-1e16-406a-9376-362cf3324e4d\") " pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.595420 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.605536 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.801985 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.877499 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.966793 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ddd35b90-b72d-4b9c-8380-71c6b39a8a75","Type":"ContainerStarted","Data":"aed1d1d2c15e439cd7e0ec40b41f81322d8dfde83626d4da3eee67659b6fb4d3"} Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.966852 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ddd35b90-b72d-4b9c-8380-71c6b39a8a75","Type":"ContainerStarted","Data":"a85496cbe2dbd877a20ce2422dacae2353de08951cdf436c6addc100a6419445"} Nov 21 11:09:43 crc kubenswrapper[4972]: I1121 11:09:43.973045 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3f1b59de-109b-4f8b-9104-6b93a7beea77","Type":"ContainerStarted","Data":"f4556035bd8955ee5e1ce4866c2704919431b4d9a1072a94040ecaf5a461e2ff"} Nov 21 11:09:44 crc kubenswrapper[4972]: I1121 11:09:44.143049 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 21 11:09:44 crc kubenswrapper[4972]: W1121 11:09:44.152546 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfa07300_1e16_406a_9376_362cf3324e4d.slice/crio-263d3f91b75fc4725ef193d2bcc4e1c42db8901e807c8fa4acb2f6ec5e718692 WatchSource:0}: Error finding container 263d3f91b75fc4725ef193d2bcc4e1c42db8901e807c8fa4acb2f6ec5e718692: Status 404 returned error can't find the container with id 263d3f91b75fc4725ef193d2bcc4e1c42db8901e807c8fa4acb2f6ec5e718692 Nov 21 11:09:44 crc kubenswrapper[4972]: I1121 11:09:44.236549 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 21 11:09:44 crc kubenswrapper[4972]: W1121 11:09:44.247854 4972 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod799f30f3_66a3_440a_891c_fb28258284f1.slice/crio-68036e82a83f39f9db1a2ce86156e0ee19bd6d2f0f9d22578d7d55d48894f939 WatchSource:0}: Error finding container 68036e82a83f39f9db1a2ce86156e0ee19bd6d2f0f9d22578d7d55d48894f939: Status 404 returned error can't find the container with id 68036e82a83f39f9db1a2ce86156e0ee19bd6d2f0f9d22578d7d55d48894f939 Nov 21 11:09:44 crc kubenswrapper[4972]: I1121 11:09:44.581273 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 21 11:09:44 crc kubenswrapper[4972]: W1121 11:09:44.595032 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2419184d_1b5f_43ea_8184_5856c198c4fa.slice/crio-f76b7cc55809f793b3c6fd9304f09134d6d71609b68a17442879d4b38dc84b74 WatchSource:0}: Error finding container f76b7cc55809f793b3c6fd9304f09134d6d71609b68a17442879d4b38dc84b74: Status 404 returned error can't find the container with id f76b7cc55809f793b3c6fd9304f09134d6d71609b68a17442879d4b38dc84b74 Nov 21 11:09:44 crc kubenswrapper[4972]: I1121 11:09:44.726688 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 21 11:09:44 crc kubenswrapper[4972]: W1121 11:09:44.755721 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11962c16_54cd_4cd8_8794_49e5e30520c8.slice/crio-4d87af9e44c125faf346d5f7a61a116246073219dcf8488d9ccd1eebc66f6db1 WatchSource:0}: Error finding container 4d87af9e44c125faf346d5f7a61a116246073219dcf8488d9ccd1eebc66f6db1: Status 404 returned error can't find the container with id 4d87af9e44c125faf346d5f7a61a116246073219dcf8488d9ccd1eebc66f6db1 Nov 21 11:09:44 crc kubenswrapper[4972]: I1121 11:09:44.982958 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"11962c16-54cd-4cd8-8794-49e5e30520c8","Type":"ContainerStarted","Data":"78cbf2d24f32936bf68d02c8c599fdaee8879d2d0d486405957e3ca130fb0dc4"} Nov 21 11:09:44 crc kubenswrapper[4972]: I1121 11:09:44.983026 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"11962c16-54cd-4cd8-8794-49e5e30520c8","Type":"ContainerStarted","Data":"4d87af9e44c125faf346d5f7a61a116246073219dcf8488d9ccd1eebc66f6db1"} Nov 21 11:09:44 crc kubenswrapper[4972]: I1121 11:09:44.988979 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"799f30f3-66a3-440a-891c-fb28258284f1","Type":"ContainerStarted","Data":"68c36b39033ead87290e23af5b211cd8e68f164c2689aee0cecf23337fedb318"} Nov 21 11:09:44 crc kubenswrapper[4972]: I1121 11:09:44.989024 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"799f30f3-66a3-440a-891c-fb28258284f1","Type":"ContainerStarted","Data":"4a2f558eb95023fc17107b75d91829ac534bfc9d4be7957128caf690d5c480bb"} Nov 21 11:09:44 crc kubenswrapper[4972]: I1121 11:09:44.989034 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"799f30f3-66a3-440a-891c-fb28258284f1","Type":"ContainerStarted","Data":"68036e82a83f39f9db1a2ce86156e0ee19bd6d2f0f9d22578d7d55d48894f939"} Nov 21 11:09:44 crc kubenswrapper[4972]: I1121 11:09:44.993056 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"ddd35b90-b72d-4b9c-8380-71c6b39a8a75","Type":"ContainerStarted","Data":"e21e68df944f29853f999fcb87f29eda64e29f79413e532d86a9db4e08adb351"} Nov 21 11:09:44 crc kubenswrapper[4972]: I1121 11:09:44.994991 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"2419184d-1b5f-43ea-8184-5856c198c4fa","Type":"ContainerStarted","Data":"6d8436c693570f0346777eb9061aeda6e974a18db4a62a397cbdbc24015e3b6b"} Nov 21 11:09:44 crc kubenswrapper[4972]: I1121 11:09:44.995016 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"2419184d-1b5f-43ea-8184-5856c198c4fa","Type":"ContainerStarted","Data":"42b0c76ac4de8190008caed7c429b003c05a8f3f009eb554e3b6a7c9f7f9c345"} Nov 21 11:09:44 crc kubenswrapper[4972]: I1121 11:09:44.995025 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"2419184d-1b5f-43ea-8184-5856c198c4fa","Type":"ContainerStarted","Data":"f76b7cc55809f793b3c6fd9304f09134d6d71609b68a17442879d4b38dc84b74"} Nov 21 11:09:45 crc kubenswrapper[4972]: I1121 11:09:45.002994 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3f1b59de-109b-4f8b-9104-6b93a7beea77","Type":"ContainerStarted","Data":"8342a36d5fd35d9d6bb576e7ea41b30a21a9a833bffc955f2077fcdc8d022d5f"} Nov 21 11:09:45 crc kubenswrapper[4972]: I1121 11:09:45.003054 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3f1b59de-109b-4f8b-9104-6b93a7beea77","Type":"ContainerStarted","Data":"349197423e353059ac03c45551f6bbe44f2df8facc8d7edba90c8157d486913d"} Nov 21 11:09:45 crc kubenswrapper[4972]: I1121 11:09:45.004301 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"cfa07300-1e16-406a-9376-362cf3324e4d","Type":"ContainerStarted","Data":"f4c43f6681bda0092add242ab928b714e4fc10754756ca139ecfddd57b90aa56"} Nov 21 11:09:45 crc kubenswrapper[4972]: I1121 11:09:45.004345 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"cfa07300-1e16-406a-9376-362cf3324e4d","Type":"ContainerStarted","Data":"1737ea8651076c32a2faa20aa504402f170acbbd5efefd1f9c64544bc4fd7db8"} Nov 21 11:09:45 crc kubenswrapper[4972]: I1121 11:09:45.004359 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"cfa07300-1e16-406a-9376-362cf3324e4d","Type":"ContainerStarted","Data":"263d3f91b75fc4725ef193d2bcc4e1c42db8901e807c8fa4acb2f6ec5e718692"} Nov 21 11:09:45 crc kubenswrapper[4972]: I1121 11:09:45.009103 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.00909232 podStartE2EDuration="3.00909232s" podCreationTimestamp="2025-11-21 11:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:09:45.006586834 +0000 UTC m=+5330.115729342" watchObservedRunningTime="2025-11-21 11:09:45.00909232 +0000 UTC m=+5330.118234818" Nov 21 11:09:45 crc kubenswrapper[4972]: I1121 11:09:45.029635 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.029615213 podStartE2EDuration="3.029615213s" podCreationTimestamp="2025-11-21 11:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-21 11:09:45.027917098 +0000 UTC m=+5330.137059636" watchObservedRunningTime="2025-11-21 11:09:45.029615213 +0000 UTC m=+5330.138757711" Nov 21 11:09:45 crc kubenswrapper[4972]: I1121 11:09:45.059544 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.059526935 podStartE2EDuration="3.059526935s" podCreationTimestamp="2025-11-21 11:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:09:45.053736222 +0000 UTC m=+5330.162878730" watchObservedRunningTime="2025-11-21 11:09:45.059526935 +0000 UTC m=+5330.168669443" Nov 21 11:09:45 crc kubenswrapper[4972]: I1121 11:09:45.081392 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=4.081371934 podStartE2EDuration="4.081371934s" podCreationTimestamp="2025-11-21 11:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:09:45.078038376 +0000 UTC m=+5330.187180944" watchObservedRunningTime="2025-11-21 11:09:45.081371934 +0000 UTC m=+5330.190514422" Nov 21 11:09:45 crc kubenswrapper[4972]: I1121 11:09:45.099746 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=4.09972388 podStartE2EDuration="4.09972388s" podCreationTimestamp="2025-11-21 11:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:09:45.096262798 +0000 UTC m=+5330.205405296" watchObservedRunningTime="2025-11-21 11:09:45.09972388 +0000 UTC m=+5330.208866378" Nov 21 11:09:45 crc kubenswrapper[4972]: I1121 11:09:45.764378 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:09:45 crc kubenswrapper[4972]: E1121 11:09:45.764780 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:09:46 crc kubenswrapper[4972]: I1121 11:09:46.014795 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"11962c16-54cd-4cd8-8794-49e5e30520c8","Type":"ContainerStarted","Data":"035b0965823e9154eea9077b90f09d6ee1b64a676a022a3d3c8c3934e4f1a449"} Nov 21 11:09:46 crc kubenswrapper[4972]: I1121 11:09:46.040744 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=5.040726699 podStartE2EDuration="5.040726699s" podCreationTimestamp="2025-11-21 11:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:09:46.034802602 +0000 UTC m=+5331.143945100" watchObservedRunningTime="2025-11-21 11:09:46.040726699 +0000 UTC m=+5331.149869197" Nov 21 11:09:46 crc kubenswrapper[4972]: I1121 11:09:46.212415 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 21 
11:09:46 crc kubenswrapper[4972]: I1121 11:09:46.240908 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:46 crc kubenswrapper[4972]: I1121 11:09:46.262185 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:46 crc kubenswrapper[4972]: I1121 11:09:46.383393 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:46 crc kubenswrapper[4972]: I1121 11:09:46.437094 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:46 crc kubenswrapper[4972]: I1121 11:09:46.595677 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:46 crc kubenswrapper[4972]: I1121 11:09:46.605764 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:47 crc kubenswrapper[4972]: I1121 11:09:47.021489 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.069286 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.213013 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.240654 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.261381 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.359598 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-94896d7d7-7ddxq"] Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.360958 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.364850 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.368713 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-94896d7d7-7ddxq"] Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.404808 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ac05d93-a702-4862-9068-0ab99dcbaa2b-ovsdbserver-sb\") pod \"dnsmasq-dns-94896d7d7-7ddxq\" (UID: \"4ac05d93-a702-4862-9068-0ab99dcbaa2b\") " pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.404880 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ac05d93-a702-4862-9068-0ab99dcbaa2b-dns-svc\") pod \"dnsmasq-dns-94896d7d7-7ddxq\" (UID: \"4ac05d93-a702-4862-9068-0ab99dcbaa2b\") " pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.404899 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac05d93-a702-4862-9068-0ab99dcbaa2b-config\") pod \"dnsmasq-dns-94896d7d7-7ddxq\" (UID: \"4ac05d93-a702-4862-9068-0ab99dcbaa2b\") " pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.405071 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt47p\" (UniqueName: \"kubernetes.io/projected/4ac05d93-a702-4862-9068-0ab99dcbaa2b-kube-api-access-tt47p\") pod \"dnsmasq-dns-94896d7d7-7ddxq\" (UID: \"4ac05d93-a702-4862-9068-0ab99dcbaa2b\") " pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.506745 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ac05d93-a702-4862-9068-0ab99dcbaa2b-ovsdbserver-sb\") pod \"dnsmasq-dns-94896d7d7-7ddxq\" (UID: \"4ac05d93-a702-4862-9068-0ab99dcbaa2b\") " pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.506803 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ac05d93-a702-4862-9068-0ab99dcbaa2b-dns-svc\") pod \"dnsmasq-dns-94896d7d7-7ddxq\" (UID: \"4ac05d93-a702-4862-9068-0ab99dcbaa2b\") " pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.506823 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac05d93-a702-4862-9068-0ab99dcbaa2b-config\") pod \"dnsmasq-dns-94896d7d7-7ddxq\" (UID: \"4ac05d93-a702-4862-9068-0ab99dcbaa2b\") " pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.506931 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt47p\" (UniqueName: \"kubernetes.io/projected/4ac05d93-a702-4862-9068-0ab99dcbaa2b-kube-api-access-tt47p\") pod \"dnsmasq-dns-94896d7d7-7ddxq\" (UID: \"4ac05d93-a702-4862-9068-0ab99dcbaa2b\") " pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" Nov 21 11:09:48 crc 
kubenswrapper[4972]: I1121 11:09:48.507794 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ac05d93-a702-4862-9068-0ab99dcbaa2b-dns-svc\") pod \"dnsmasq-dns-94896d7d7-7ddxq\" (UID: \"4ac05d93-a702-4862-9068-0ab99dcbaa2b\") " pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.508142 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac05d93-a702-4862-9068-0ab99dcbaa2b-config\") pod \"dnsmasq-dns-94896d7d7-7ddxq\" (UID: \"4ac05d93-a702-4862-9068-0ab99dcbaa2b\") " pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.508310 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ac05d93-a702-4862-9068-0ab99dcbaa2b-ovsdbserver-sb\") pod \"dnsmasq-dns-94896d7d7-7ddxq\" (UID: \"4ac05d93-a702-4862-9068-0ab99dcbaa2b\") " pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.525337 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt47p\" (UniqueName: \"kubernetes.io/projected/4ac05d93-a702-4862-9068-0ab99dcbaa2b-kube-api-access-tt47p\") pod \"dnsmasq-dns-94896d7d7-7ddxq\" (UID: \"4ac05d93-a702-4862-9068-0ab99dcbaa2b\") " pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.596481 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.605705 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.697409 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" Nov 21 11:09:48 crc kubenswrapper[4972]: W1121 11:09:48.978334 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ac05d93_a702_4862_9068_0ab99dcbaa2b.slice/crio-33a27572451c3f6d64030f199d0789960b909dc8ed39eac49bf305250bc07bc5 WatchSource:0}: Error finding container 33a27572451c3f6d64030f199d0789960b909dc8ed39eac49bf305250bc07bc5: Status 404 returned error can't find the container with id 33a27572451c3f6d64030f199d0789960b909dc8ed39eac49bf305250bc07bc5 Nov 21 11:09:48 crc kubenswrapper[4972]: I1121 11:09:48.980233 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-94896d7d7-7ddxq"] Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.039767 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" event={"ID":"4ac05d93-a702-4862-9068-0ab99dcbaa2b","Type":"ContainerStarted","Data":"33a27572451c3f6d64030f199d0789960b909dc8ed39eac49bf305250bc07bc5"} Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.247484 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.285270 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.289882 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.305730 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.348405 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.570003 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-94896d7d7-7ddxq"] Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.624498 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d6c89bb59-tbklb"] Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.625752 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.632488 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.698452 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d6c89bb59-tbklb"] Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.720526 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.728467 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.741592 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-config\") pod \"dnsmasq-dns-d6c89bb59-tbklb\" (UID: \"2b735382-51ce-491a-aacb-df2d626449d6\") " pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.741659 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfgvr\" (UniqueName: \"kubernetes.io/projected/2b735382-51ce-491a-aacb-df2d626449d6-kube-api-access-rfgvr\") pod \"dnsmasq-dns-d6c89bb59-tbklb\" (UID: \"2b735382-51ce-491a-aacb-df2d626449d6\") " pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.741704 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-ovsdbserver-nb\") pod \"dnsmasq-dns-d6c89bb59-tbklb\" (UID: \"2b735382-51ce-491a-aacb-df2d626449d6\") " pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.741723 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-ovsdbserver-sb\") pod \"dnsmasq-dns-d6c89bb59-tbklb\" (UID: \"2b735382-51ce-491a-aacb-df2d626449d6\") " pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.741748 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-dns-svc\") pod \"dnsmasq-dns-d6c89bb59-tbklb\" (UID: \"2b735382-51ce-491a-aacb-df2d626449d6\") " pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.759535 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.787237 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.842786 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-ovsdbserver-nb\") pod \"dnsmasq-dns-d6c89bb59-tbklb\" (UID: \"2b735382-51ce-491a-aacb-df2d626449d6\") " pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.842868 4972 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-ovsdbserver-sb\") pod \"dnsmasq-dns-d6c89bb59-tbklb\" (UID: \"2b735382-51ce-491a-aacb-df2d626449d6\") " pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.842913 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-dns-svc\") pod \"dnsmasq-dns-d6c89bb59-tbklb\" (UID: \"2b735382-51ce-491a-aacb-df2d626449d6\") " pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.843013 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-config\") pod \"dnsmasq-dns-d6c89bb59-tbklb\" (UID: \"2b735382-51ce-491a-aacb-df2d626449d6\") " pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.843092 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfgvr\" (UniqueName: \"kubernetes.io/projected/2b735382-51ce-491a-aacb-df2d626449d6-kube-api-access-rfgvr\") pod \"dnsmasq-dns-d6c89bb59-tbklb\" (UID: \"2b735382-51ce-491a-aacb-df2d626449d6\") " pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.843897 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-ovsdbserver-sb\") pod \"dnsmasq-dns-d6c89bb59-tbklb\" (UID: \"2b735382-51ce-491a-aacb-df2d626449d6\") " pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.844115 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-ovsdbserver-nb\") pod \"dnsmasq-dns-d6c89bb59-tbklb\" (UID: \"2b735382-51ce-491a-aacb-df2d626449d6\") " pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.844441 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-config\") pod \"dnsmasq-dns-d6c89bb59-tbklb\" (UID: \"2b735382-51ce-491a-aacb-df2d626449d6\") " pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.844771 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-dns-svc\") pod \"dnsmasq-dns-d6c89bb59-tbklb\" (UID: \"2b735382-51ce-491a-aacb-df2d626449d6\") " pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" Nov 21 11:09:49 crc kubenswrapper[4972]: I1121 11:09:49.862611 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfgvr\" (UniqueName: \"kubernetes.io/projected/2b735382-51ce-491a-aacb-df2d626449d6-kube-api-access-rfgvr\") pod \"dnsmasq-dns-d6c89bb59-tbklb\" (UID: \"2b735382-51ce-491a-aacb-df2d626449d6\") " pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" Nov 21 11:09:50 crc kubenswrapper[4972]: I1121 11:09:50.024528 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" Nov 21 11:09:50 crc kubenswrapper[4972]: I1121 11:09:50.046036 4972 generic.go:334] "Generic (PLEG): container finished" podID="4ac05d93-a702-4862-9068-0ab99dcbaa2b" containerID="d6fb143c930e5516d8f8fdfa21bce2c83dcac7792513877a36785d222e63960b" exitCode=0 Nov 21 11:09:50 crc kubenswrapper[4972]: I1121 11:09:50.047083 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" event={"ID":"4ac05d93-a702-4862-9068-0ab99dcbaa2b","Type":"ContainerDied","Data":"d6fb143c930e5516d8f8fdfa21bce2c83dcac7792513877a36785d222e63960b"} Nov 21 11:09:50 crc kubenswrapper[4972]: I1121 11:09:50.098073 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Nov 21 11:09:50 crc kubenswrapper[4972]: I1121 11:09:50.542103 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d6c89bb59-tbklb"] Nov 21 11:09:50 crc kubenswrapper[4972]: W1121 11:09:50.561796 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b735382_51ce_491a_aacb_df2d626449d6.slice/crio-ae78d5cfc3a4dcbf7a5c5a61abc572d2db1d497d77e45939419003291ce70614 WatchSource:0}: Error finding container ae78d5cfc3a4dcbf7a5c5a61abc572d2db1d497d77e45939419003291ce70614: Status 404 returned error can't find the container with id ae78d5cfc3a4dcbf7a5c5a61abc572d2db1d497d77e45939419003291ce70614 Nov 21 11:09:51 crc kubenswrapper[4972]: I1121 11:09:51.058556 4972 generic.go:334] "Generic (PLEG): container finished" podID="2b735382-51ce-491a-aacb-df2d626449d6" containerID="2663bd11d9f1a608e9992f458ba3f706eb10f51c10193e9cabbf1cf1f6a5d65b" exitCode=0 Nov 21 11:09:51 crc kubenswrapper[4972]: I1121 11:09:51.058702 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" event={"ID":"2b735382-51ce-491a-aacb-df2d626449d6","Type":"ContainerDied","Data":"2663bd11d9f1a608e9992f458ba3f706eb10f51c10193e9cabbf1cf1f6a5d65b"} Nov 21 11:09:51 crc kubenswrapper[4972]: I1121 11:09:51.058969 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" event={"ID":"2b735382-51ce-491a-aacb-df2d626449d6","Type":"ContainerStarted","Data":"ae78d5cfc3a4dcbf7a5c5a61abc572d2db1d497d77e45939419003291ce70614"} Nov 21 11:09:51 crc kubenswrapper[4972]: I1121 11:09:51.064507 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" event={"ID":"4ac05d93-a702-4862-9068-0ab99dcbaa2b","Type":"ContainerStarted","Data":"c201e552de6024bdaf8fa9b8415135de8480197a89999e7840bbe0e135d6558c"} Nov 21 11:09:51 crc kubenswrapper[4972]: I1121 11:09:51.064769 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" podUID="4ac05d93-a702-4862-9068-0ab99dcbaa2b" containerName="dnsmasq-dns" containerID="cri-o://c201e552de6024bdaf8fa9b8415135de8480197a89999e7840bbe0e135d6558c" gracePeriod=10 Nov 21 11:09:51 crc kubenswrapper[4972]: I1121 11:09:51.121380 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" podStartSLOduration=3.121347467 podStartE2EDuration="3.121347467s" podCreationTimestamp="2025-11-21 11:09:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:09:51.113054227 +0000 UTC m=+5336.222196775" 
watchObservedRunningTime="2025-11-21 11:09:51.121347467 +0000 UTC m=+5336.230489975" Nov 21 11:09:51 crc kubenswrapper[4972]: I1121 11:09:51.473120 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" Nov 21 11:09:51 crc kubenswrapper[4972]: I1121 11:09:51.571415 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt47p\" (UniqueName: \"kubernetes.io/projected/4ac05d93-a702-4862-9068-0ab99dcbaa2b-kube-api-access-tt47p\") pod \"4ac05d93-a702-4862-9068-0ab99dcbaa2b\" (UID: \"4ac05d93-a702-4862-9068-0ab99dcbaa2b\") " Nov 21 11:09:51 crc kubenswrapper[4972]: I1121 11:09:51.571576 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ac05d93-a702-4862-9068-0ab99dcbaa2b-dns-svc\") pod \"4ac05d93-a702-4862-9068-0ab99dcbaa2b\" (UID: \"4ac05d93-a702-4862-9068-0ab99dcbaa2b\") " Nov 21 11:09:51 crc kubenswrapper[4972]: I1121 11:09:51.571659 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ac05d93-a702-4862-9068-0ab99dcbaa2b-ovsdbserver-sb\") pod \"4ac05d93-a702-4862-9068-0ab99dcbaa2b\" (UID: \"4ac05d93-a702-4862-9068-0ab99dcbaa2b\") " Nov 21 11:09:51 crc kubenswrapper[4972]: I1121 11:09:51.571693 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac05d93-a702-4862-9068-0ab99dcbaa2b-config\") pod \"4ac05d93-a702-4862-9068-0ab99dcbaa2b\" (UID: \"4ac05d93-a702-4862-9068-0ab99dcbaa2b\") " Nov 21 11:09:51 crc kubenswrapper[4972]: I1121 11:09:51.582103 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac05d93-a702-4862-9068-0ab99dcbaa2b-kube-api-access-tt47p" (OuterVolumeSpecName: "kube-api-access-tt47p") pod "4ac05d93-a702-4862-9068-0ab99dcbaa2b" (UID: "4ac05d93-a702-4862-9068-0ab99dcbaa2b"). InnerVolumeSpecName "kube-api-access-tt47p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:09:51 crc kubenswrapper[4972]: I1121 11:09:51.616262 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ac05d93-a702-4862-9068-0ab99dcbaa2b-config" (OuterVolumeSpecName: "config") pod "4ac05d93-a702-4862-9068-0ab99dcbaa2b" (UID: "4ac05d93-a702-4862-9068-0ab99dcbaa2b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:09:51 crc kubenswrapper[4972]: I1121 11:09:51.626149 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ac05d93-a702-4862-9068-0ab99dcbaa2b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4ac05d93-a702-4862-9068-0ab99dcbaa2b" (UID: "4ac05d93-a702-4862-9068-0ab99dcbaa2b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:09:51 crc kubenswrapper[4972]: I1121 11:09:51.626887 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ac05d93-a702-4862-9068-0ab99dcbaa2b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4ac05d93-a702-4862-9068-0ab99dcbaa2b" (UID: "4ac05d93-a702-4862-9068-0ab99dcbaa2b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:09:51 crc kubenswrapper[4972]: I1121 11:09:51.672929 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ac05d93-a702-4862-9068-0ab99dcbaa2b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 11:09:51 crc kubenswrapper[4972]: I1121 11:09:51.673154 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ac05d93-a702-4862-9068-0ab99dcbaa2b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 11:09:51 crc kubenswrapper[4972]: I1121 11:09:51.673167 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac05d93-a702-4862-9068-0ab99dcbaa2b-config\") on node \"crc\" DevicePath \"\"" Nov 21 11:09:51 crc kubenswrapper[4972]: I1121 11:09:51.673175 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt47p\" (UniqueName: \"kubernetes.io/projected/4ac05d93-a702-4862-9068-0ab99dcbaa2b-kube-api-access-tt47p\") on node \"crc\" DevicePath \"\"" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.080449 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" event={"ID":"2b735382-51ce-491a-aacb-df2d626449d6","Type":"ContainerStarted","Data":"7a89df33e27294c10cfde4b452785b9c16468a5ef8b48f96a450be5540e96a88"} Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.080804 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.083229 4972 generic.go:334] "Generic (PLEG): container finished" podID="4ac05d93-a702-4862-9068-0ab99dcbaa2b" containerID="c201e552de6024bdaf8fa9b8415135de8480197a89999e7840bbe0e135d6558c" exitCode=0 Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.083291 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" event={"ID":"4ac05d93-a702-4862-9068-0ab99dcbaa2b","Type":"ContainerDied","Data":"c201e552de6024bdaf8fa9b8415135de8480197a89999e7840bbe0e135d6558c"} Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.083375 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" event={"ID":"4ac05d93-a702-4862-9068-0ab99dcbaa2b","Type":"ContainerDied","Data":"33a27572451c3f6d64030f199d0789960b909dc8ed39eac49bf305250bc07bc5"} Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.083467 4972 scope.go:117] "RemoveContainer" containerID="c201e552de6024bdaf8fa9b8415135de8480197a89999e7840bbe0e135d6558c" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.083672 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-94896d7d7-7ddxq" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.107541 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" podStartSLOduration=3.10751171 podStartE2EDuration="3.10751171s" podCreationTimestamp="2025-11-21 11:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:09:52.101786179 +0000 UTC m=+5337.210928697" watchObservedRunningTime="2025-11-21 11:09:52.10751171 +0000 UTC m=+5337.216654238" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.131433 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-94896d7d7-7ddxq"] Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.140322 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-94896d7d7-7ddxq"] Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.144224 4972 scope.go:117] "RemoveContainer" containerID="d6fb143c930e5516d8f8fdfa21bce2c83dcac7792513877a36785d222e63960b" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.175927 4972 scope.go:117] "RemoveContainer" containerID="c201e552de6024bdaf8fa9b8415135de8480197a89999e7840bbe0e135d6558c" Nov 21 11:09:52 crc kubenswrapper[4972]: E1121 11:09:52.176593 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c201e552de6024bdaf8fa9b8415135de8480197a89999e7840bbe0e135d6558c\": container with ID starting with c201e552de6024bdaf8fa9b8415135de8480197a89999e7840bbe0e135d6558c not found: ID does not exist" containerID="c201e552de6024bdaf8fa9b8415135de8480197a89999e7840bbe0e135d6558c" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.176702 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c201e552de6024bdaf8fa9b8415135de8480197a89999e7840bbe0e135d6558c"} err="failed to get container status \"c201e552de6024bdaf8fa9b8415135de8480197a89999e7840bbe0e135d6558c\": rpc error: code = NotFound desc = could not find container \"c201e552de6024bdaf8fa9b8415135de8480197a89999e7840bbe0e135d6558c\": container with ID starting with c201e552de6024bdaf8fa9b8415135de8480197a89999e7840bbe0e135d6558c not found: ID does not exist" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.176779 4972 scope.go:117] "RemoveContainer" containerID="d6fb143c930e5516d8f8fdfa21bce2c83dcac7792513877a36785d222e63960b" Nov 21 11:09:52 crc kubenswrapper[4972]: E1121 11:09:52.177350 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6fb143c930e5516d8f8fdfa21bce2c83dcac7792513877a36785d222e63960b\": container with ID starting with d6fb143c930e5516d8f8fdfa21bce2c83dcac7792513877a36785d222e63960b not found: ID does not exist" containerID="d6fb143c930e5516d8f8fdfa21bce2c83dcac7792513877a36785d222e63960b" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.177416 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6fb143c930e5516d8f8fdfa21bce2c83dcac7792513877a36785d222e63960b"} err="failed to get container status \"d6fb143c930e5516d8f8fdfa21bce2c83dcac7792513877a36785d222e63960b\": rpc error: code = NotFound desc = could not find container \"d6fb143c930e5516d8f8fdfa21bce2c83dcac7792513877a36785d222e63960b\": container with ID starting with 
d6fb143c930e5516d8f8fdfa21bce2c83dcac7792513877a36785d222e63960b not found: ID does not exist" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.222226 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Nov 21 11:09:52 crc kubenswrapper[4972]: E1121 11:09:52.223000 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac05d93-a702-4862-9068-0ab99dcbaa2b" containerName="dnsmasq-dns" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.223050 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac05d93-a702-4862-9068-0ab99dcbaa2b" containerName="dnsmasq-dns" Nov 21 11:09:52 crc kubenswrapper[4972]: E1121 11:09:52.223076 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac05d93-a702-4862-9068-0ab99dcbaa2b" containerName="init" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.223085 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac05d93-a702-4862-9068-0ab99dcbaa2b" containerName="init" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.223450 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ac05d93-a702-4862-9068-0ab99dcbaa2b" containerName="dnsmasq-dns" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.224238 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.226740 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.231399 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.383728 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/0488161c-9098-4c62-9860-e6c06608a1df-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"0488161c-9098-4c62-9860-e6c06608a1df\") " pod="openstack/ovn-copy-data" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.383884 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6e5a3101-af97-41cb-a932-a562a84f206e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e5a3101-af97-41cb-a932-a562a84f206e\") pod \"ovn-copy-data\" (UID: \"0488161c-9098-4c62-9860-e6c06608a1df\") " pod="openstack/ovn-copy-data" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.384015 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94c5w\" (UniqueName: \"kubernetes.io/projected/0488161c-9098-4c62-9860-e6c06608a1df-kube-api-access-94c5w\") pod \"ovn-copy-data\" (UID: \"0488161c-9098-4c62-9860-e6c06608a1df\") " pod="openstack/ovn-copy-data" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.486124 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6e5a3101-af97-41cb-a932-a562a84f206e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e5a3101-af97-41cb-a932-a562a84f206e\") pod \"ovn-copy-data\" (UID: \"0488161c-9098-4c62-9860-e6c06608a1df\") " pod="openstack/ovn-copy-data" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.486382 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94c5w\" (UniqueName: \"kubernetes.io/projected/0488161c-9098-4c62-9860-e6c06608a1df-kube-api-access-94c5w\") pod 
\"ovn-copy-data\" (UID: \"0488161c-9098-4c62-9860-e6c06608a1df\") " pod="openstack/ovn-copy-data" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.486466 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/0488161c-9098-4c62-9860-e6c06608a1df-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"0488161c-9098-4c62-9860-e6c06608a1df\") " pod="openstack/ovn-copy-data" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.490312 4972 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.490365 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6e5a3101-af97-41cb-a932-a562a84f206e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e5a3101-af97-41cb-a932-a562a84f206e\") pod \"ovn-copy-data\" (UID: \"0488161c-9098-4c62-9860-e6c06608a1df\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/10eeb5cf396db466c845b3455f9755fa0a9253fd20467dc0d137ee10cf9f0835/globalmount\"" pod="openstack/ovn-copy-data" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.490798 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/0488161c-9098-4c62-9860-e6c06608a1df-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"0488161c-9098-4c62-9860-e6c06608a1df\") " pod="openstack/ovn-copy-data" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.519556 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94c5w\" (UniqueName: \"kubernetes.io/projected/0488161c-9098-4c62-9860-e6c06608a1df-kube-api-access-94c5w\") pod \"ovn-copy-data\" (UID: \"0488161c-9098-4c62-9860-e6c06608a1df\") " pod="openstack/ovn-copy-data" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.530520 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6e5a3101-af97-41cb-a932-a562a84f206e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e5a3101-af97-41cb-a932-a562a84f206e\") pod \"ovn-copy-data\" (UID: \"0488161c-9098-4c62-9860-e6c06608a1df\") " pod="openstack/ovn-copy-data" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.552424 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.855334 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Nov 21 11:09:52 crc kubenswrapper[4972]: I1121 11:09:52.864402 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 11:09:53 crc kubenswrapper[4972]: I1121 11:09:53.096097 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"0488161c-9098-4c62-9860-e6c06608a1df","Type":"ContainerStarted","Data":"ab80dff6bf311c9a449524f90ca2a72ed4e8c4a0b17db4c4420bc8e9bec6633b"} Nov 21 11:09:53 crc kubenswrapper[4972]: I1121 11:09:53.769610 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ac05d93-a702-4862-9068-0ab99dcbaa2b" path="/var/lib/kubelet/pods/4ac05d93-a702-4862-9068-0ab99dcbaa2b/volumes" Nov 21 11:09:56 crc kubenswrapper[4972]: I1121 11:09:56.759532 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:09:57 crc kubenswrapper[4972]: I1121 11:09:57.153177 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"24e4e8c91bec69fac6579b1048275d2e2e1a69f272656a33d0af882dd887ca1f"} Nov 21 11:09:57 crc kubenswrapper[4972]: I1121 11:09:57.154744 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"0488161c-9098-4c62-9860-e6c06608a1df","Type":"ContainerStarted","Data":"3c82a1cea07a8e2458f24d14cb6ce39df4f5d1ec2722aeb1f9e7ca2f5a16e7ee"} Nov 21 11:09:57 crc kubenswrapper[4972]: I1121 11:09:57.208777 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=2.805758951 podStartE2EDuration="6.208757725s" podCreationTimestamp="2025-11-21 11:09:51 +0000 UTC" firstStartedPulling="2025-11-21 11:09:52.864137116 +0000 UTC m=+5337.973279614" lastFinishedPulling="2025-11-21 11:09:56.26713589 +0000 UTC m=+5341.376278388" observedRunningTime="2025-11-21 11:09:57.204581704 +0000 UTC m=+5342.313724212" watchObservedRunningTime="2025-11-21 11:09:57.208757725 +0000 UTC m=+5342.317900223" Nov 21 11:10:00 crc kubenswrapper[4972]: I1121 11:10:00.028091 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" Nov 21 11:10:00 crc kubenswrapper[4972]: I1121 11:10:00.110809 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-665ff86d95-s75z4"] Nov 21 11:10:00 crc kubenswrapper[4972]: I1121 11:10:00.111082 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-665ff86d95-s75z4" podUID="db11b16e-c1ba-47eb-90e0-d03f0b2412e3" containerName="dnsmasq-dns" containerID="cri-o://ac7cac814a83cc62d867c72ee26c7f0f67c067c6d63d15ce24a1e01d624a5dee" gracePeriod=10 Nov 21 11:10:00 crc kubenswrapper[4972]: I1121 11:10:00.578891 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-665ff86d95-s75z4" Nov 21 11:10:00 crc kubenswrapper[4972]: I1121 11:10:00.733596 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db11b16e-c1ba-47eb-90e0-d03f0b2412e3-config\") pod \"db11b16e-c1ba-47eb-90e0-d03f0b2412e3\" (UID: \"db11b16e-c1ba-47eb-90e0-d03f0b2412e3\") " Nov 21 11:10:00 crc kubenswrapper[4972]: I1121 11:10:00.733708 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stjh5\" (UniqueName: \"kubernetes.io/projected/db11b16e-c1ba-47eb-90e0-d03f0b2412e3-kube-api-access-stjh5\") pod \"db11b16e-c1ba-47eb-90e0-d03f0b2412e3\" (UID: \"db11b16e-c1ba-47eb-90e0-d03f0b2412e3\") " Nov 21 11:10:00 crc kubenswrapper[4972]: I1121 11:10:00.733757 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db11b16e-c1ba-47eb-90e0-d03f0b2412e3-dns-svc\") pod \"db11b16e-c1ba-47eb-90e0-d03f0b2412e3\" (UID: \"db11b16e-c1ba-47eb-90e0-d03f0b2412e3\") " Nov 21 11:10:00 crc kubenswrapper[4972]: I1121 11:10:00.743943 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db11b16e-c1ba-47eb-90e0-d03f0b2412e3-kube-api-access-stjh5" (OuterVolumeSpecName: "kube-api-access-stjh5") pod "db11b16e-c1ba-47eb-90e0-d03f0b2412e3" (UID: "db11b16e-c1ba-47eb-90e0-d03f0b2412e3"). InnerVolumeSpecName "kube-api-access-stjh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:10:00 crc kubenswrapper[4972]: I1121 11:10:00.821587 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db11b16e-c1ba-47eb-90e0-d03f0b2412e3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "db11b16e-c1ba-47eb-90e0-d03f0b2412e3" (UID: "db11b16e-c1ba-47eb-90e0-d03f0b2412e3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:10:00 crc kubenswrapper[4972]: I1121 11:10:00.827354 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db11b16e-c1ba-47eb-90e0-d03f0b2412e3-config" (OuterVolumeSpecName: "config") pod "db11b16e-c1ba-47eb-90e0-d03f0b2412e3" (UID: "db11b16e-c1ba-47eb-90e0-d03f0b2412e3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:10:00 crc kubenswrapper[4972]: I1121 11:10:00.836817 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db11b16e-c1ba-47eb-90e0-d03f0b2412e3-config\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:00 crc kubenswrapper[4972]: I1121 11:10:00.836871 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stjh5\" (UniqueName: \"kubernetes.io/projected/db11b16e-c1ba-47eb-90e0-d03f0b2412e3-kube-api-access-stjh5\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:00 crc kubenswrapper[4972]: I1121 11:10:00.836882 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db11b16e-c1ba-47eb-90e0-d03f0b2412e3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:01 crc kubenswrapper[4972]: I1121 11:10:01.196517 4972 generic.go:334] "Generic (PLEG): container finished" podID="db11b16e-c1ba-47eb-90e0-d03f0b2412e3" containerID="ac7cac814a83cc62d867c72ee26c7f0f67c067c6d63d15ce24a1e01d624a5dee" exitCode=0 Nov 21 11:10:01 crc kubenswrapper[4972]: I1121 11:10:01.198081 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-665ff86d95-s75z4" Nov 21 11:10:01 crc kubenswrapper[4972]: I1121 11:10:01.198169 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-665ff86d95-s75z4" event={"ID":"db11b16e-c1ba-47eb-90e0-d03f0b2412e3","Type":"ContainerDied","Data":"ac7cac814a83cc62d867c72ee26c7f0f67c067c6d63d15ce24a1e01d624a5dee"} Nov 21 11:10:01 crc kubenswrapper[4972]: I1121 11:10:01.198245 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-665ff86d95-s75z4" event={"ID":"db11b16e-c1ba-47eb-90e0-d03f0b2412e3","Type":"ContainerDied","Data":"55e4d85a7fef33a6289b6f3238850cf580e855e978ef9ee5574787006efb3a5e"} Nov 21 11:10:01 crc kubenswrapper[4972]: I1121 11:10:01.198278 4972 scope.go:117] "RemoveContainer" containerID="ac7cac814a83cc62d867c72ee26c7f0f67c067c6d63d15ce24a1e01d624a5dee" Nov 21 11:10:01 crc kubenswrapper[4972]: I1121 11:10:01.231795 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-665ff86d95-s75z4"] Nov 21 11:10:01 crc kubenswrapper[4972]: I1121 11:10:01.238086 4972 scope.go:117] "RemoveContainer" containerID="7192a9351c177a3c9dd2ebfe951a2f9b1cf83bc67efddca74aaee7f05c9ed78f" Nov 21 11:10:01 crc kubenswrapper[4972]: I1121 11:10:01.238722 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-665ff86d95-s75z4"] Nov 21 11:10:01 crc kubenswrapper[4972]: I1121 11:10:01.290612 4972 scope.go:117] "RemoveContainer" containerID="ac7cac814a83cc62d867c72ee26c7f0f67c067c6d63d15ce24a1e01d624a5dee" Nov 21 11:10:01 crc kubenswrapper[4972]: E1121 11:10:01.291015 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac7cac814a83cc62d867c72ee26c7f0f67c067c6d63d15ce24a1e01d624a5dee\": container with ID starting with ac7cac814a83cc62d867c72ee26c7f0f67c067c6d63d15ce24a1e01d624a5dee not found: ID does not exist" containerID="ac7cac814a83cc62d867c72ee26c7f0f67c067c6d63d15ce24a1e01d624a5dee" Nov 21 11:10:01 crc kubenswrapper[4972]: I1121 11:10:01.291044 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac7cac814a83cc62d867c72ee26c7f0f67c067c6d63d15ce24a1e01d624a5dee"} err="failed to get container status 
\"ac7cac814a83cc62d867c72ee26c7f0f67c067c6d63d15ce24a1e01d624a5dee\": rpc error: code = NotFound desc = could not find container \"ac7cac814a83cc62d867c72ee26c7f0f67c067c6d63d15ce24a1e01d624a5dee\": container with ID starting with ac7cac814a83cc62d867c72ee26c7f0f67c067c6d63d15ce24a1e01d624a5dee not found: ID does not exist" Nov 21 11:10:01 crc kubenswrapper[4972]: I1121 11:10:01.291064 4972 scope.go:117] "RemoveContainer" containerID="7192a9351c177a3c9dd2ebfe951a2f9b1cf83bc67efddca74aaee7f05c9ed78f" Nov 21 11:10:01 crc kubenswrapper[4972]: E1121 11:10:01.291349 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7192a9351c177a3c9dd2ebfe951a2f9b1cf83bc67efddca74aaee7f05c9ed78f\": container with ID starting with 7192a9351c177a3c9dd2ebfe951a2f9b1cf83bc67efddca74aaee7f05c9ed78f not found: ID does not exist" containerID="7192a9351c177a3c9dd2ebfe951a2f9b1cf83bc67efddca74aaee7f05c9ed78f" Nov 21 11:10:01 crc kubenswrapper[4972]: I1121 11:10:01.291370 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7192a9351c177a3c9dd2ebfe951a2f9b1cf83bc67efddca74aaee7f05c9ed78f"} err="failed to get container status \"7192a9351c177a3c9dd2ebfe951a2f9b1cf83bc67efddca74aaee7f05c9ed78f\": rpc error: code = NotFound desc = could not find container \"7192a9351c177a3c9dd2ebfe951a2f9b1cf83bc67efddca74aaee7f05c9ed78f\": container with ID starting with 7192a9351c177a3c9dd2ebfe951a2f9b1cf83bc67efddca74aaee7f05c9ed78f not found: ID does not exist" Nov 21 11:10:01 crc kubenswrapper[4972]: I1121 11:10:01.788206 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db11b16e-c1ba-47eb-90e0-d03f0b2412e3" path="/var/lib/kubelet/pods/db11b16e-c1ba-47eb-90e0-d03f0b2412e3/volumes" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.190813 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 21 11:10:03 crc kubenswrapper[4972]: E1121 11:10:03.191214 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db11b16e-c1ba-47eb-90e0-d03f0b2412e3" containerName="init" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.191230 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="db11b16e-c1ba-47eb-90e0-d03f0b2412e3" containerName="init" Nov 21 11:10:03 crc kubenswrapper[4972]: E1121 11:10:03.191256 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db11b16e-c1ba-47eb-90e0-d03f0b2412e3" containerName="dnsmasq-dns" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.191262 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="db11b16e-c1ba-47eb-90e0-d03f0b2412e3" containerName="dnsmasq-dns" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.191409 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="db11b16e-c1ba-47eb-90e0-d03f0b2412e3" containerName="dnsmasq-dns" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.192244 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.198281 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-7mcd8" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.198480 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.198605 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.216638 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.277104 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ed97c80-e5e5-4056-8dbd-00d598824e0a-config\") pod \"ovn-northd-0\" (UID: \"9ed97c80-e5e5-4056-8dbd-00d598824e0a\") " pod="openstack/ovn-northd-0" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.277181 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ed97c80-e5e5-4056-8dbd-00d598824e0a-scripts\") pod \"ovn-northd-0\" (UID: \"9ed97c80-e5e5-4056-8dbd-00d598824e0a\") " pod="openstack/ovn-northd-0" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.277197 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9ed97c80-e5e5-4056-8dbd-00d598824e0a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9ed97c80-e5e5-4056-8dbd-00d598824e0a\") " pod="openstack/ovn-northd-0" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.277240 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9m9w\" (UniqueName: \"kubernetes.io/projected/9ed97c80-e5e5-4056-8dbd-00d598824e0a-kube-api-access-q9m9w\") pod \"ovn-northd-0\" (UID: \"9ed97c80-e5e5-4056-8dbd-00d598824e0a\") " pod="openstack/ovn-northd-0" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.277274 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ed97c80-e5e5-4056-8dbd-00d598824e0a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9ed97c80-e5e5-4056-8dbd-00d598824e0a\") " pod="openstack/ovn-northd-0" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.378673 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ed97c80-e5e5-4056-8dbd-00d598824e0a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9ed97c80-e5e5-4056-8dbd-00d598824e0a\") " pod="openstack/ovn-northd-0" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.378750 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ed97c80-e5e5-4056-8dbd-00d598824e0a-config\") pod \"ovn-northd-0\" (UID: \"9ed97c80-e5e5-4056-8dbd-00d598824e0a\") " pod="openstack/ovn-northd-0" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.378798 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ed97c80-e5e5-4056-8dbd-00d598824e0a-scripts\") pod \"ovn-northd-0\" (UID: 
\"9ed97c80-e5e5-4056-8dbd-00d598824e0a\") " pod="openstack/ovn-northd-0" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.378814 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9ed97c80-e5e5-4056-8dbd-00d598824e0a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9ed97c80-e5e5-4056-8dbd-00d598824e0a\") " pod="openstack/ovn-northd-0" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.378872 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9m9w\" (UniqueName: \"kubernetes.io/projected/9ed97c80-e5e5-4056-8dbd-00d598824e0a-kube-api-access-q9m9w\") pod \"ovn-northd-0\" (UID: \"9ed97c80-e5e5-4056-8dbd-00d598824e0a\") " pod="openstack/ovn-northd-0" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.380446 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9ed97c80-e5e5-4056-8dbd-00d598824e0a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9ed97c80-e5e5-4056-8dbd-00d598824e0a\") " pod="openstack/ovn-northd-0" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.380634 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ed97c80-e5e5-4056-8dbd-00d598824e0a-config\") pod \"ovn-northd-0\" (UID: \"9ed97c80-e5e5-4056-8dbd-00d598824e0a\") " pod="openstack/ovn-northd-0" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.380722 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ed97c80-e5e5-4056-8dbd-00d598824e0a-scripts\") pod \"ovn-northd-0\" (UID: \"9ed97c80-e5e5-4056-8dbd-00d598824e0a\") " pod="openstack/ovn-northd-0" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.384456 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ed97c80-e5e5-4056-8dbd-00d598824e0a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9ed97c80-e5e5-4056-8dbd-00d598824e0a\") " pod="openstack/ovn-northd-0" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.400014 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9m9w\" (UniqueName: \"kubernetes.io/projected/9ed97c80-e5e5-4056-8dbd-00d598824e0a-kube-api-access-q9m9w\") pod \"ovn-northd-0\" (UID: \"9ed97c80-e5e5-4056-8dbd-00d598824e0a\") " pod="openstack/ovn-northd-0" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.523929 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 21 11:10:03 crc kubenswrapper[4972]: I1121 11:10:03.805124 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 21 11:10:04 crc kubenswrapper[4972]: I1121 11:10:04.224069 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9ed97c80-e5e5-4056-8dbd-00d598824e0a","Type":"ContainerStarted","Data":"7693b72969142361ba68df60cd5c5ccff45890e39b53530d4f761ffd66eda5de"} Nov 21 11:10:04 crc kubenswrapper[4972]: I1121 11:10:04.224337 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9ed97c80-e5e5-4056-8dbd-00d598824e0a","Type":"ContainerStarted","Data":"31b092c09773bb01b66a5f76ceb870015dc1e9a8a9d034b57373889d54307f56"} Nov 21 11:10:04 crc kubenswrapper[4972]: I1121 11:10:04.224348 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9ed97c80-e5e5-4056-8dbd-00d598824e0a","Type":"ContainerStarted","Data":"72711ac209436ffe0638c114f2807ce0ae1eadd29c08305b17fbe565336745b7"} Nov 21 11:10:05 crc kubenswrapper[4972]: I1121 11:10:05.234336 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 21 11:10:05 crc kubenswrapper[4972]: I1121 11:10:05.259277 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.259253217 podStartE2EDuration="2.259253217s" podCreationTimestamp="2025-11-21 11:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:10:05.256907485 +0000 UTC m=+5350.366050083" watchObservedRunningTime="2025-11-21 11:10:05.259253217 +0000 UTC m=+5350.368395735" Nov 21 11:10:08 crc kubenswrapper[4972]: I1121 11:10:08.579291 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-mjvkp"] Nov 21 11:10:08 crc kubenswrapper[4972]: I1121 11:10:08.580680 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mjvkp" Nov 21 11:10:08 crc kubenswrapper[4972]: I1121 11:10:08.588885 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-aa0c-account-create-lwx74"] Nov 21 11:10:08 crc kubenswrapper[4972]: I1121 11:10:08.592948 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-aa0c-account-create-lwx74" Nov 21 11:10:08 crc kubenswrapper[4972]: I1121 11:10:08.595127 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 21 11:10:08 crc kubenswrapper[4972]: I1121 11:10:08.610594 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mjvkp"] Nov 21 11:10:08 crc kubenswrapper[4972]: I1121 11:10:08.619752 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-aa0c-account-create-lwx74"] Nov 21 11:10:08 crc kubenswrapper[4972]: I1121 11:10:08.682894 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vtbh\" (UniqueName: \"kubernetes.io/projected/4126bb96-8016-415d-8613-db7e0d8f5777-kube-api-access-8vtbh\") pod \"keystone-aa0c-account-create-lwx74\" (UID: \"4126bb96-8016-415d-8613-db7e0d8f5777\") " pod="openstack/keystone-aa0c-account-create-lwx74" Nov 21 11:10:08 crc kubenswrapper[4972]: I1121 11:10:08.682943 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4126bb96-8016-415d-8613-db7e0d8f5777-operator-scripts\") pod \"keystone-aa0c-account-create-lwx74\" (UID: \"4126bb96-8016-415d-8613-db7e0d8f5777\") " pod="openstack/keystone-aa0c-account-create-lwx74" Nov 21 11:10:08 crc kubenswrapper[4972]: I1121 11:10:08.682970 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2pkr\" (UniqueName: \"kubernetes.io/projected/f9fe8c69-9432-4abe-8517-d25773129b6b-kube-api-access-s2pkr\") pod \"keystone-db-create-mjvkp\" (UID: \"f9fe8c69-9432-4abe-8517-d25773129b6b\") " pod="openstack/keystone-db-create-mjvkp" Nov 21 11:10:08 crc kubenswrapper[4972]: I1121 11:10:08.682995 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9fe8c69-9432-4abe-8517-d25773129b6b-operator-scripts\") pod \"keystone-db-create-mjvkp\" (UID: \"f9fe8c69-9432-4abe-8517-d25773129b6b\") " pod="openstack/keystone-db-create-mjvkp" Nov 21 11:10:08 crc kubenswrapper[4972]: I1121 11:10:08.786824 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vtbh\" (UniqueName: \"kubernetes.io/projected/4126bb96-8016-415d-8613-db7e0d8f5777-kube-api-access-8vtbh\") pod \"keystone-aa0c-account-create-lwx74\" (UID: \"4126bb96-8016-415d-8613-db7e0d8f5777\") " pod="openstack/keystone-aa0c-account-create-lwx74" Nov 21 11:10:08 crc kubenswrapper[4972]: I1121 11:10:08.787001 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4126bb96-8016-415d-8613-db7e0d8f5777-operator-scripts\") pod \"keystone-aa0c-account-create-lwx74\" (UID: \"4126bb96-8016-415d-8613-db7e0d8f5777\") " pod="openstack/keystone-aa0c-account-create-lwx74" Nov 21 11:10:08 crc kubenswrapper[4972]: I1121 11:10:08.787083 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2pkr\" (UniqueName: \"kubernetes.io/projected/f9fe8c69-9432-4abe-8517-d25773129b6b-kube-api-access-s2pkr\") pod \"keystone-db-create-mjvkp\" (UID: \"f9fe8c69-9432-4abe-8517-d25773129b6b\") " pod="openstack/keystone-db-create-mjvkp" Nov 21 11:10:08 crc kubenswrapper[4972]: I1121 11:10:08.787148 4972 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9fe8c69-9432-4abe-8517-d25773129b6b-operator-scripts\") pod \"keystone-db-create-mjvkp\" (UID: \"f9fe8c69-9432-4abe-8517-d25773129b6b\") " pod="openstack/keystone-db-create-mjvkp" Nov 21 11:10:08 crc kubenswrapper[4972]: I1121 11:10:08.787687 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4126bb96-8016-415d-8613-db7e0d8f5777-operator-scripts\") pod \"keystone-aa0c-account-create-lwx74\" (UID: \"4126bb96-8016-415d-8613-db7e0d8f5777\") " pod="openstack/keystone-aa0c-account-create-lwx74" Nov 21 11:10:08 crc kubenswrapper[4972]: I1121 11:10:08.788292 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9fe8c69-9432-4abe-8517-d25773129b6b-operator-scripts\") pod \"keystone-db-create-mjvkp\" (UID: \"f9fe8c69-9432-4abe-8517-d25773129b6b\") " pod="openstack/keystone-db-create-mjvkp" Nov 21 11:10:08 crc kubenswrapper[4972]: I1121 11:10:08.809980 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2pkr\" (UniqueName: \"kubernetes.io/projected/f9fe8c69-9432-4abe-8517-d25773129b6b-kube-api-access-s2pkr\") pod \"keystone-db-create-mjvkp\" (UID: \"f9fe8c69-9432-4abe-8517-d25773129b6b\") " pod="openstack/keystone-db-create-mjvkp" Nov 21 11:10:08 crc kubenswrapper[4972]: I1121 11:10:08.812642 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vtbh\" (UniqueName: \"kubernetes.io/projected/4126bb96-8016-415d-8613-db7e0d8f5777-kube-api-access-8vtbh\") pod \"keystone-aa0c-account-create-lwx74\" (UID: \"4126bb96-8016-415d-8613-db7e0d8f5777\") " pod="openstack/keystone-aa0c-account-create-lwx74" Nov 21 11:10:08 crc kubenswrapper[4972]: I1121 11:10:08.905616 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mjvkp" Nov 21 11:10:08 crc kubenswrapper[4972]: I1121 11:10:08.921548 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-aa0c-account-create-lwx74" Nov 21 11:10:09 crc kubenswrapper[4972]: I1121 11:10:09.437487 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-aa0c-account-create-lwx74"] Nov 21 11:10:09 crc kubenswrapper[4972]: I1121 11:10:09.495628 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mjvkp"] Nov 21 11:10:09 crc kubenswrapper[4972]: W1121 11:10:09.503106 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9fe8c69_9432_4abe_8517_d25773129b6b.slice/crio-77e23de13d476a531164fc4e6c8470cd2d808533302e5dad6c9243e4e1d05c5d WatchSource:0}: Error finding container 77e23de13d476a531164fc4e6c8470cd2d808533302e5dad6c9243e4e1d05c5d: Status 404 returned error can't find the container with id 77e23de13d476a531164fc4e6c8470cd2d808533302e5dad6c9243e4e1d05c5d Nov 21 11:10:10 crc kubenswrapper[4972]: I1121 11:10:10.287267 4972 generic.go:334] "Generic (PLEG): container finished" podID="4126bb96-8016-415d-8613-db7e0d8f5777" containerID="866206511a501bbc6f00c590944e8b6dc2b04a25ec1b7360b5a9b7087c40667a" exitCode=0 Nov 21 11:10:10 crc kubenswrapper[4972]: I1121 11:10:10.287613 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-aa0c-account-create-lwx74" event={"ID":"4126bb96-8016-415d-8613-db7e0d8f5777","Type":"ContainerDied","Data":"866206511a501bbc6f00c590944e8b6dc2b04a25ec1b7360b5a9b7087c40667a"} Nov 21 11:10:10 crc kubenswrapper[4972]: I1121 11:10:10.287652 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-aa0c-account-create-lwx74" event={"ID":"4126bb96-8016-415d-8613-db7e0d8f5777","Type":"ContainerStarted","Data":"d0f21c2d44d96b0c2ed1f61c4325ad2d6a4cdd7537e60d8a02ab30dc0bba0ef8"} Nov 21 11:10:10 crc kubenswrapper[4972]: I1121 11:10:10.289995 4972 generic.go:334] "Generic (PLEG): container finished" podID="f9fe8c69-9432-4abe-8517-d25773129b6b" containerID="84a126c3e179384b8ee1a05703614dc7f9fa32922073e77e021268e702f42199" exitCode=0 Nov 21 11:10:10 crc kubenswrapper[4972]: I1121 11:10:10.290030 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mjvkp" event={"ID":"f9fe8c69-9432-4abe-8517-d25773129b6b","Type":"ContainerDied","Data":"84a126c3e179384b8ee1a05703614dc7f9fa32922073e77e021268e702f42199"} Nov 21 11:10:10 crc kubenswrapper[4972]: I1121 11:10:10.290056 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mjvkp" event={"ID":"f9fe8c69-9432-4abe-8517-d25773129b6b","Type":"ContainerStarted","Data":"77e23de13d476a531164fc4e6c8470cd2d808533302e5dad6c9243e4e1d05c5d"} Nov 21 11:10:11 crc kubenswrapper[4972]: I1121 11:10:11.748109 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mjvkp" Nov 21 11:10:11 crc kubenswrapper[4972]: I1121 11:10:11.754354 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-aa0c-account-create-lwx74" Nov 21 11:10:11 crc kubenswrapper[4972]: I1121 11:10:11.844767 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9fe8c69-9432-4abe-8517-d25773129b6b-operator-scripts\") pod \"f9fe8c69-9432-4abe-8517-d25773129b6b\" (UID: \"f9fe8c69-9432-4abe-8517-d25773129b6b\") " Nov 21 11:10:11 crc kubenswrapper[4972]: I1121 11:10:11.844941 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vtbh\" (UniqueName: \"kubernetes.io/projected/4126bb96-8016-415d-8613-db7e0d8f5777-kube-api-access-8vtbh\") pod \"4126bb96-8016-415d-8613-db7e0d8f5777\" (UID: \"4126bb96-8016-415d-8613-db7e0d8f5777\") " Nov 21 11:10:11 crc kubenswrapper[4972]: I1121 11:10:11.845011 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4126bb96-8016-415d-8613-db7e0d8f5777-operator-scripts\") pod \"4126bb96-8016-415d-8613-db7e0d8f5777\" (UID: \"4126bb96-8016-415d-8613-db7e0d8f5777\") " Nov 21 11:10:11 crc kubenswrapper[4972]: I1121 11:10:11.845047 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2pkr\" (UniqueName: \"kubernetes.io/projected/f9fe8c69-9432-4abe-8517-d25773129b6b-kube-api-access-s2pkr\") pod \"f9fe8c69-9432-4abe-8517-d25773129b6b\" (UID: \"f9fe8c69-9432-4abe-8517-d25773129b6b\") " Nov 21 11:10:11 crc kubenswrapper[4972]: I1121 11:10:11.845335 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9fe8c69-9432-4abe-8517-d25773129b6b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9fe8c69-9432-4abe-8517-d25773129b6b" (UID: "f9fe8c69-9432-4abe-8517-d25773129b6b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:10:11 crc kubenswrapper[4972]: I1121 11:10:11.845511 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9fe8c69-9432-4abe-8517-d25773129b6b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:11 crc kubenswrapper[4972]: I1121 11:10:11.846281 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4126bb96-8016-415d-8613-db7e0d8f5777-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4126bb96-8016-415d-8613-db7e0d8f5777" (UID: "4126bb96-8016-415d-8613-db7e0d8f5777"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:10:11 crc kubenswrapper[4972]: I1121 11:10:11.873359 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9fe8c69-9432-4abe-8517-d25773129b6b-kube-api-access-s2pkr" (OuterVolumeSpecName: "kube-api-access-s2pkr") pod "f9fe8c69-9432-4abe-8517-d25773129b6b" (UID: "f9fe8c69-9432-4abe-8517-d25773129b6b"). InnerVolumeSpecName "kube-api-access-s2pkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:10:11 crc kubenswrapper[4972]: I1121 11:10:11.888075 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4126bb96-8016-415d-8613-db7e0d8f5777-kube-api-access-8vtbh" (OuterVolumeSpecName: "kube-api-access-8vtbh") pod "4126bb96-8016-415d-8613-db7e0d8f5777" (UID: "4126bb96-8016-415d-8613-db7e0d8f5777"). 
InnerVolumeSpecName "kube-api-access-8vtbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:10:11 crc kubenswrapper[4972]: I1121 11:10:11.947463 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vtbh\" (UniqueName: \"kubernetes.io/projected/4126bb96-8016-415d-8613-db7e0d8f5777-kube-api-access-8vtbh\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:11 crc kubenswrapper[4972]: I1121 11:10:11.947499 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4126bb96-8016-415d-8613-db7e0d8f5777-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:11 crc kubenswrapper[4972]: I1121 11:10:11.947509 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2pkr\" (UniqueName: \"kubernetes.io/projected/f9fe8c69-9432-4abe-8517-d25773129b6b-kube-api-access-s2pkr\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:12 crc kubenswrapper[4972]: I1121 11:10:12.309333 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-aa0c-account-create-lwx74" Nov 21 11:10:12 crc kubenswrapper[4972]: I1121 11:10:12.309234 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-aa0c-account-create-lwx74" event={"ID":"4126bb96-8016-415d-8613-db7e0d8f5777","Type":"ContainerDied","Data":"d0f21c2d44d96b0c2ed1f61c4325ad2d6a4cdd7537e60d8a02ab30dc0bba0ef8"} Nov 21 11:10:12 crc kubenswrapper[4972]: I1121 11:10:12.309819 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0f21c2d44d96b0c2ed1f61c4325ad2d6a4cdd7537e60d8a02ab30dc0bba0ef8" Nov 21 11:10:12 crc kubenswrapper[4972]: I1121 11:10:12.311529 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mjvkp" event={"ID":"f9fe8c69-9432-4abe-8517-d25773129b6b","Type":"ContainerDied","Data":"77e23de13d476a531164fc4e6c8470cd2d808533302e5dad6c9243e4e1d05c5d"} Nov 21 11:10:12 crc kubenswrapper[4972]: I1121 11:10:12.311580 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77e23de13d476a531164fc4e6c8470cd2d808533302e5dad6c9243e4e1d05c5d" Nov 21 11:10:12 crc kubenswrapper[4972]: I1121 11:10:12.311604 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-mjvkp" Nov 21 11:10:13 crc kubenswrapper[4972]: I1121 11:10:13.588237 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.074552 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-w6q6g"] Nov 21 11:10:14 crc kubenswrapper[4972]: E1121 11:10:14.075110 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9fe8c69-9432-4abe-8517-d25773129b6b" containerName="mariadb-database-create" Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.075144 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9fe8c69-9432-4abe-8517-d25773129b6b" containerName="mariadb-database-create" Nov 21 11:10:14 crc kubenswrapper[4972]: E1121 11:10:14.075194 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4126bb96-8016-415d-8613-db7e0d8f5777" containerName="mariadb-account-create" Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.075206 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="4126bb96-8016-415d-8613-db7e0d8f5777" containerName="mariadb-account-create" Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.075474 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="4126bb96-8016-415d-8613-db7e0d8f5777" containerName="mariadb-account-create" Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.075507 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9fe8c69-9432-4abe-8517-d25773129b6b" containerName="mariadb-database-create" Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.076340 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-w6q6g" Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.078135 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vggzf" Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.083175 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.083437 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.084022 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.088945 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-w6q6g"] Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.194313 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e08dff-7d3e-41a5-a992-6c9c7567fe97-combined-ca-bundle\") pod \"keystone-db-sync-w6q6g\" (UID: \"03e08dff-7d3e-41a5-a992-6c9c7567fe97\") " pod="openstack/keystone-db-sync-w6q6g" Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.194398 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrd7d\" (UniqueName: \"kubernetes.io/projected/03e08dff-7d3e-41a5-a992-6c9c7567fe97-kube-api-access-nrd7d\") pod \"keystone-db-sync-w6q6g\" (UID: \"03e08dff-7d3e-41a5-a992-6c9c7567fe97\") " pod="openstack/keystone-db-sync-w6q6g" Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.194532 4972 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03e08dff-7d3e-41a5-a992-6c9c7567fe97-config-data\") pod \"keystone-db-sync-w6q6g\" (UID: \"03e08dff-7d3e-41a5-a992-6c9c7567fe97\") " pod="openstack/keystone-db-sync-w6q6g" Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.296659 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e08dff-7d3e-41a5-a992-6c9c7567fe97-combined-ca-bundle\") pod \"keystone-db-sync-w6q6g\" (UID: \"03e08dff-7d3e-41a5-a992-6c9c7567fe97\") " pod="openstack/keystone-db-sync-w6q6g" Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.296808 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrd7d\" (UniqueName: \"kubernetes.io/projected/03e08dff-7d3e-41a5-a992-6c9c7567fe97-kube-api-access-nrd7d\") pod \"keystone-db-sync-w6q6g\" (UID: \"03e08dff-7d3e-41a5-a992-6c9c7567fe97\") " pod="openstack/keystone-db-sync-w6q6g" Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.296907 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03e08dff-7d3e-41a5-a992-6c9c7567fe97-config-data\") pod \"keystone-db-sync-w6q6g\" (UID: \"03e08dff-7d3e-41a5-a992-6c9c7567fe97\") " pod="openstack/keystone-db-sync-w6q6g" Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.302503 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03e08dff-7d3e-41a5-a992-6c9c7567fe97-config-data\") pod \"keystone-db-sync-w6q6g\" (UID: \"03e08dff-7d3e-41a5-a992-6c9c7567fe97\") " pod="openstack/keystone-db-sync-w6q6g" Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.303627 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e08dff-7d3e-41a5-a992-6c9c7567fe97-combined-ca-bundle\") pod \"keystone-db-sync-w6q6g\" (UID: \"03e08dff-7d3e-41a5-a992-6c9c7567fe97\") " pod="openstack/keystone-db-sync-w6q6g" Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.320770 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrd7d\" (UniqueName: \"kubernetes.io/projected/03e08dff-7d3e-41a5-a992-6c9c7567fe97-kube-api-access-nrd7d\") pod \"keystone-db-sync-w6q6g\" (UID: \"03e08dff-7d3e-41a5-a992-6c9c7567fe97\") " pod="openstack/keystone-db-sync-w6q6g" Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.411754 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-w6q6g" Nov 21 11:10:14 crc kubenswrapper[4972]: I1121 11:10:14.658265 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-w6q6g"] Nov 21 11:10:15 crc kubenswrapper[4972]: I1121 11:10:15.349219 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-w6q6g" event={"ID":"03e08dff-7d3e-41a5-a992-6c9c7567fe97","Type":"ContainerStarted","Data":"d436046ca3e17d6c7a535773e01b32751a938dfc4ea28ae8856bfbf683fb6e98"} Nov 21 11:10:15 crc kubenswrapper[4972]: I1121 11:10:15.349672 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-w6q6g" event={"ID":"03e08dff-7d3e-41a5-a992-6c9c7567fe97","Type":"ContainerStarted","Data":"4cedc65e87b92f25257dfaa61dfb494ee905a08a4361a0966c991c81f5add814"} Nov 21 11:10:15 crc kubenswrapper[4972]: I1121 11:10:15.385390 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-w6q6g" podStartSLOduration=1.385366474 podStartE2EDuration="1.385366474s" podCreationTimestamp="2025-11-21 11:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:10:15.3795708 +0000 UTC m=+5360.488713308" watchObservedRunningTime="2025-11-21 11:10:15.385366474 +0000 UTC m=+5360.494508982" Nov 21 11:10:17 crc kubenswrapper[4972]: I1121 11:10:17.370873 4972 generic.go:334] "Generic (PLEG): container finished" podID="03e08dff-7d3e-41a5-a992-6c9c7567fe97" containerID="d436046ca3e17d6c7a535773e01b32751a938dfc4ea28ae8856bfbf683fb6e98" exitCode=0 Nov 21 11:10:17 crc kubenswrapper[4972]: I1121 11:10:17.370944 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-w6q6g" event={"ID":"03e08dff-7d3e-41a5-a992-6c9c7567fe97","Type":"ContainerDied","Data":"d436046ca3e17d6c7a535773e01b32751a938dfc4ea28ae8856bfbf683fb6e98"} Nov 21 11:10:18 crc kubenswrapper[4972]: I1121 11:10:18.773334 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-w6q6g" Nov 21 11:10:18 crc kubenswrapper[4972]: I1121 11:10:18.881005 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrd7d\" (UniqueName: \"kubernetes.io/projected/03e08dff-7d3e-41a5-a992-6c9c7567fe97-kube-api-access-nrd7d\") pod \"03e08dff-7d3e-41a5-a992-6c9c7567fe97\" (UID: \"03e08dff-7d3e-41a5-a992-6c9c7567fe97\") " Nov 21 11:10:18 crc kubenswrapper[4972]: I1121 11:10:18.881127 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e08dff-7d3e-41a5-a992-6c9c7567fe97-combined-ca-bundle\") pod \"03e08dff-7d3e-41a5-a992-6c9c7567fe97\" (UID: \"03e08dff-7d3e-41a5-a992-6c9c7567fe97\") " Nov 21 11:10:18 crc kubenswrapper[4972]: I1121 11:10:18.881170 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03e08dff-7d3e-41a5-a992-6c9c7567fe97-config-data\") pod \"03e08dff-7d3e-41a5-a992-6c9c7567fe97\" (UID: \"03e08dff-7d3e-41a5-a992-6c9c7567fe97\") " Nov 21 11:10:18 crc kubenswrapper[4972]: I1121 11:10:18.888349 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03e08dff-7d3e-41a5-a992-6c9c7567fe97-kube-api-access-nrd7d" (OuterVolumeSpecName: "kube-api-access-nrd7d") pod "03e08dff-7d3e-41a5-a992-6c9c7567fe97" (UID: "03e08dff-7d3e-41a5-a992-6c9c7567fe97"). InnerVolumeSpecName "kube-api-access-nrd7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:10:18 crc kubenswrapper[4972]: I1121 11:10:18.914965 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03e08dff-7d3e-41a5-a992-6c9c7567fe97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03e08dff-7d3e-41a5-a992-6c9c7567fe97" (UID: "03e08dff-7d3e-41a5-a992-6c9c7567fe97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:10:18 crc kubenswrapper[4972]: I1121 11:10:18.943922 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03e08dff-7d3e-41a5-a992-6c9c7567fe97-config-data" (OuterVolumeSpecName: "config-data") pod "03e08dff-7d3e-41a5-a992-6c9c7567fe97" (UID: "03e08dff-7d3e-41a5-a992-6c9c7567fe97"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:10:18 crc kubenswrapper[4972]: I1121 11:10:18.984080 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03e08dff-7d3e-41a5-a992-6c9c7567fe97-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:18 crc kubenswrapper[4972]: I1121 11:10:18.984107 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrd7d\" (UniqueName: \"kubernetes.io/projected/03e08dff-7d3e-41a5-a992-6c9c7567fe97-kube-api-access-nrd7d\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:18 crc kubenswrapper[4972]: I1121 11:10:18.984119 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e08dff-7d3e-41a5-a992-6c9c7567fe97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:19 crc kubenswrapper[4972]: I1121 11:10:19.400254 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-w6q6g" event={"ID":"03e08dff-7d3e-41a5-a992-6c9c7567fe97","Type":"ContainerDied","Data":"4cedc65e87b92f25257dfaa61dfb494ee905a08a4361a0966c991c81f5add814"} Nov 21 11:10:19 crc kubenswrapper[4972]: I1121 11:10:19.400554 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cedc65e87b92f25257dfaa61dfb494ee905a08a4361a0966c991c81f5add814" Nov 21 11:10:19 crc kubenswrapper[4972]: I1121 11:10:19.400331 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-w6q6g" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.030963 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7dc647c5f5-mdsjj"] Nov 21 11:10:20 crc kubenswrapper[4972]: E1121 11:10:20.031412 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e08dff-7d3e-41a5-a992-6c9c7567fe97" containerName="keystone-db-sync" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.031435 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e08dff-7d3e-41a5-a992-6c9c7567fe97" containerName="keystone-db-sync" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.031685 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="03e08dff-7d3e-41a5-a992-6c9c7567fe97" containerName="keystone-db-sync" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.034143 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.055419 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dc647c5f5-mdsjj"] Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.079640 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-5brdk"] Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.080918 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.084348 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.084562 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.084861 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.085221 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vggzf" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.086345 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.095856 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5brdk"] Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.105093 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-dns-svc\") pod \"dnsmasq-dns-7dc647c5f5-mdsjj\" (UID: \"6f234d50-67b2-4d5b-a696-cf60362a29b8\") " pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.105148 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-fernet-keys\") pod \"keystone-bootstrap-5brdk\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.105253 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgq7h\" (UniqueName: \"kubernetes.io/projected/0e1a86ae-a392-4e3d-b81f-1033834025ff-kube-api-access-lgq7h\") pod \"keystone-bootstrap-5brdk\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.105284 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-ovsdbserver-nb\") pod \"dnsmasq-dns-7dc647c5f5-mdsjj\" (UID: \"6f234d50-67b2-4d5b-a696-cf60362a29b8\") " pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.105312 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-scripts\") pod \"keystone-bootstrap-5brdk\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.105383 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-credential-keys\") pod \"keystone-bootstrap-5brdk\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.105409 4972 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-config\") pod \"dnsmasq-dns-7dc647c5f5-mdsjj\" (UID: \"6f234d50-67b2-4d5b-a696-cf60362a29b8\") " pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.105438 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-config-data\") pod \"keystone-bootstrap-5brdk\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.105467 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-combined-ca-bundle\") pod \"keystone-bootstrap-5brdk\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.105515 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7594x\" (UniqueName: \"kubernetes.io/projected/6f234d50-67b2-4d5b-a696-cf60362a29b8-kube-api-access-7594x\") pod \"dnsmasq-dns-7dc647c5f5-mdsjj\" (UID: \"6f234d50-67b2-4d5b-a696-cf60362a29b8\") " pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.105572 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-ovsdbserver-sb\") pod \"dnsmasq-dns-7dc647c5f5-mdsjj\" (UID: \"6f234d50-67b2-4d5b-a696-cf60362a29b8\") " pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.207427 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-config-data\") pod \"keystone-bootstrap-5brdk\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.207493 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-combined-ca-bundle\") pod \"keystone-bootstrap-5brdk\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.207549 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7594x\" (UniqueName: \"kubernetes.io/projected/6f234d50-67b2-4d5b-a696-cf60362a29b8-kube-api-access-7594x\") pod \"dnsmasq-dns-7dc647c5f5-mdsjj\" (UID: \"6f234d50-67b2-4d5b-a696-cf60362a29b8\") " pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.207595 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-ovsdbserver-sb\") pod \"dnsmasq-dns-7dc647c5f5-mdsjj\" (UID: \"6f234d50-67b2-4d5b-a696-cf60362a29b8\") " pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.207638 4972 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-dns-svc\") pod \"dnsmasq-dns-7dc647c5f5-mdsjj\" (UID: \"6f234d50-67b2-4d5b-a696-cf60362a29b8\") " pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.207661 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-fernet-keys\") pod \"keystone-bootstrap-5brdk\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.207707 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgq7h\" (UniqueName: \"kubernetes.io/projected/0e1a86ae-a392-4e3d-b81f-1033834025ff-kube-api-access-lgq7h\") pod \"keystone-bootstrap-5brdk\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.207732 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-ovsdbserver-nb\") pod \"dnsmasq-dns-7dc647c5f5-mdsjj\" (UID: \"6f234d50-67b2-4d5b-a696-cf60362a29b8\") " pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.207759 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-scripts\") pod \"keystone-bootstrap-5brdk\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.207805 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-credential-keys\") pod \"keystone-bootstrap-5brdk\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.207898 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-config\") pod \"dnsmasq-dns-7dc647c5f5-mdsjj\" (UID: \"6f234d50-67b2-4d5b-a696-cf60362a29b8\") " pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.208811 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-ovsdbserver-sb\") pod \"dnsmasq-dns-7dc647c5f5-mdsjj\" (UID: \"6f234d50-67b2-4d5b-a696-cf60362a29b8\") " pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.208923 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-config\") pod \"dnsmasq-dns-7dc647c5f5-mdsjj\" (UID: \"6f234d50-67b2-4d5b-a696-cf60362a29b8\") " pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.209074 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-dns-svc\") pod \"dnsmasq-dns-7dc647c5f5-mdsjj\" (UID: \"6f234d50-67b2-4d5b-a696-cf60362a29b8\") " pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.209149 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-ovsdbserver-nb\") pod \"dnsmasq-dns-7dc647c5f5-mdsjj\" (UID: \"6f234d50-67b2-4d5b-a696-cf60362a29b8\") " pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.212480 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-scripts\") pod \"keystone-bootstrap-5brdk\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.213454 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-config-data\") pod \"keystone-bootstrap-5brdk\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.223347 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-credential-keys\") pod \"keystone-bootstrap-5brdk\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.225579 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-combined-ca-bundle\") pod \"keystone-bootstrap-5brdk\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.225792 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-fernet-keys\") pod \"keystone-bootstrap-5brdk\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.226114 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgq7h\" (UniqueName: \"kubernetes.io/projected/0e1a86ae-a392-4e3d-b81f-1033834025ff-kube-api-access-lgq7h\") pod \"keystone-bootstrap-5brdk\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.226383 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7594x\" (UniqueName: \"kubernetes.io/projected/6f234d50-67b2-4d5b-a696-cf60362a29b8-kube-api-access-7594x\") pod \"dnsmasq-dns-7dc647c5f5-mdsjj\" (UID: \"6f234d50-67b2-4d5b-a696-cf60362a29b8\") " pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.360824 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.404873 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.850698 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dc647c5f5-mdsjj"] Nov 21 11:10:20 crc kubenswrapper[4972]: W1121 11:10:20.861649 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f234d50_67b2_4d5b_a696_cf60362a29b8.slice/crio-3ec3e28573349f9d8e98b338b1b93248aeddeef60ca58305d34dbb1eaa6bb926 WatchSource:0}: Error finding container 3ec3e28573349f9d8e98b338b1b93248aeddeef60ca58305d34dbb1eaa6bb926: Status 404 returned error can't find the container with id 3ec3e28573349f9d8e98b338b1b93248aeddeef60ca58305d34dbb1eaa6bb926 Nov 21 11:10:20 crc kubenswrapper[4972]: I1121 11:10:20.927992 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5brdk"] Nov 21 11:10:20 crc kubenswrapper[4972]: W1121 11:10:20.931107 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e1a86ae_a392_4e3d_b81f_1033834025ff.slice/crio-4f94ff436d51f6603906f577303adfd3c27c93667cf3141570006fa534918e41 WatchSource:0}: Error finding container 4f94ff436d51f6603906f577303adfd3c27c93667cf3141570006fa534918e41: Status 404 returned error can't find the container with id 4f94ff436d51f6603906f577303adfd3c27c93667cf3141570006fa534918e41 Nov 21 11:10:21 crc kubenswrapper[4972]: I1121 11:10:21.422006 4972 generic.go:334] "Generic (PLEG): container finished" podID="6f234d50-67b2-4d5b-a696-cf60362a29b8" containerID="09729acdbda9532e7e62e120581b66c179c5a4b3c9064c0dff047bb0b9005f4d" exitCode=0 Nov 21 11:10:21 crc kubenswrapper[4972]: I1121 11:10:21.422107 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" event={"ID":"6f234d50-67b2-4d5b-a696-cf60362a29b8","Type":"ContainerDied","Data":"09729acdbda9532e7e62e120581b66c179c5a4b3c9064c0dff047bb0b9005f4d"} Nov 21 11:10:21 crc kubenswrapper[4972]: I1121 11:10:21.426347 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" event={"ID":"6f234d50-67b2-4d5b-a696-cf60362a29b8","Type":"ContainerStarted","Data":"3ec3e28573349f9d8e98b338b1b93248aeddeef60ca58305d34dbb1eaa6bb926"} Nov 21 11:10:21 crc kubenswrapper[4972]: I1121 11:10:21.427939 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5brdk" event={"ID":"0e1a86ae-a392-4e3d-b81f-1033834025ff","Type":"ContainerStarted","Data":"91f8d8bf62a1c54e267466696c3fd9a6378ff36ee1e4a0e8e53c5fe3bcc3b4ad"} Nov 21 11:10:21 crc kubenswrapper[4972]: I1121 11:10:21.427984 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5brdk" event={"ID":"0e1a86ae-a392-4e3d-b81f-1033834025ff","Type":"ContainerStarted","Data":"4f94ff436d51f6603906f577303adfd3c27c93667cf3141570006fa534918e41"} Nov 21 11:10:21 crc kubenswrapper[4972]: I1121 11:10:21.477023 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-5brdk" podStartSLOduration=1.477006554 podStartE2EDuration="1.477006554s" podCreationTimestamp="2025-11-21 11:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:10:21.470212304 +0000 UTC m=+5366.579354802" watchObservedRunningTime="2025-11-21 11:10:21.477006554 +0000 UTC m=+5366.586149052" Nov 21 
11:10:22 crc kubenswrapper[4972]: I1121 11:10:22.439379 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" event={"ID":"6f234d50-67b2-4d5b-a696-cf60362a29b8","Type":"ContainerStarted","Data":"05d7d374f83536c7936b4c830d717d30bc7576e982513a33ea4899fc15f5fd3d"} Nov 21 11:10:22 crc kubenswrapper[4972]: I1121 11:10:22.463879 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" podStartSLOduration=2.463856727 podStartE2EDuration="2.463856727s" podCreationTimestamp="2025-11-21 11:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:10:22.458705251 +0000 UTC m=+5367.567847769" watchObservedRunningTime="2025-11-21 11:10:22.463856727 +0000 UTC m=+5367.572999265" Nov 21 11:10:23 crc kubenswrapper[4972]: I1121 11:10:23.450191 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" Nov 21 11:10:24 crc kubenswrapper[4972]: I1121 11:10:24.462845 4972 generic.go:334] "Generic (PLEG): container finished" podID="0e1a86ae-a392-4e3d-b81f-1033834025ff" containerID="91f8d8bf62a1c54e267466696c3fd9a6378ff36ee1e4a0e8e53c5fe3bcc3b4ad" exitCode=0 Nov 21 11:10:24 crc kubenswrapper[4972]: I1121 11:10:24.462977 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5brdk" event={"ID":"0e1a86ae-a392-4e3d-b81f-1033834025ff","Type":"ContainerDied","Data":"91f8d8bf62a1c54e267466696c3fd9a6378ff36ee1e4a0e8e53c5fe3bcc3b4ad"} Nov 21 11:10:25 crc kubenswrapper[4972]: I1121 11:10:25.869423 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.001929 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-combined-ca-bundle\") pod \"0e1a86ae-a392-4e3d-b81f-1033834025ff\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.002159 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-scripts\") pod \"0e1a86ae-a392-4e3d-b81f-1033834025ff\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.002222 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-config-data\") pod \"0e1a86ae-a392-4e3d-b81f-1033834025ff\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.002279 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgq7h\" (UniqueName: \"kubernetes.io/projected/0e1a86ae-a392-4e3d-b81f-1033834025ff-kube-api-access-lgq7h\") pod \"0e1a86ae-a392-4e3d-b81f-1033834025ff\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.002335 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-credential-keys\") pod \"0e1a86ae-a392-4e3d-b81f-1033834025ff\" (UID: 
\"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.002371 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-fernet-keys\") pod \"0e1a86ae-a392-4e3d-b81f-1033834025ff\" (UID: \"0e1a86ae-a392-4e3d-b81f-1033834025ff\") " Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.008010 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e1a86ae-a392-4e3d-b81f-1033834025ff-kube-api-access-lgq7h" (OuterVolumeSpecName: "kube-api-access-lgq7h") pod "0e1a86ae-a392-4e3d-b81f-1033834025ff" (UID: "0e1a86ae-a392-4e3d-b81f-1033834025ff"). InnerVolumeSpecName "kube-api-access-lgq7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.008182 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-scripts" (OuterVolumeSpecName: "scripts") pod "0e1a86ae-a392-4e3d-b81f-1033834025ff" (UID: "0e1a86ae-a392-4e3d-b81f-1033834025ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.009312 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0e1a86ae-a392-4e3d-b81f-1033834025ff" (UID: "0e1a86ae-a392-4e3d-b81f-1033834025ff"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.010739 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0e1a86ae-a392-4e3d-b81f-1033834025ff" (UID: "0e1a86ae-a392-4e3d-b81f-1033834025ff"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.045060 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-config-data" (OuterVolumeSpecName: "config-data") pod "0e1a86ae-a392-4e3d-b81f-1033834025ff" (UID: "0e1a86ae-a392-4e3d-b81f-1033834025ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.047515 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e1a86ae-a392-4e3d-b81f-1033834025ff" (UID: "0e1a86ae-a392-4e3d-b81f-1033834025ff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.104432 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.104465 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.104475 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgq7h\" (UniqueName: \"kubernetes.io/projected/0e1a86ae-a392-4e3d-b81f-1033834025ff-kube-api-access-lgq7h\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.104485 4972 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.104496 4972 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.104505 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1a86ae-a392-4e3d-b81f-1033834025ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.271933 4972 scope.go:117] "RemoveContainer" containerID="effeae86efcef0ae4e1df3cef6bbca9cca7451ef9d763437136ab366664dc128" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.308519 4972 scope.go:117] "RemoveContainer" containerID="fc9944ed424b2c2fbfe33da5d710ce87d487d51306137c82df35a82b5f33a105" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.364157 4972 scope.go:117] "RemoveContainer" containerID="38c98a3c28fc550a5a24a58b6c64818b67e901d3933fb2cf52788d3665e7983f" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.392500 4972 scope.go:117] "RemoveContainer" containerID="f4b83b42392410e2ac0a02ef58dc750a91743f9dcad538f54ed819f21b8f1000" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.503317 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5brdk" event={"ID":"0e1a86ae-a392-4e3d-b81f-1033834025ff","Type":"ContainerDied","Data":"4f94ff436d51f6603906f577303adfd3c27c93667cf3141570006fa534918e41"} Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.503404 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f94ff436d51f6603906f577303adfd3c27c93667cf3141570006fa534918e41" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.503601 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5brdk" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.599548 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-5brdk"] Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.617134 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-5brdk"] Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.665438 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-l6gbg"] Nov 21 11:10:26 crc kubenswrapper[4972]: E1121 11:10:26.665979 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e1a86ae-a392-4e3d-b81f-1033834025ff" containerName="keystone-bootstrap" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.666013 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1a86ae-a392-4e3d-b81f-1033834025ff" containerName="keystone-bootstrap" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.666390 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e1a86ae-a392-4e3d-b81f-1033834025ff" containerName="keystone-bootstrap" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.667441 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.670610 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.670950 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.671157 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vggzf" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.671770 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.672957 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.686151 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l6gbg"] Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.729340 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-combined-ca-bundle\") pod \"keystone-bootstrap-l6gbg\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.729678 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-scripts\") pod \"keystone-bootstrap-l6gbg\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.729785 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-fernet-keys\") pod \"keystone-bootstrap-l6gbg\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.729895 4972 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mlvk\" (UniqueName: \"kubernetes.io/projected/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-kube-api-access-2mlvk\") pod \"keystone-bootstrap-l6gbg\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.729990 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-config-data\") pod \"keystone-bootstrap-l6gbg\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.730139 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-credential-keys\") pod \"keystone-bootstrap-l6gbg\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.832239 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-scripts\") pod \"keystone-bootstrap-l6gbg\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.832316 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-fernet-keys\") pod \"keystone-bootstrap-l6gbg\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.832359 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mlvk\" (UniqueName: \"kubernetes.io/projected/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-kube-api-access-2mlvk\") pod \"keystone-bootstrap-l6gbg\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.832405 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-config-data\") pod \"keystone-bootstrap-l6gbg\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.832622 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-credential-keys\") pod \"keystone-bootstrap-l6gbg\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.832708 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-combined-ca-bundle\") pod \"keystone-bootstrap-l6gbg\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.838784 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-config-data\") pod \"keystone-bootstrap-l6gbg\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.840426 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-credential-keys\") pod \"keystone-bootstrap-l6gbg\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.840951 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-fernet-keys\") pod \"keystone-bootstrap-l6gbg\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.841880 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-combined-ca-bundle\") pod \"keystone-bootstrap-l6gbg\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.842264 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-scripts\") pod \"keystone-bootstrap-l6gbg\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:26 crc kubenswrapper[4972]: I1121 11:10:26.862252 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mlvk\" (UniqueName: \"kubernetes.io/projected/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-kube-api-access-2mlvk\") pod \"keystone-bootstrap-l6gbg\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:27 crc kubenswrapper[4972]: I1121 11:10:27.032531 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:27 crc kubenswrapper[4972]: I1121 11:10:27.780424 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e1a86ae-a392-4e3d-b81f-1033834025ff" path="/var/lib/kubelet/pods/0e1a86ae-a392-4e3d-b81f-1033834025ff/volumes" Nov 21 11:10:27 crc kubenswrapper[4972]: I1121 11:10:27.782042 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l6gbg"] Nov 21 11:10:27 crc kubenswrapper[4972]: W1121 11:10:27.788182 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c15a3b4_b9fa_4e8f_ba73_ac9c4732c291.slice/crio-3a872a58a1ca302546c558f47b06a5d40b43ef4873b3e161b32b732cfccb5b41 WatchSource:0}: Error finding container 3a872a58a1ca302546c558f47b06a5d40b43ef4873b3e161b32b732cfccb5b41: Status 404 returned error can't find the container with id 3a872a58a1ca302546c558f47b06a5d40b43ef4873b3e161b32b732cfccb5b41 Nov 21 11:10:28 crc kubenswrapper[4972]: I1121 11:10:28.551426 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l6gbg" event={"ID":"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291","Type":"ContainerStarted","Data":"4b19c77648130a930f2f596a4ec1e966bf8705b6f9d05e5c94a1c733eb4d3892"} Nov 21 11:10:28 crc kubenswrapper[4972]: I1121 11:10:28.552151 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l6gbg" event={"ID":"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291","Type":"ContainerStarted","Data":"3a872a58a1ca302546c558f47b06a5d40b43ef4873b3e161b32b732cfccb5b41"} Nov 21 11:10:28 crc kubenswrapper[4972]: I1121 11:10:28.596567 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-l6gbg" podStartSLOduration=2.5965352040000003 podStartE2EDuration="2.596535204s" podCreationTimestamp="2025-11-21 11:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:10:28.584929456 +0000 UTC m=+5373.694071974" watchObservedRunningTime="2025-11-21 11:10:28.596535204 +0000 UTC m=+5373.705677742" Nov 21 11:10:30 crc kubenswrapper[4972]: I1121 11:10:30.362074 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" Nov 21 11:10:30 crc kubenswrapper[4972]: I1121 11:10:30.464333 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d6c89bb59-tbklb"] Nov 21 11:10:30 crc kubenswrapper[4972]: I1121 11:10:30.466027 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" podUID="2b735382-51ce-491a-aacb-df2d626449d6" containerName="dnsmasq-dns" containerID="cri-o://7a89df33e27294c10cfde4b452785b9c16468a5ef8b48f96a450be5540e96a88" gracePeriod=10 Nov 21 11:10:30 crc kubenswrapper[4972]: I1121 11:10:30.585274 4972 generic.go:334] "Generic (PLEG): container finished" podID="3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291" containerID="4b19c77648130a930f2f596a4ec1e966bf8705b6f9d05e5c94a1c733eb4d3892" exitCode=0 Nov 21 11:10:30 crc kubenswrapper[4972]: I1121 11:10:30.585362 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l6gbg" event={"ID":"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291","Type":"ContainerDied","Data":"4b19c77648130a930f2f596a4ec1e966bf8705b6f9d05e5c94a1c733eb4d3892"} Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.002716 4972 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.032322 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-dns-svc\") pod \"2b735382-51ce-491a-aacb-df2d626449d6\" (UID: \"2b735382-51ce-491a-aacb-df2d626449d6\") " Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.032402 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfgvr\" (UniqueName: \"kubernetes.io/projected/2b735382-51ce-491a-aacb-df2d626449d6-kube-api-access-rfgvr\") pod \"2b735382-51ce-491a-aacb-df2d626449d6\" (UID: \"2b735382-51ce-491a-aacb-df2d626449d6\") " Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.032422 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-ovsdbserver-sb\") pod \"2b735382-51ce-491a-aacb-df2d626449d6\" (UID: \"2b735382-51ce-491a-aacb-df2d626449d6\") " Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.032471 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-config\") pod \"2b735382-51ce-491a-aacb-df2d626449d6\" (UID: \"2b735382-51ce-491a-aacb-df2d626449d6\") " Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.032503 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-ovsdbserver-nb\") pod \"2b735382-51ce-491a-aacb-df2d626449d6\" (UID: \"2b735382-51ce-491a-aacb-df2d626449d6\") " Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.039454 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b735382-51ce-491a-aacb-df2d626449d6-kube-api-access-rfgvr" (OuterVolumeSpecName: "kube-api-access-rfgvr") pod "2b735382-51ce-491a-aacb-df2d626449d6" (UID: "2b735382-51ce-491a-aacb-df2d626449d6"). InnerVolumeSpecName "kube-api-access-rfgvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.073516 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2b735382-51ce-491a-aacb-df2d626449d6" (UID: "2b735382-51ce-491a-aacb-df2d626449d6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.078823 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-config" (OuterVolumeSpecName: "config") pod "2b735382-51ce-491a-aacb-df2d626449d6" (UID: "2b735382-51ce-491a-aacb-df2d626449d6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.082531 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2b735382-51ce-491a-aacb-df2d626449d6" (UID: "2b735382-51ce-491a-aacb-df2d626449d6"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.083046 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2b735382-51ce-491a-aacb-df2d626449d6" (UID: "2b735382-51ce-491a-aacb-df2d626449d6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.134701 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfgvr\" (UniqueName: \"kubernetes.io/projected/2b735382-51ce-491a-aacb-df2d626449d6-kube-api-access-rfgvr\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.134742 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.134756 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-config\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.134767 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.134779 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b735382-51ce-491a-aacb-df2d626449d6-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.605501 4972 generic.go:334] "Generic (PLEG): container finished" podID="2b735382-51ce-491a-aacb-df2d626449d6" containerID="7a89df33e27294c10cfde4b452785b9c16468a5ef8b48f96a450be5540e96a88" exitCode=0 Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.606281 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.607370 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" event={"ID":"2b735382-51ce-491a-aacb-df2d626449d6","Type":"ContainerDied","Data":"7a89df33e27294c10cfde4b452785b9c16468a5ef8b48f96a450be5540e96a88"} Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.607438 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d6c89bb59-tbklb" event={"ID":"2b735382-51ce-491a-aacb-df2d626449d6","Type":"ContainerDied","Data":"ae78d5cfc3a4dcbf7a5c5a61abc572d2db1d497d77e45939419003291ce70614"} Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.607471 4972 scope.go:117] "RemoveContainer" containerID="7a89df33e27294c10cfde4b452785b9c16468a5ef8b48f96a450be5540e96a88" Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.675309 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d6c89bb59-tbklb"] Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.678926 4972 scope.go:117] "RemoveContainer" containerID="2663bd11d9f1a608e9992f458ba3f706eb10f51c10193e9cabbf1cf1f6a5d65b" Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.682537 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d6c89bb59-tbklb"] Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.709432 4972 scope.go:117] "RemoveContainer" containerID="7a89df33e27294c10cfde4b452785b9c16468a5ef8b48f96a450be5540e96a88" Nov 21 11:10:31 crc kubenswrapper[4972]: E1121 11:10:31.711203 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a89df33e27294c10cfde4b452785b9c16468a5ef8b48f96a450be5540e96a88\": container with ID starting with 7a89df33e27294c10cfde4b452785b9c16468a5ef8b48f96a450be5540e96a88 not found: ID does not exist" containerID="7a89df33e27294c10cfde4b452785b9c16468a5ef8b48f96a450be5540e96a88" Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.711250 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a89df33e27294c10cfde4b452785b9c16468a5ef8b48f96a450be5540e96a88"} err="failed to get container status \"7a89df33e27294c10cfde4b452785b9c16468a5ef8b48f96a450be5540e96a88\": rpc error: code = NotFound desc = could not find container \"7a89df33e27294c10cfde4b452785b9c16468a5ef8b48f96a450be5540e96a88\": container with ID starting with 7a89df33e27294c10cfde4b452785b9c16468a5ef8b48f96a450be5540e96a88 not found: ID does not exist" Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.711283 4972 scope.go:117] "RemoveContainer" containerID="2663bd11d9f1a608e9992f458ba3f706eb10f51c10193e9cabbf1cf1f6a5d65b" Nov 21 11:10:31 crc kubenswrapper[4972]: E1121 11:10:31.711955 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2663bd11d9f1a608e9992f458ba3f706eb10f51c10193e9cabbf1cf1f6a5d65b\": container with ID starting with 2663bd11d9f1a608e9992f458ba3f706eb10f51c10193e9cabbf1cf1f6a5d65b not found: ID does not exist" containerID="2663bd11d9f1a608e9992f458ba3f706eb10f51c10193e9cabbf1cf1f6a5d65b" Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.711979 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2663bd11d9f1a608e9992f458ba3f706eb10f51c10193e9cabbf1cf1f6a5d65b"} err="failed to get container status 
\"2663bd11d9f1a608e9992f458ba3f706eb10f51c10193e9cabbf1cf1f6a5d65b\": rpc error: code = NotFound desc = could not find container \"2663bd11d9f1a608e9992f458ba3f706eb10f51c10193e9cabbf1cf1f6a5d65b\": container with ID starting with 2663bd11d9f1a608e9992f458ba3f706eb10f51c10193e9cabbf1cf1f6a5d65b not found: ID does not exist" Nov 21 11:10:31 crc kubenswrapper[4972]: I1121 11:10:31.798061 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b735382-51ce-491a-aacb-df2d626449d6" path="/var/lib/kubelet/pods/2b735382-51ce-491a-aacb-df2d626449d6/volumes" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.010381 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.189311 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-scripts\") pod \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.189481 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-combined-ca-bundle\") pod \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.189552 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-config-data\") pod \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.189655 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-credential-keys\") pod \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.189719 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-fernet-keys\") pod \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.189810 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mlvk\" (UniqueName: \"kubernetes.io/projected/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-kube-api-access-2mlvk\") pod \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\" (UID: \"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291\") " Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.198291 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291" (UID: "3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.198382 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291" (UID: "3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.198404 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-kube-api-access-2mlvk" (OuterVolumeSpecName: "kube-api-access-2mlvk") pod "3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291" (UID: "3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291"). InnerVolumeSpecName "kube-api-access-2mlvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.199485 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-scripts" (OuterVolumeSpecName: "scripts") pod "3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291" (UID: "3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.231212 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-config-data" (OuterVolumeSpecName: "config-data") pod "3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291" (UID: "3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.232261 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291" (UID: "3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.293983 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.294029 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.294049 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.294065 4972 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.294082 4972 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.294098 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mlvk\" (UniqueName: \"kubernetes.io/projected/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291-kube-api-access-2mlvk\") on node \"crc\" DevicePath \"\"" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.624227 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l6gbg" event={"ID":"3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291","Type":"ContainerDied","Data":"3a872a58a1ca302546c558f47b06a5d40b43ef4873b3e161b32b732cfccb5b41"} Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.624302 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a872a58a1ca302546c558f47b06a5d40b43ef4873b3e161b32b732cfccb5b41" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.624360 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l6gbg" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.719009 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-745ff65c64-zdzmt"] Nov 21 11:10:32 crc kubenswrapper[4972]: E1121 11:10:32.719406 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b735382-51ce-491a-aacb-df2d626449d6" containerName="init" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.719430 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b735382-51ce-491a-aacb-df2d626449d6" containerName="init" Nov 21 11:10:32 crc kubenswrapper[4972]: E1121 11:10:32.719447 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291" containerName="keystone-bootstrap" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.719455 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291" containerName="keystone-bootstrap" Nov 21 11:10:32 crc kubenswrapper[4972]: E1121 11:10:32.719492 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b735382-51ce-491a-aacb-df2d626449d6" containerName="dnsmasq-dns" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.719501 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b735382-51ce-491a-aacb-df2d626449d6" containerName="dnsmasq-dns" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.719713 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291" containerName="keystone-bootstrap" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.719728 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b735382-51ce-491a-aacb-df2d626449d6" containerName="dnsmasq-dns" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.720390 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.722760 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.722960 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.723104 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.724618 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vggzf" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.737345 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-745ff65c64-zdzmt"] Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.808289 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/100b4921-687c-4d6e-97a7-e191b0b8c7d8-config-data\") pod \"keystone-745ff65c64-zdzmt\" (UID: \"100b4921-687c-4d6e-97a7-e191b0b8c7d8\") " pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.808399 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/100b4921-687c-4d6e-97a7-e191b0b8c7d8-combined-ca-bundle\") pod \"keystone-745ff65c64-zdzmt\" (UID: \"100b4921-687c-4d6e-97a7-e191b0b8c7d8\") " pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.808481 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/100b4921-687c-4d6e-97a7-e191b0b8c7d8-scripts\") pod \"keystone-745ff65c64-zdzmt\" (UID: \"100b4921-687c-4d6e-97a7-e191b0b8c7d8\") " pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.809193 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-686t7\" (UniqueName: \"kubernetes.io/projected/100b4921-687c-4d6e-97a7-e191b0b8c7d8-kube-api-access-686t7\") pod \"keystone-745ff65c64-zdzmt\" (UID: \"100b4921-687c-4d6e-97a7-e191b0b8c7d8\") " pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.809355 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/100b4921-687c-4d6e-97a7-e191b0b8c7d8-credential-keys\") pod \"keystone-745ff65c64-zdzmt\" (UID: \"100b4921-687c-4d6e-97a7-e191b0b8c7d8\") " pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.809512 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/100b4921-687c-4d6e-97a7-e191b0b8c7d8-fernet-keys\") pod \"keystone-745ff65c64-zdzmt\" (UID: \"100b4921-687c-4d6e-97a7-e191b0b8c7d8\") " pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.911145 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-686t7\" (UniqueName: \"kubernetes.io/projected/100b4921-687c-4d6e-97a7-e191b0b8c7d8-kube-api-access-686t7\") pod 
\"keystone-745ff65c64-zdzmt\" (UID: \"100b4921-687c-4d6e-97a7-e191b0b8c7d8\") " pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.911197 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/100b4921-687c-4d6e-97a7-e191b0b8c7d8-credential-keys\") pod \"keystone-745ff65c64-zdzmt\" (UID: \"100b4921-687c-4d6e-97a7-e191b0b8c7d8\") " pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.911234 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/100b4921-687c-4d6e-97a7-e191b0b8c7d8-fernet-keys\") pod \"keystone-745ff65c64-zdzmt\" (UID: \"100b4921-687c-4d6e-97a7-e191b0b8c7d8\") " pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.911275 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/100b4921-687c-4d6e-97a7-e191b0b8c7d8-config-data\") pod \"keystone-745ff65c64-zdzmt\" (UID: \"100b4921-687c-4d6e-97a7-e191b0b8c7d8\") " pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.911337 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/100b4921-687c-4d6e-97a7-e191b0b8c7d8-combined-ca-bundle\") pod \"keystone-745ff65c64-zdzmt\" (UID: \"100b4921-687c-4d6e-97a7-e191b0b8c7d8\") " pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.911362 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/100b4921-687c-4d6e-97a7-e191b0b8c7d8-scripts\") pod \"keystone-745ff65c64-zdzmt\" (UID: \"100b4921-687c-4d6e-97a7-e191b0b8c7d8\") " pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.915983 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/100b4921-687c-4d6e-97a7-e191b0b8c7d8-credential-keys\") pod \"keystone-745ff65c64-zdzmt\" (UID: \"100b4921-687c-4d6e-97a7-e191b0b8c7d8\") " pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.916040 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/100b4921-687c-4d6e-97a7-e191b0b8c7d8-scripts\") pod \"keystone-745ff65c64-zdzmt\" (UID: \"100b4921-687c-4d6e-97a7-e191b0b8c7d8\") " pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.917646 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/100b4921-687c-4d6e-97a7-e191b0b8c7d8-combined-ca-bundle\") pod \"keystone-745ff65c64-zdzmt\" (UID: \"100b4921-687c-4d6e-97a7-e191b0b8c7d8\") " pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.917988 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/100b4921-687c-4d6e-97a7-e191b0b8c7d8-fernet-keys\") pod \"keystone-745ff65c64-zdzmt\" (UID: \"100b4921-687c-4d6e-97a7-e191b0b8c7d8\") " pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.920481 4972 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/100b4921-687c-4d6e-97a7-e191b0b8c7d8-config-data\") pod \"keystone-745ff65c64-zdzmt\" (UID: \"100b4921-687c-4d6e-97a7-e191b0b8c7d8\") " pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:10:32 crc kubenswrapper[4972]: I1121 11:10:32.931009 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-686t7\" (UniqueName: \"kubernetes.io/projected/100b4921-687c-4d6e-97a7-e191b0b8c7d8-kube-api-access-686t7\") pod \"keystone-745ff65c64-zdzmt\" (UID: \"100b4921-687c-4d6e-97a7-e191b0b8c7d8\") " pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:10:33 crc kubenswrapper[4972]: I1121 11:10:33.042175 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:10:33 crc kubenswrapper[4972]: I1121 11:10:33.524518 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-745ff65c64-zdzmt"] Nov 21 11:10:33 crc kubenswrapper[4972]: I1121 11:10:33.654484 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-745ff65c64-zdzmt" event={"ID":"100b4921-687c-4d6e-97a7-e191b0b8c7d8","Type":"ContainerStarted","Data":"41dacbbd647c0852ea0a50ffd2de56ca0526429e2e3bc6b7738f9757ede43ae1"} Nov 21 11:10:34 crc kubenswrapper[4972]: I1121 11:10:34.669860 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-745ff65c64-zdzmt" event={"ID":"100b4921-687c-4d6e-97a7-e191b0b8c7d8","Type":"ContainerStarted","Data":"6595193a58aa686d3415439d801d15ebafab159678534bf9248d486318bfc978"} Nov 21 11:10:34 crc kubenswrapper[4972]: I1121 11:10:34.670072 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:10:34 crc kubenswrapper[4972]: I1121 11:10:34.702620 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-745ff65c64-zdzmt" podStartSLOduration=2.7026009970000002 podStartE2EDuration="2.702600997s" podCreationTimestamp="2025-11-21 11:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:10:34.694966145 +0000 UTC m=+5379.804108663" watchObservedRunningTime="2025-11-21 11:10:34.702600997 +0000 UTC m=+5379.811743495" Nov 21 11:11:04 crc kubenswrapper[4972]: I1121 11:11:04.380783 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-745ff65c64-zdzmt" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.638968 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.642234 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.647453 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.647670 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-j2xfp" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.647463 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.654159 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e167940-0b5b-425d-8974-0e209eb06a0c-openstack-config\") pod \"openstackclient\" (UID: \"8e167940-0b5b-425d-8974-0e209eb06a0c\") " pod="openstack/openstackclient" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.654255 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e167940-0b5b-425d-8974-0e209eb06a0c-openstack-config-secret\") pod \"openstackclient\" (UID: \"8e167940-0b5b-425d-8974-0e209eb06a0c\") " pod="openstack/openstackclient" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.654574 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7msh8\" (UniqueName: \"kubernetes.io/projected/8e167940-0b5b-425d-8974-0e209eb06a0c-kube-api-access-7msh8\") pod \"openstackclient\" (UID: \"8e167940-0b5b-425d-8974-0e209eb06a0c\") " pod="openstack/openstackclient" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.661184 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.695998 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 21 11:11:08 crc kubenswrapper[4972]: E1121 11:11:08.696714 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-7msh8 openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="8e167940-0b5b-425d-8974-0e209eb06a0c" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.720892 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.738685 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.739804 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.756252 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.756322 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e167940-0b5b-425d-8974-0e209eb06a0c-openstack-config-secret\") pod \"openstackclient\" (UID: \"8e167940-0b5b-425d-8974-0e209eb06a0c\") " pod="openstack/openstackclient" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.757191 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7msh8\" (UniqueName: \"kubernetes.io/projected/8e167940-0b5b-425d-8974-0e209eb06a0c-kube-api-access-7msh8\") pod \"openstackclient\" (UID: \"8e167940-0b5b-425d-8974-0e209eb06a0c\") " pod="openstack/openstackclient" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.757241 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b41adfd0-57cb-4109-bcff-6594da13bf09-openstack-config-secret\") pod \"openstackclient\" (UID: \"b41adfd0-57cb-4109-bcff-6594da13bf09\") " pod="openstack/openstackclient" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.757358 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frrjm\" (UniqueName: \"kubernetes.io/projected/b41adfd0-57cb-4109-bcff-6594da13bf09-kube-api-access-frrjm\") pod \"openstackclient\" (UID: \"b41adfd0-57cb-4109-bcff-6594da13bf09\") " pod="openstack/openstackclient" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.757417 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b41adfd0-57cb-4109-bcff-6594da13bf09-openstack-config\") pod \"openstackclient\" (UID: \"b41adfd0-57cb-4109-bcff-6594da13bf09\") " pod="openstack/openstackclient" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.757491 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e167940-0b5b-425d-8974-0e209eb06a0c-openstack-config\") pod \"openstackclient\" (UID: \"8e167940-0b5b-425d-8974-0e209eb06a0c\") " pod="openstack/openstackclient" Nov 21 11:11:08 crc kubenswrapper[4972]: E1121 11:11:08.761195 4972 projected.go:194] Error preparing data for projected volume kube-api-access-7msh8 for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (8e167940-0b5b-425d-8974-0e209eb06a0c) does not match the UID in record. The object might have been deleted and then recreated Nov 21 11:11:08 crc kubenswrapper[4972]: E1121 11:11:08.761448 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e167940-0b5b-425d-8974-0e209eb06a0c-kube-api-access-7msh8 podName:8e167940-0b5b-425d-8974-0e209eb06a0c nodeName:}" failed. No retries permitted until 2025-11-21 11:11:09.261413477 +0000 UTC m=+5414.370555985 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7msh8" (UniqueName: "kubernetes.io/projected/8e167940-0b5b-425d-8974-0e209eb06a0c-kube-api-access-7msh8") pod "openstackclient" (UID: "8e167940-0b5b-425d-8974-0e209eb06a0c") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (8e167940-0b5b-425d-8974-0e209eb06a0c) does not match the UID in record. The object might have been deleted and then recreated Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.766141 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e167940-0b5b-425d-8974-0e209eb06a0c-openstack-config\") pod \"openstackclient\" (UID: \"8e167940-0b5b-425d-8974-0e209eb06a0c\") " pod="openstack/openstackclient" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.771229 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e167940-0b5b-425d-8974-0e209eb06a0c-openstack-config-secret\") pod \"openstackclient\" (UID: \"8e167940-0b5b-425d-8974-0e209eb06a0c\") " pod="openstack/openstackclient" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.858445 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b41adfd0-57cb-4109-bcff-6594da13bf09-openstack-config-secret\") pod \"openstackclient\" (UID: \"b41adfd0-57cb-4109-bcff-6594da13bf09\") " pod="openstack/openstackclient" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.858554 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frrjm\" (UniqueName: \"kubernetes.io/projected/b41adfd0-57cb-4109-bcff-6594da13bf09-kube-api-access-frrjm\") pod \"openstackclient\" (UID: \"b41adfd0-57cb-4109-bcff-6594da13bf09\") " pod="openstack/openstackclient" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.858587 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b41adfd0-57cb-4109-bcff-6594da13bf09-openstack-config\") pod \"openstackclient\" (UID: \"b41adfd0-57cb-4109-bcff-6594da13bf09\") " pod="openstack/openstackclient" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.861662 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b41adfd0-57cb-4109-bcff-6594da13bf09-openstack-config-secret\") pod \"openstackclient\" (UID: \"b41adfd0-57cb-4109-bcff-6594da13bf09\") " pod="openstack/openstackclient" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.861698 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b41adfd0-57cb-4109-bcff-6594da13bf09-openstack-config\") pod \"openstackclient\" (UID: \"b41adfd0-57cb-4109-bcff-6594da13bf09\") " pod="openstack/openstackclient" Nov 21 11:11:08 crc kubenswrapper[4972]: I1121 11:11:08.878039 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frrjm\" (UniqueName: \"kubernetes.io/projected/b41adfd0-57cb-4109-bcff-6594da13bf09-kube-api-access-frrjm\") pod \"openstackclient\" (UID: \"b41adfd0-57cb-4109-bcff-6594da13bf09\") " pod="openstack/openstackclient" Nov 21 11:11:09 crc kubenswrapper[4972]: I1121 11:11:09.057906 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 21 11:11:09 crc kubenswrapper[4972]: I1121 11:11:09.061468 4972 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8e167940-0b5b-425d-8974-0e209eb06a0c" podUID="b41adfd0-57cb-4109-bcff-6594da13bf09" Nov 21 11:11:09 crc kubenswrapper[4972]: I1121 11:11:09.064164 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 21 11:11:09 crc kubenswrapper[4972]: I1121 11:11:09.069226 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 21 11:11:09 crc kubenswrapper[4972]: I1121 11:11:09.268075 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e167940-0b5b-425d-8974-0e209eb06a0c-openstack-config\") pod \"8e167940-0b5b-425d-8974-0e209eb06a0c\" (UID: \"8e167940-0b5b-425d-8974-0e209eb06a0c\") " Nov 21 11:11:09 crc kubenswrapper[4972]: I1121 11:11:09.269118 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e167940-0b5b-425d-8974-0e209eb06a0c-openstack-config-secret\") pod \"8e167940-0b5b-425d-8974-0e209eb06a0c\" (UID: \"8e167940-0b5b-425d-8974-0e209eb06a0c\") " Nov 21 11:11:09 crc kubenswrapper[4972]: I1121 11:11:09.269325 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e167940-0b5b-425d-8974-0e209eb06a0c-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "8e167940-0b5b-425d-8974-0e209eb06a0c" (UID: "8e167940-0b5b-425d-8974-0e209eb06a0c"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:11:09 crc kubenswrapper[4972]: I1121 11:11:09.269917 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7msh8\" (UniqueName: \"kubernetes.io/projected/8e167940-0b5b-425d-8974-0e209eb06a0c-kube-api-access-7msh8\") on node \"crc\" DevicePath \"\"" Nov 21 11:11:09 crc kubenswrapper[4972]: I1121 11:11:09.269974 4972 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e167940-0b5b-425d-8974-0e209eb06a0c-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 21 11:11:09 crc kubenswrapper[4972]: I1121 11:11:09.276536 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e167940-0b5b-425d-8974-0e209eb06a0c-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "8e167940-0b5b-425d-8974-0e209eb06a0c" (UID: "8e167940-0b5b-425d-8974-0e209eb06a0c"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:11:09 crc kubenswrapper[4972]: I1121 11:11:09.371094 4972 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e167940-0b5b-425d-8974-0e209eb06a0c-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 21 11:11:09 crc kubenswrapper[4972]: I1121 11:11:09.519382 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 21 11:11:09 crc kubenswrapper[4972]: I1121 11:11:09.776210 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e167940-0b5b-425d-8974-0e209eb06a0c" path="/var/lib/kubelet/pods/8e167940-0b5b-425d-8974-0e209eb06a0c/volumes" Nov 21 11:11:10 crc kubenswrapper[4972]: I1121 11:11:10.072984 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 21 11:11:10 crc kubenswrapper[4972]: I1121 11:11:10.073035 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b41adfd0-57cb-4109-bcff-6594da13bf09","Type":"ContainerStarted","Data":"57e6e52ceef13dc38b963dce1bf562cded2c4580091c1508938b28a1f91e375b"} Nov 21 11:11:10 crc kubenswrapper[4972]: I1121 11:11:10.073101 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b41adfd0-57cb-4109-bcff-6594da13bf09","Type":"ContainerStarted","Data":"4528bc355ec9cde887749c7c818bc8db509e048219864c2c92e4d744b7a9854e"} Nov 21 11:11:10 crc kubenswrapper[4972]: I1121 11:11:10.097777 4972 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8e167940-0b5b-425d-8974-0e209eb06a0c" podUID="b41adfd0-57cb-4109-bcff-6594da13bf09" Nov 21 11:11:10 crc kubenswrapper[4972]: I1121 11:11:10.101084 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.101058152 podStartE2EDuration="2.101058152s" podCreationTimestamp="2025-11-21 11:11:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:11:10.094210821 +0000 UTC m=+5415.203353389" watchObservedRunningTime="2025-11-21 11:11:10.101058152 +0000 UTC m=+5415.210200680" Nov 21 11:11:26 crc kubenswrapper[4972]: I1121 11:11:26.604650 4972 scope.go:117] "RemoveContainer" containerID="5f995e592040a5cfe6bee650f679ce02b698749e5c06feb964e2f83d010b7b51" Nov 21 11:11:26 crc kubenswrapper[4972]: I1121 11:11:26.631207 4972 scope.go:117] "RemoveContainer" containerID="1101eb487b54b716e141b32ba24e14e83fbd631b144261a239f896a232f017c1" Nov 21 11:11:26 crc kubenswrapper[4972]: I1121 11:11:26.700030 4972 scope.go:117] "RemoveContainer" containerID="957756c855e8a87fa6b6fe32a4042ce95378fdec89b516cce16c801c3cbf039a" Nov 21 11:11:26 crc kubenswrapper[4972]: I1121 11:11:26.737898 4972 scope.go:117] "RemoveContainer" containerID="7f34cab9a2afef988a944172a47899318a93ee59f9196f5e8ffc3044e747762f" Nov 21 11:11:56 crc kubenswrapper[4972]: I1121 11:11:56.178945 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:11:56 crc kubenswrapper[4972]: I1121 11:11:56.181006 4972 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:12:26 crc kubenswrapper[4972]: I1121 11:12:26.178663 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:12:26 crc kubenswrapper[4972]: I1121 11:12:26.179417 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:12:46 crc kubenswrapper[4972]: I1121 11:12:46.470199 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-z69mt"] Nov 21 11:12:46 crc kubenswrapper[4972]: I1121 11:12:46.471617 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-z69mt" Nov 21 11:12:46 crc kubenswrapper[4972]: I1121 11:12:46.482804 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-z69mt"] Nov 21 11:12:46 crc kubenswrapper[4972]: I1121 11:12:46.567701 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8b15-account-create-qgzk8"] Nov 21 11:12:46 crc kubenswrapper[4972]: I1121 11:12:46.568933 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8b15-account-create-qgzk8" Nov 21 11:12:46 crc kubenswrapper[4972]: I1121 11:12:46.571584 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 21 11:12:46 crc kubenswrapper[4972]: I1121 11:12:46.586223 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8b15-account-create-qgzk8"] Nov 21 11:12:46 crc kubenswrapper[4972]: I1121 11:12:46.599529 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed42ffd0-2593-4c1f-a442-4b0d0c607c93-operator-scripts\") pod \"barbican-db-create-z69mt\" (UID: \"ed42ffd0-2593-4c1f-a442-4b0d0c607c93\") " pod="openstack/barbican-db-create-z69mt" Nov 21 11:12:46 crc kubenswrapper[4972]: I1121 11:12:46.599593 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg2lm\" (UniqueName: \"kubernetes.io/projected/ed42ffd0-2593-4c1f-a442-4b0d0c607c93-kube-api-access-tg2lm\") pod \"barbican-db-create-z69mt\" (UID: \"ed42ffd0-2593-4c1f-a442-4b0d0c607c93\") " pod="openstack/barbican-db-create-z69mt" Nov 21 11:12:46 crc kubenswrapper[4972]: I1121 11:12:46.701691 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e03bc85-e0b6-45ad-86a0-27a49e0cfe17-operator-scripts\") pod \"barbican-8b15-account-create-qgzk8\" (UID: \"1e03bc85-e0b6-45ad-86a0-27a49e0cfe17\") " pod="openstack/barbican-8b15-account-create-qgzk8" Nov 21 11:12:46 crc kubenswrapper[4972]: I1121 11:12:46.701765 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed42ffd0-2593-4c1f-a442-4b0d0c607c93-operator-scripts\") pod \"barbican-db-create-z69mt\" (UID: \"ed42ffd0-2593-4c1f-a442-4b0d0c607c93\") " pod="openstack/barbican-db-create-z69mt" Nov 21 11:12:46 crc kubenswrapper[4972]: I1121 11:12:46.701792 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg2lm\" (UniqueName: \"kubernetes.io/projected/ed42ffd0-2593-4c1f-a442-4b0d0c607c93-kube-api-access-tg2lm\") pod \"barbican-db-create-z69mt\" (UID: \"ed42ffd0-2593-4c1f-a442-4b0d0c607c93\") " pod="openstack/barbican-db-create-z69mt" Nov 21 11:12:46 crc kubenswrapper[4972]: I1121 11:12:46.701866 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmhlg\" (UniqueName: \"kubernetes.io/projected/1e03bc85-e0b6-45ad-86a0-27a49e0cfe17-kube-api-access-bmhlg\") pod \"barbican-8b15-account-create-qgzk8\" (UID: \"1e03bc85-e0b6-45ad-86a0-27a49e0cfe17\") " pod="openstack/barbican-8b15-account-create-qgzk8" Nov 21 11:12:46 crc kubenswrapper[4972]: I1121 11:12:46.702820 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed42ffd0-2593-4c1f-a442-4b0d0c607c93-operator-scripts\") pod \"barbican-db-create-z69mt\" (UID: \"ed42ffd0-2593-4c1f-a442-4b0d0c607c93\") " pod="openstack/barbican-db-create-z69mt" Nov 21 11:12:46 crc kubenswrapper[4972]: I1121 11:12:46.729097 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg2lm\" (UniqueName: \"kubernetes.io/projected/ed42ffd0-2593-4c1f-a442-4b0d0c607c93-kube-api-access-tg2lm\") pod \"barbican-db-create-z69mt\" (UID: 
\"ed42ffd0-2593-4c1f-a442-4b0d0c607c93\") " pod="openstack/barbican-db-create-z69mt" Nov 21 11:12:46 crc kubenswrapper[4972]: I1121 11:12:46.794979 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-z69mt" Nov 21 11:12:46 crc kubenswrapper[4972]: I1121 11:12:46.803352 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e03bc85-e0b6-45ad-86a0-27a49e0cfe17-operator-scripts\") pod \"barbican-8b15-account-create-qgzk8\" (UID: \"1e03bc85-e0b6-45ad-86a0-27a49e0cfe17\") " pod="openstack/barbican-8b15-account-create-qgzk8" Nov 21 11:12:46 crc kubenswrapper[4972]: I1121 11:12:46.803459 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmhlg\" (UniqueName: \"kubernetes.io/projected/1e03bc85-e0b6-45ad-86a0-27a49e0cfe17-kube-api-access-bmhlg\") pod \"barbican-8b15-account-create-qgzk8\" (UID: \"1e03bc85-e0b6-45ad-86a0-27a49e0cfe17\") " pod="openstack/barbican-8b15-account-create-qgzk8" Nov 21 11:12:46 crc kubenswrapper[4972]: I1121 11:12:46.804432 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e03bc85-e0b6-45ad-86a0-27a49e0cfe17-operator-scripts\") pod \"barbican-8b15-account-create-qgzk8\" (UID: \"1e03bc85-e0b6-45ad-86a0-27a49e0cfe17\") " pod="openstack/barbican-8b15-account-create-qgzk8" Nov 21 11:12:46 crc kubenswrapper[4972]: I1121 11:12:46.828480 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmhlg\" (UniqueName: \"kubernetes.io/projected/1e03bc85-e0b6-45ad-86a0-27a49e0cfe17-kube-api-access-bmhlg\") pod \"barbican-8b15-account-create-qgzk8\" (UID: \"1e03bc85-e0b6-45ad-86a0-27a49e0cfe17\") " pod="openstack/barbican-8b15-account-create-qgzk8" Nov 21 11:12:46 crc kubenswrapper[4972]: I1121 11:12:46.884357 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8b15-account-create-qgzk8" Nov 21 11:12:47 crc kubenswrapper[4972]: I1121 11:12:47.246217 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-z69mt"] Nov 21 11:12:47 crc kubenswrapper[4972]: I1121 11:12:47.354337 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8b15-account-create-qgzk8"] Nov 21 11:12:47 crc kubenswrapper[4972]: W1121 11:12:47.366550 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e03bc85_e0b6_45ad_86a0_27a49e0cfe17.slice/crio-3b49e7bd940baee5d48f9933cfbe167fdb74f692f8f3c733f9947590ba436292 WatchSource:0}: Error finding container 3b49e7bd940baee5d48f9933cfbe167fdb74f692f8f3c733f9947590ba436292: Status 404 returned error can't find the container with id 3b49e7bd940baee5d48f9933cfbe167fdb74f692f8f3c733f9947590ba436292 Nov 21 11:12:48 crc kubenswrapper[4972]: I1121 11:12:48.044156 4972 generic.go:334] "Generic (PLEG): container finished" podID="ed42ffd0-2593-4c1f-a442-4b0d0c607c93" containerID="74e8a80c154ef35f68cd02969f75785f0d672ab4c284095630df145274551056" exitCode=0 Nov 21 11:12:48 crc kubenswrapper[4972]: I1121 11:12:48.044298 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z69mt" event={"ID":"ed42ffd0-2593-4c1f-a442-4b0d0c607c93","Type":"ContainerDied","Data":"74e8a80c154ef35f68cd02969f75785f0d672ab4c284095630df145274551056"} Nov 21 11:12:48 crc kubenswrapper[4972]: I1121 11:12:48.044389 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z69mt" event={"ID":"ed42ffd0-2593-4c1f-a442-4b0d0c607c93","Type":"ContainerStarted","Data":"1fc4eb43e26c72cc99bf44f24182899f6a5123566c843a964d2259ac891bc27e"} Nov 21 11:12:48 crc kubenswrapper[4972]: I1121 11:12:48.047978 4972 generic.go:334] "Generic (PLEG): container finished" podID="1e03bc85-e0b6-45ad-86a0-27a49e0cfe17" containerID="57c9ba4c55760949588a274b63a1546a259f82708556d8ea3a27fb7e03cc5373" exitCode=0 Nov 21 11:12:48 crc kubenswrapper[4972]: I1121 11:12:48.048048 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8b15-account-create-qgzk8" event={"ID":"1e03bc85-e0b6-45ad-86a0-27a49e0cfe17","Type":"ContainerDied","Data":"57c9ba4c55760949588a274b63a1546a259f82708556d8ea3a27fb7e03cc5373"} Nov 21 11:12:48 crc kubenswrapper[4972]: I1121 11:12:48.048089 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8b15-account-create-qgzk8" event={"ID":"1e03bc85-e0b6-45ad-86a0-27a49e0cfe17","Type":"ContainerStarted","Data":"3b49e7bd940baee5d48f9933cfbe167fdb74f692f8f3c733f9947590ba436292"} Nov 21 11:12:49 crc kubenswrapper[4972]: I1121 11:12:49.489395 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8b15-account-create-qgzk8" Nov 21 11:12:49 crc kubenswrapper[4972]: I1121 11:12:49.496455 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-z69mt" Nov 21 11:12:49 crc kubenswrapper[4972]: I1121 11:12:49.663931 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmhlg\" (UniqueName: \"kubernetes.io/projected/1e03bc85-e0b6-45ad-86a0-27a49e0cfe17-kube-api-access-bmhlg\") pod \"1e03bc85-e0b6-45ad-86a0-27a49e0cfe17\" (UID: \"1e03bc85-e0b6-45ad-86a0-27a49e0cfe17\") " Nov 21 11:12:49 crc kubenswrapper[4972]: I1121 11:12:49.664049 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e03bc85-e0b6-45ad-86a0-27a49e0cfe17-operator-scripts\") pod \"1e03bc85-e0b6-45ad-86a0-27a49e0cfe17\" (UID: \"1e03bc85-e0b6-45ad-86a0-27a49e0cfe17\") " Nov 21 11:12:49 crc kubenswrapper[4972]: I1121 11:12:49.664169 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg2lm\" (UniqueName: \"kubernetes.io/projected/ed42ffd0-2593-4c1f-a442-4b0d0c607c93-kube-api-access-tg2lm\") pod \"ed42ffd0-2593-4c1f-a442-4b0d0c607c93\" (UID: \"ed42ffd0-2593-4c1f-a442-4b0d0c607c93\") " Nov 21 11:12:49 crc kubenswrapper[4972]: I1121 11:12:49.664257 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed42ffd0-2593-4c1f-a442-4b0d0c607c93-operator-scripts\") pod \"ed42ffd0-2593-4c1f-a442-4b0d0c607c93\" (UID: \"ed42ffd0-2593-4c1f-a442-4b0d0c607c93\") " Nov 21 11:12:49 crc kubenswrapper[4972]: I1121 11:12:49.665017 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed42ffd0-2593-4c1f-a442-4b0d0c607c93-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed42ffd0-2593-4c1f-a442-4b0d0c607c93" (UID: "ed42ffd0-2593-4c1f-a442-4b0d0c607c93"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:12:49 crc kubenswrapper[4972]: I1121 11:12:49.665048 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e03bc85-e0b6-45ad-86a0-27a49e0cfe17-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1e03bc85-e0b6-45ad-86a0-27a49e0cfe17" (UID: "1e03bc85-e0b6-45ad-86a0-27a49e0cfe17"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:12:49 crc kubenswrapper[4972]: I1121 11:12:49.668730 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed42ffd0-2593-4c1f-a442-4b0d0c607c93-kube-api-access-tg2lm" (OuterVolumeSpecName: "kube-api-access-tg2lm") pod "ed42ffd0-2593-4c1f-a442-4b0d0c607c93" (UID: "ed42ffd0-2593-4c1f-a442-4b0d0c607c93"). InnerVolumeSpecName "kube-api-access-tg2lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:12:49 crc kubenswrapper[4972]: I1121 11:12:49.669804 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e03bc85-e0b6-45ad-86a0-27a49e0cfe17-kube-api-access-bmhlg" (OuterVolumeSpecName: "kube-api-access-bmhlg") pod "1e03bc85-e0b6-45ad-86a0-27a49e0cfe17" (UID: "1e03bc85-e0b6-45ad-86a0-27a49e0cfe17"). InnerVolumeSpecName "kube-api-access-bmhlg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:12:49 crc kubenswrapper[4972]: I1121 11:12:49.766873 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg2lm\" (UniqueName: \"kubernetes.io/projected/ed42ffd0-2593-4c1f-a442-4b0d0c607c93-kube-api-access-tg2lm\") on node \"crc\" DevicePath \"\"" Nov 21 11:12:49 crc kubenswrapper[4972]: I1121 11:12:49.767154 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed42ffd0-2593-4c1f-a442-4b0d0c607c93-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:12:49 crc kubenswrapper[4972]: I1121 11:12:49.767255 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmhlg\" (UniqueName: \"kubernetes.io/projected/1e03bc85-e0b6-45ad-86a0-27a49e0cfe17-kube-api-access-bmhlg\") on node \"crc\" DevicePath \"\"" Nov 21 11:12:49 crc kubenswrapper[4972]: I1121 11:12:49.767371 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e03bc85-e0b6-45ad-86a0-27a49e0cfe17-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:12:50 crc kubenswrapper[4972]: I1121 11:12:50.070888 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8b15-account-create-qgzk8" Nov 21 11:12:50 crc kubenswrapper[4972]: I1121 11:12:50.071243 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8b15-account-create-qgzk8" event={"ID":"1e03bc85-e0b6-45ad-86a0-27a49e0cfe17","Type":"ContainerDied","Data":"3b49e7bd940baee5d48f9933cfbe167fdb74f692f8f3c733f9947590ba436292"} Nov 21 11:12:50 crc kubenswrapper[4972]: I1121 11:12:50.071294 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b49e7bd940baee5d48f9933cfbe167fdb74f692f8f3c733f9947590ba436292" Nov 21 11:12:50 crc kubenswrapper[4972]: I1121 11:12:50.074882 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z69mt" event={"ID":"ed42ffd0-2593-4c1f-a442-4b0d0c607c93","Type":"ContainerDied","Data":"1fc4eb43e26c72cc99bf44f24182899f6a5123566c843a964d2259ac891bc27e"} Nov 21 11:12:50 crc kubenswrapper[4972]: I1121 11:12:50.074928 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-z69mt" Nov 21 11:12:50 crc kubenswrapper[4972]: I1121 11:12:50.074937 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc4eb43e26c72cc99bf44f24182899f6a5123566c843a964d2259ac891bc27e" Nov 21 11:12:51 crc kubenswrapper[4972]: I1121 11:12:51.843061 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-xgq7v"] Nov 21 11:12:51 crc kubenswrapper[4972]: E1121 11:12:51.843563 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e03bc85-e0b6-45ad-86a0-27a49e0cfe17" containerName="mariadb-account-create" Nov 21 11:12:51 crc kubenswrapper[4972]: I1121 11:12:51.843584 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e03bc85-e0b6-45ad-86a0-27a49e0cfe17" containerName="mariadb-account-create" Nov 21 11:12:51 crc kubenswrapper[4972]: E1121 11:12:51.843623 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed42ffd0-2593-4c1f-a442-4b0d0c607c93" containerName="mariadb-database-create" Nov 21 11:12:51 crc kubenswrapper[4972]: I1121 11:12:51.843635 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed42ffd0-2593-4c1f-a442-4b0d0c607c93" containerName="mariadb-database-create" Nov 21 11:12:51 crc kubenswrapper[4972]: I1121 11:12:51.843947 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e03bc85-e0b6-45ad-86a0-27a49e0cfe17" containerName="mariadb-account-create" Nov 21 11:12:51 crc kubenswrapper[4972]: I1121 11:12:51.843973 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed42ffd0-2593-4c1f-a442-4b0d0c607c93" containerName="mariadb-database-create" Nov 21 11:12:51 crc kubenswrapper[4972]: I1121 11:12:51.844779 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xgq7v" Nov 21 11:12:51 crc kubenswrapper[4972]: I1121 11:12:51.852713 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xpgmr" Nov 21 11:12:51 crc kubenswrapper[4972]: I1121 11:12:51.853087 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 21 11:12:51 crc kubenswrapper[4972]: I1121 11:12:51.858841 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xgq7v"] Nov 21 11:12:52 crc kubenswrapper[4972]: I1121 11:12:52.008652 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/192af3fd-3fa6-43b6-ac79-7e613ff1845d-db-sync-config-data\") pod \"barbican-db-sync-xgq7v\" (UID: \"192af3fd-3fa6-43b6-ac79-7e613ff1845d\") " pod="openstack/barbican-db-sync-xgq7v" Nov 21 11:12:52 crc kubenswrapper[4972]: I1121 11:12:52.008798 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcsr5\" (UniqueName: \"kubernetes.io/projected/192af3fd-3fa6-43b6-ac79-7e613ff1845d-kube-api-access-pcsr5\") pod \"barbican-db-sync-xgq7v\" (UID: \"192af3fd-3fa6-43b6-ac79-7e613ff1845d\") " pod="openstack/barbican-db-sync-xgq7v" Nov 21 11:12:52 crc kubenswrapper[4972]: I1121 11:12:52.008879 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192af3fd-3fa6-43b6-ac79-7e613ff1845d-combined-ca-bundle\") pod \"barbican-db-sync-xgq7v\" (UID: \"192af3fd-3fa6-43b6-ac79-7e613ff1845d\") " pod="openstack/barbican-db-sync-xgq7v" Nov 21 11:12:52 crc kubenswrapper[4972]: I1121 11:12:52.110258 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/192af3fd-3fa6-43b6-ac79-7e613ff1845d-db-sync-config-data\") pod \"barbican-db-sync-xgq7v\" (UID: \"192af3fd-3fa6-43b6-ac79-7e613ff1845d\") " pod="openstack/barbican-db-sync-xgq7v" Nov 21 11:12:52 crc kubenswrapper[4972]: I1121 11:12:52.110335 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcsr5\" (UniqueName: \"kubernetes.io/projected/192af3fd-3fa6-43b6-ac79-7e613ff1845d-kube-api-access-pcsr5\") pod \"barbican-db-sync-xgq7v\" (UID: \"192af3fd-3fa6-43b6-ac79-7e613ff1845d\") " pod="openstack/barbican-db-sync-xgq7v" Nov 21 11:12:52 crc kubenswrapper[4972]: I1121 11:12:52.110426 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192af3fd-3fa6-43b6-ac79-7e613ff1845d-combined-ca-bundle\") pod \"barbican-db-sync-xgq7v\" (UID: \"192af3fd-3fa6-43b6-ac79-7e613ff1845d\") " pod="openstack/barbican-db-sync-xgq7v" Nov 21 11:12:52 crc kubenswrapper[4972]: I1121 11:12:52.115854 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192af3fd-3fa6-43b6-ac79-7e613ff1845d-combined-ca-bundle\") pod \"barbican-db-sync-xgq7v\" (UID: \"192af3fd-3fa6-43b6-ac79-7e613ff1845d\") " pod="openstack/barbican-db-sync-xgq7v" Nov 21 11:12:52 crc kubenswrapper[4972]: I1121 11:12:52.115972 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/192af3fd-3fa6-43b6-ac79-7e613ff1845d-db-sync-config-data\") pod \"barbican-db-sync-xgq7v\" (UID: \"192af3fd-3fa6-43b6-ac79-7e613ff1845d\") " pod="openstack/barbican-db-sync-xgq7v" Nov 21 11:12:52 crc kubenswrapper[4972]: I1121 11:12:52.154113 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcsr5\" (UniqueName: \"kubernetes.io/projected/192af3fd-3fa6-43b6-ac79-7e613ff1845d-kube-api-access-pcsr5\") pod \"barbican-db-sync-xgq7v\" (UID: \"192af3fd-3fa6-43b6-ac79-7e613ff1845d\") " pod="openstack/barbican-db-sync-xgq7v" Nov 21 11:12:52 crc kubenswrapper[4972]: I1121 11:12:52.184215 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xgq7v" Nov 21 11:12:52 crc kubenswrapper[4972]: I1121 11:12:52.615947 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xgq7v"] Nov 21 11:12:53 crc kubenswrapper[4972]: I1121 11:12:53.102462 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xgq7v" event={"ID":"192af3fd-3fa6-43b6-ac79-7e613ff1845d","Type":"ContainerStarted","Data":"c8dbbb9edde75dc71e5e6670f5226e67a2462689d68cade583e47f04b3b0f908"} Nov 21 11:12:53 crc kubenswrapper[4972]: I1121 11:12:53.102507 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xgq7v" event={"ID":"192af3fd-3fa6-43b6-ac79-7e613ff1845d","Type":"ContainerStarted","Data":"8ba0f567da11d25ef2d3ef4b8a5eb8dacd5daf21abe7910bc04ed3c31286566b"} Nov 21 11:12:53 crc kubenswrapper[4972]: I1121 11:12:53.123442 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-xgq7v" podStartSLOduration=2.12341593 podStartE2EDuration="2.12341593s" podCreationTimestamp="2025-11-21 11:12:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:12:53.119496916 +0000 UTC m=+5518.228639444" watchObservedRunningTime="2025-11-21 11:12:53.12341593 +0000 UTC m=+5518.232558438" Nov 21 11:12:54 crc kubenswrapper[4972]: I1121 11:12:54.113069 4972 generic.go:334] "Generic (PLEG): container finished" podID="192af3fd-3fa6-43b6-ac79-7e613ff1845d" containerID="c8dbbb9edde75dc71e5e6670f5226e67a2462689d68cade583e47f04b3b0f908" exitCode=0 Nov 21 11:12:54 crc kubenswrapper[4972]: I1121 11:12:54.113126 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xgq7v" event={"ID":"192af3fd-3fa6-43b6-ac79-7e613ff1845d","Type":"ContainerDied","Data":"c8dbbb9edde75dc71e5e6670f5226e67a2462689d68cade583e47f04b3b0f908"} Nov 21 11:12:55 crc kubenswrapper[4972]: I1121 11:12:55.476540 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xgq7v" Nov 21 11:12:55 crc kubenswrapper[4972]: I1121 11:12:55.570737 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/192af3fd-3fa6-43b6-ac79-7e613ff1845d-db-sync-config-data\") pod \"192af3fd-3fa6-43b6-ac79-7e613ff1845d\" (UID: \"192af3fd-3fa6-43b6-ac79-7e613ff1845d\") " Nov 21 11:12:55 crc kubenswrapper[4972]: I1121 11:12:55.571009 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192af3fd-3fa6-43b6-ac79-7e613ff1845d-combined-ca-bundle\") pod \"192af3fd-3fa6-43b6-ac79-7e613ff1845d\" (UID: \"192af3fd-3fa6-43b6-ac79-7e613ff1845d\") " Nov 21 11:12:55 crc kubenswrapper[4972]: I1121 11:12:55.571067 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcsr5\" (UniqueName: \"kubernetes.io/projected/192af3fd-3fa6-43b6-ac79-7e613ff1845d-kube-api-access-pcsr5\") pod \"192af3fd-3fa6-43b6-ac79-7e613ff1845d\" (UID: \"192af3fd-3fa6-43b6-ac79-7e613ff1845d\") " Nov 21 11:12:55 crc kubenswrapper[4972]: I1121 11:12:55.578109 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/192af3fd-3fa6-43b6-ac79-7e613ff1845d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "192af3fd-3fa6-43b6-ac79-7e613ff1845d" (UID: "192af3fd-3fa6-43b6-ac79-7e613ff1845d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:12:55 crc kubenswrapper[4972]: I1121 11:12:55.578494 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192af3fd-3fa6-43b6-ac79-7e613ff1845d-kube-api-access-pcsr5" (OuterVolumeSpecName: "kube-api-access-pcsr5") pod "192af3fd-3fa6-43b6-ac79-7e613ff1845d" (UID: "192af3fd-3fa6-43b6-ac79-7e613ff1845d"). InnerVolumeSpecName "kube-api-access-pcsr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:12:55 crc kubenswrapper[4972]: I1121 11:12:55.600788 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/192af3fd-3fa6-43b6-ac79-7e613ff1845d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "192af3fd-3fa6-43b6-ac79-7e613ff1845d" (UID: "192af3fd-3fa6-43b6-ac79-7e613ff1845d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:12:55 crc kubenswrapper[4972]: I1121 11:12:55.673311 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192af3fd-3fa6-43b6-ac79-7e613ff1845d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:12:55 crc kubenswrapper[4972]: I1121 11:12:55.673337 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcsr5\" (UniqueName: \"kubernetes.io/projected/192af3fd-3fa6-43b6-ac79-7e613ff1845d-kube-api-access-pcsr5\") on node \"crc\" DevicePath \"\"" Nov 21 11:12:55 crc kubenswrapper[4972]: I1121 11:12:55.673347 4972 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/192af3fd-3fa6-43b6-ac79-7e613ff1845d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.143812 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xgq7v" event={"ID":"192af3fd-3fa6-43b6-ac79-7e613ff1845d","Type":"ContainerDied","Data":"8ba0f567da11d25ef2d3ef4b8a5eb8dacd5daf21abe7910bc04ed3c31286566b"} Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.143875 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ba0f567da11d25ef2d3ef4b8a5eb8dacd5daf21abe7910bc04ed3c31286566b" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.143919 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xgq7v" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.179383 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.179720 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.179970 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.181031 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"24e4e8c91bec69fac6579b1048275d2e2e1a69f272656a33d0af882dd887ca1f"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.181286 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://24e4e8c91bec69fac6579b1048275d2e2e1a69f272656a33d0af882dd887ca1f" gracePeriod=600 Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.470432 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5ddd898675-msv98"] Nov 21 11:12:56 
crc kubenswrapper[4972]: E1121 11:12:56.470803 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192af3fd-3fa6-43b6-ac79-7e613ff1845d" containerName="barbican-db-sync" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.470824 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="192af3fd-3fa6-43b6-ac79-7e613ff1845d" containerName="barbican-db-sync" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.471013 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="192af3fd-3fa6-43b6-ac79-7e613ff1845d" containerName="barbican-db-sync" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.471816 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5ddd898675-msv98" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.475329 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.475355 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xpgmr" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.475918 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.492600 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6946b977f8-97tkv"] Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.494064 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6946b977f8-97tkv" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.501348 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.507908 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5ddd898675-msv98"] Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.539038 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6946b977f8-97tkv"] Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.570147 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b7fdfc899-ncvwx"] Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.572400 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.599722 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfrr2\" (UniqueName: \"kubernetes.io/projected/956664ed-d3c8-467a-ba7e-ced0e72d00a4-kube-api-access-kfrr2\") pod \"barbican-worker-5ddd898675-msv98\" (UID: \"956664ed-d3c8-467a-ba7e-ced0e72d00a4\") " pod="openstack/barbican-worker-5ddd898675-msv98" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.599788 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5-config-data\") pod \"barbican-keystone-listener-6946b977f8-97tkv\" (UID: \"ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5\") " pod="openstack/barbican-keystone-listener-6946b977f8-97tkv" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.599866 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5-combined-ca-bundle\") pod \"barbican-keystone-listener-6946b977f8-97tkv\" (UID: \"ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5\") " pod="openstack/barbican-keystone-listener-6946b977f8-97tkv" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.599940 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbm6m\" (UniqueName: \"kubernetes.io/projected/ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5-kube-api-access-bbm6m\") pod \"barbican-keystone-listener-6946b977f8-97tkv\" (UID: \"ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5\") " pod="openstack/barbican-keystone-listener-6946b977f8-97tkv" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.599994 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956664ed-d3c8-467a-ba7e-ced0e72d00a4-combined-ca-bundle\") pod \"barbican-worker-5ddd898675-msv98\" (UID: \"956664ed-d3c8-467a-ba7e-ced0e72d00a4\") " pod="openstack/barbican-worker-5ddd898675-msv98" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.600034 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/956664ed-d3c8-467a-ba7e-ced0e72d00a4-logs\") pod \"barbican-worker-5ddd898675-msv98\" (UID: \"956664ed-d3c8-467a-ba7e-ced0e72d00a4\") " pod="openstack/barbican-worker-5ddd898675-msv98" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.600096 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5-logs\") pod \"barbican-keystone-listener-6946b977f8-97tkv\" (UID: \"ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5\") " pod="openstack/barbican-keystone-listener-6946b977f8-97tkv" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.600133 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/956664ed-d3c8-467a-ba7e-ced0e72d00a4-config-data\") pod \"barbican-worker-5ddd898675-msv98\" (UID: \"956664ed-d3c8-467a-ba7e-ced0e72d00a4\") " pod="openstack/barbican-worker-5ddd898675-msv98" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.600191 4972 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5-config-data-custom\") pod \"barbican-keystone-listener-6946b977f8-97tkv\" (UID: \"ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5\") " pod="openstack/barbican-keystone-listener-6946b977f8-97tkv" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.600211 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/956664ed-d3c8-467a-ba7e-ced0e72d00a4-config-data-custom\") pod \"barbican-worker-5ddd898675-msv98\" (UID: \"956664ed-d3c8-467a-ba7e-ced0e72d00a4\") " pod="openstack/barbican-worker-5ddd898675-msv98" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.601669 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7fdfc899-ncvwx"] Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.629959 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-747d6fb59b-5vqj5"] Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.646221 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-747d6fb59b-5vqj5" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.651234 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.653873 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-747d6fb59b-5vqj5"] Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.701621 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/956664ed-d3c8-467a-ba7e-ced0e72d00a4-config-data-custom\") pod \"barbican-worker-5ddd898675-msv98\" (UID: \"956664ed-d3c8-467a-ba7e-ced0e72d00a4\") " pod="openstack/barbican-worker-5ddd898675-msv98" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.701678 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5-config-data-custom\") pod \"barbican-keystone-listener-6946b977f8-97tkv\" (UID: \"ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5\") " pod="openstack/barbican-keystone-listener-6946b977f8-97tkv" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.701713 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-ovsdbserver-nb\") pod \"dnsmasq-dns-5b7fdfc899-ncvwx\" (UID: \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\") " pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.701755 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfrr2\" (UniqueName: \"kubernetes.io/projected/956664ed-d3c8-467a-ba7e-ced0e72d00a4-kube-api-access-kfrr2\") pod \"barbican-worker-5ddd898675-msv98\" (UID: \"956664ed-d3c8-467a-ba7e-ced0e72d00a4\") " pod="openstack/barbican-worker-5ddd898675-msv98" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.701784 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5-config-data\") pod 
\"barbican-keystone-listener-6946b977f8-97tkv\" (UID: \"ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5\") " pod="openstack/barbican-keystone-listener-6946b977f8-97tkv" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.701823 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-dns-svc\") pod \"dnsmasq-dns-5b7fdfc899-ncvwx\" (UID: \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\") " pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.701879 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-config\") pod \"dnsmasq-dns-5b7fdfc899-ncvwx\" (UID: \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\") " pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.703260 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5-combined-ca-bundle\") pod \"barbican-keystone-listener-6946b977f8-97tkv\" (UID: \"ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5\") " pod="openstack/barbican-keystone-listener-6946b977f8-97tkv" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.708209 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbm6m\" (UniqueName: \"kubernetes.io/projected/ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5-kube-api-access-bbm6m\") pod \"barbican-keystone-listener-6946b977f8-97tkv\" (UID: \"ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5\") " pod="openstack/barbican-keystone-listener-6946b977f8-97tkv" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.708300 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956664ed-d3c8-467a-ba7e-ced0e72d00a4-combined-ca-bundle\") pod \"barbican-worker-5ddd898675-msv98\" (UID: \"956664ed-d3c8-467a-ba7e-ced0e72d00a4\") " pod="openstack/barbican-worker-5ddd898675-msv98" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.708355 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbf97\" (UniqueName: \"kubernetes.io/projected/20682c16-9a92-4ca0-bb7e-ad0b023153f0-kube-api-access-jbf97\") pod \"dnsmasq-dns-5b7fdfc899-ncvwx\" (UID: \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\") " pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.708380 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-ovsdbserver-sb\") pod \"dnsmasq-dns-5b7fdfc899-ncvwx\" (UID: \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\") " pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.708399 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/956664ed-d3c8-467a-ba7e-ced0e72d00a4-logs\") pod \"barbican-worker-5ddd898675-msv98\" (UID: \"956664ed-d3c8-467a-ba7e-ced0e72d00a4\") " pod="openstack/barbican-worker-5ddd898675-msv98" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.708473 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5-logs\") pod \"barbican-keystone-listener-6946b977f8-97tkv\" (UID: \"ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5\") " pod="openstack/barbican-keystone-listener-6946b977f8-97tkv" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.708505 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/956664ed-d3c8-467a-ba7e-ced0e72d00a4-config-data\") pod \"barbican-worker-5ddd898675-msv98\" (UID: \"956664ed-d3c8-467a-ba7e-ced0e72d00a4\") " pod="openstack/barbican-worker-5ddd898675-msv98" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.709183 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/956664ed-d3c8-467a-ba7e-ced0e72d00a4-logs\") pod \"barbican-worker-5ddd898675-msv98\" (UID: \"956664ed-d3c8-467a-ba7e-ced0e72d00a4\") " pod="openstack/barbican-worker-5ddd898675-msv98" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.710022 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/956664ed-d3c8-467a-ba7e-ced0e72d00a4-config-data-custom\") pod \"barbican-worker-5ddd898675-msv98\" (UID: \"956664ed-d3c8-467a-ba7e-ced0e72d00a4\") " pod="openstack/barbican-worker-5ddd898675-msv98" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.710712 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5-logs\") pod \"barbican-keystone-listener-6946b977f8-97tkv\" (UID: \"ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5\") " pod="openstack/barbican-keystone-listener-6946b977f8-97tkv" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.711189 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5-combined-ca-bundle\") pod \"barbican-keystone-listener-6946b977f8-97tkv\" (UID: \"ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5\") " pod="openstack/barbican-keystone-listener-6946b977f8-97tkv" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.711608 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5-config-data-custom\") pod \"barbican-keystone-listener-6946b977f8-97tkv\" (UID: \"ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5\") " pod="openstack/barbican-keystone-listener-6946b977f8-97tkv" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.712170 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5-config-data\") pod \"barbican-keystone-listener-6946b977f8-97tkv\" (UID: \"ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5\") " pod="openstack/barbican-keystone-listener-6946b977f8-97tkv" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.719399 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfrr2\" (UniqueName: \"kubernetes.io/projected/956664ed-d3c8-467a-ba7e-ced0e72d00a4-kube-api-access-kfrr2\") pod \"barbican-worker-5ddd898675-msv98\" (UID: \"956664ed-d3c8-467a-ba7e-ced0e72d00a4\") " pod="openstack/barbican-worker-5ddd898675-msv98" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.728565 4972 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/956664ed-d3c8-467a-ba7e-ced0e72d00a4-config-data\") pod \"barbican-worker-5ddd898675-msv98\" (UID: \"956664ed-d3c8-467a-ba7e-ced0e72d00a4\") " pod="openstack/barbican-worker-5ddd898675-msv98" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.730743 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbm6m\" (UniqueName: \"kubernetes.io/projected/ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5-kube-api-access-bbm6m\") pod \"barbican-keystone-listener-6946b977f8-97tkv\" (UID: \"ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5\") " pod="openstack/barbican-keystone-listener-6946b977f8-97tkv" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.743569 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956664ed-d3c8-467a-ba7e-ced0e72d00a4-combined-ca-bundle\") pod \"barbican-worker-5ddd898675-msv98\" (UID: \"956664ed-d3c8-467a-ba7e-ced0e72d00a4\") " pod="openstack/barbican-worker-5ddd898675-msv98" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.811754 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-ovsdbserver-nb\") pod \"dnsmasq-dns-5b7fdfc899-ncvwx\" (UID: \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\") " pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.811803 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bdf9f42-bfbe-474f-86d1-3edbe94c09ac-config-data\") pod \"barbican-api-747d6fb59b-5vqj5\" (UID: \"5bdf9f42-bfbe-474f-86d1-3edbe94c09ac\") " pod="openstack/barbican-api-747d6fb59b-5vqj5" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.811826 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bdf9f42-bfbe-474f-86d1-3edbe94c09ac-logs\") pod \"barbican-api-747d6fb59b-5vqj5\" (UID: \"5bdf9f42-bfbe-474f-86d1-3edbe94c09ac\") " pod="openstack/barbican-api-747d6fb59b-5vqj5" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.811902 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffvn5\" (UniqueName: \"kubernetes.io/projected/5bdf9f42-bfbe-474f-86d1-3edbe94c09ac-kube-api-access-ffvn5\") pod \"barbican-api-747d6fb59b-5vqj5\" (UID: \"5bdf9f42-bfbe-474f-86d1-3edbe94c09ac\") " pod="openstack/barbican-api-747d6fb59b-5vqj5" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.811925 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bdf9f42-bfbe-474f-86d1-3edbe94c09ac-combined-ca-bundle\") pod \"barbican-api-747d6fb59b-5vqj5\" (UID: \"5bdf9f42-bfbe-474f-86d1-3edbe94c09ac\") " pod="openstack/barbican-api-747d6fb59b-5vqj5" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.811954 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-dns-svc\") pod \"dnsmasq-dns-5b7fdfc899-ncvwx\" (UID: \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\") " pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.811973 4972 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-config\") pod \"dnsmasq-dns-5b7fdfc899-ncvwx\" (UID: \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\") " pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.812030 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbf97\" (UniqueName: \"kubernetes.io/projected/20682c16-9a92-4ca0-bb7e-ad0b023153f0-kube-api-access-jbf97\") pod \"dnsmasq-dns-5b7fdfc899-ncvwx\" (UID: \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\") " pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.812047 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-ovsdbserver-sb\") pod \"dnsmasq-dns-5b7fdfc899-ncvwx\" (UID: \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\") " pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.812083 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bdf9f42-bfbe-474f-86d1-3edbe94c09ac-config-data-custom\") pod \"barbican-api-747d6fb59b-5vqj5\" (UID: \"5bdf9f42-bfbe-474f-86d1-3edbe94c09ac\") " pod="openstack/barbican-api-747d6fb59b-5vqj5" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.813046 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-ovsdbserver-nb\") pod \"dnsmasq-dns-5b7fdfc899-ncvwx\" (UID: \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\") " pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.815402 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-config\") pod \"dnsmasq-dns-5b7fdfc899-ncvwx\" (UID: \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\") " pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.815553 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-dns-svc\") pod \"dnsmasq-dns-5b7fdfc899-ncvwx\" (UID: \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\") " pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.815657 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-ovsdbserver-sb\") pod \"dnsmasq-dns-5b7fdfc899-ncvwx\" (UID: \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\") " pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.826701 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5ddd898675-msv98" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.833044 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbf97\" (UniqueName: \"kubernetes.io/projected/20682c16-9a92-4ca0-bb7e-ad0b023153f0-kube-api-access-jbf97\") pod \"dnsmasq-dns-5b7fdfc899-ncvwx\" (UID: \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\") " pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.836818 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6946b977f8-97tkv" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.913990 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bdf9f42-bfbe-474f-86d1-3edbe94c09ac-config-data-custom\") pod \"barbican-api-747d6fb59b-5vqj5\" (UID: \"5bdf9f42-bfbe-474f-86d1-3edbe94c09ac\") " pod="openstack/barbican-api-747d6fb59b-5vqj5" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.914057 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bdf9f42-bfbe-474f-86d1-3edbe94c09ac-config-data\") pod \"barbican-api-747d6fb59b-5vqj5\" (UID: \"5bdf9f42-bfbe-474f-86d1-3edbe94c09ac\") " pod="openstack/barbican-api-747d6fb59b-5vqj5" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.914081 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bdf9f42-bfbe-474f-86d1-3edbe94c09ac-logs\") pod \"barbican-api-747d6fb59b-5vqj5\" (UID: \"5bdf9f42-bfbe-474f-86d1-3edbe94c09ac\") " pod="openstack/barbican-api-747d6fb59b-5vqj5" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.914099 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffvn5\" (UniqueName: \"kubernetes.io/projected/5bdf9f42-bfbe-474f-86d1-3edbe94c09ac-kube-api-access-ffvn5\") pod \"barbican-api-747d6fb59b-5vqj5\" (UID: \"5bdf9f42-bfbe-474f-86d1-3edbe94c09ac\") " pod="openstack/barbican-api-747d6fb59b-5vqj5" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.914115 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bdf9f42-bfbe-474f-86d1-3edbe94c09ac-combined-ca-bundle\") pod \"barbican-api-747d6fb59b-5vqj5\" (UID: \"5bdf9f42-bfbe-474f-86d1-3edbe94c09ac\") " pod="openstack/barbican-api-747d6fb59b-5vqj5" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.917664 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bdf9f42-bfbe-474f-86d1-3edbe94c09ac-logs\") pod \"barbican-api-747d6fb59b-5vqj5\" (UID: \"5bdf9f42-bfbe-474f-86d1-3edbe94c09ac\") " pod="openstack/barbican-api-747d6fb59b-5vqj5" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.923221 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.930500 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bdf9f42-bfbe-474f-86d1-3edbe94c09ac-combined-ca-bundle\") pod \"barbican-api-747d6fb59b-5vqj5\" (UID: \"5bdf9f42-bfbe-474f-86d1-3edbe94c09ac\") " pod="openstack/barbican-api-747d6fb59b-5vqj5" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.932115 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bdf9f42-bfbe-474f-86d1-3edbe94c09ac-config-data\") pod \"barbican-api-747d6fb59b-5vqj5\" (UID: \"5bdf9f42-bfbe-474f-86d1-3edbe94c09ac\") " pod="openstack/barbican-api-747d6fb59b-5vqj5" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.932895 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bdf9f42-bfbe-474f-86d1-3edbe94c09ac-config-data-custom\") pod \"barbican-api-747d6fb59b-5vqj5\" (UID: \"5bdf9f42-bfbe-474f-86d1-3edbe94c09ac\") " pod="openstack/barbican-api-747d6fb59b-5vqj5" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.937067 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffvn5\" (UniqueName: \"kubernetes.io/projected/5bdf9f42-bfbe-474f-86d1-3edbe94c09ac-kube-api-access-ffvn5\") pod \"barbican-api-747d6fb59b-5vqj5\" (UID: \"5bdf9f42-bfbe-474f-86d1-3edbe94c09ac\") " pod="openstack/barbican-api-747d6fb59b-5vqj5" Nov 21 11:12:56 crc kubenswrapper[4972]: I1121 11:12:56.971289 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-747d6fb59b-5vqj5" Nov 21 11:12:57 crc kubenswrapper[4972]: I1121 11:12:57.157203 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="24e4e8c91bec69fac6579b1048275d2e2e1a69f272656a33d0af882dd887ca1f" exitCode=0 Nov 21 11:12:57 crc kubenswrapper[4972]: I1121 11:12:57.157272 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"24e4e8c91bec69fac6579b1048275d2e2e1a69f272656a33d0af882dd887ca1f"} Nov 21 11:12:57 crc kubenswrapper[4972]: I1121 11:12:57.158010 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1"} Nov 21 11:12:57 crc kubenswrapper[4972]: I1121 11:12:57.158061 4972 scope.go:117] "RemoveContainer" containerID="7d25833ed4b170be4abbf2478916210ff56e07cb745df7c708e061c935b49916" Nov 21 11:12:57 crc kubenswrapper[4972]: I1121 11:12:57.393892 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6946b977f8-97tkv"] Nov 21 11:12:57 crc kubenswrapper[4972]: W1121 11:12:57.396087 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad0adfda_1be9_4fdd_b1c5_6d33c0afcea5.slice/crio-e9421e76c435b3bcf5fe504f62a9b190a5380719680cfeaa8c25b25682d60ec0 WatchSource:0}: Error finding container e9421e76c435b3bcf5fe504f62a9b190a5380719680cfeaa8c25b25682d60ec0: Status 404 returned error can't find the 
container with id e9421e76c435b3bcf5fe504f62a9b190a5380719680cfeaa8c25b25682d60ec0 Nov 21 11:12:57 crc kubenswrapper[4972]: I1121 11:12:57.508383 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7fdfc899-ncvwx"] Nov 21 11:12:57 crc kubenswrapper[4972]: W1121 11:12:57.537727 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bdf9f42_bfbe_474f_86d1_3edbe94c09ac.slice/crio-9c616004747d5ad2c4d08c4a7c51cb86eccc1678a678576339e78b8b74cc8f06 WatchSource:0}: Error finding container 9c616004747d5ad2c4d08c4a7c51cb86eccc1678a678576339e78b8b74cc8f06: Status 404 returned error can't find the container with id 9c616004747d5ad2c4d08c4a7c51cb86eccc1678a678576339e78b8b74cc8f06 Nov 21 11:12:57 crc kubenswrapper[4972]: I1121 11:12:57.552915 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-747d6fb59b-5vqj5"] Nov 21 11:12:57 crc kubenswrapper[4972]: I1121 11:12:57.562003 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5ddd898675-msv98"] Nov 21 11:12:57 crc kubenswrapper[4972]: W1121 11:12:57.569691 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod956664ed_d3c8_467a_ba7e_ced0e72d00a4.slice/crio-6ec15ccae15c91aa70010a52556cc61467096ed57b6232cc36de0a8a105d1893 WatchSource:0}: Error finding container 6ec15ccae15c91aa70010a52556cc61467096ed57b6232cc36de0a8a105d1893: Status 404 returned error can't find the container with id 6ec15ccae15c91aa70010a52556cc61467096ed57b6232cc36de0a8a105d1893 Nov 21 11:12:58 crc kubenswrapper[4972]: I1121 11:12:58.167088 4972 generic.go:334] "Generic (PLEG): container finished" podID="20682c16-9a92-4ca0-bb7e-ad0b023153f0" containerID="b843248d631ac71b9d45bde39fdaf70d1a266066e8ce427701beec09f6ff2e73" exitCode=0 Nov 21 11:12:58 crc kubenswrapper[4972]: I1121 11:12:58.167304 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" event={"ID":"20682c16-9a92-4ca0-bb7e-ad0b023153f0","Type":"ContainerDied","Data":"b843248d631ac71b9d45bde39fdaf70d1a266066e8ce427701beec09f6ff2e73"} Nov 21 11:12:58 crc kubenswrapper[4972]: I1121 11:12:58.167328 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" event={"ID":"20682c16-9a92-4ca0-bb7e-ad0b023153f0","Type":"ContainerStarted","Data":"ee78ef9b77e34f687a02234323878d79dcb6574e72d9cab77efd6b46190602fd"} Nov 21 11:12:58 crc kubenswrapper[4972]: I1121 11:12:58.172655 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-747d6fb59b-5vqj5" event={"ID":"5bdf9f42-bfbe-474f-86d1-3edbe94c09ac","Type":"ContainerStarted","Data":"dc755da146f7a3561ae67a4a323265961e974fb6c62c835b4129b996c060bab6"} Nov 21 11:12:58 crc kubenswrapper[4972]: I1121 11:12:58.172709 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-747d6fb59b-5vqj5" event={"ID":"5bdf9f42-bfbe-474f-86d1-3edbe94c09ac","Type":"ContainerStarted","Data":"ac703bd027dabd23a0feaa292ffce64a72a1a7bc1be32ecddddd7a9fab1ef84c"} Nov 21 11:12:58 crc kubenswrapper[4972]: I1121 11:12:58.172719 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-747d6fb59b-5vqj5" event={"ID":"5bdf9f42-bfbe-474f-86d1-3edbe94c09ac","Type":"ContainerStarted","Data":"9c616004747d5ad2c4d08c4a7c51cb86eccc1678a678576339e78b8b74cc8f06"} Nov 21 11:12:58 crc kubenswrapper[4972]: I1121 
11:12:58.173488 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-747d6fb59b-5vqj5" Nov 21 11:12:58 crc kubenswrapper[4972]: I1121 11:12:58.178444 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5ddd898675-msv98" event={"ID":"956664ed-d3c8-467a-ba7e-ced0e72d00a4","Type":"ContainerStarted","Data":"f0094480bfb10778f1b00cbf38e91774358239ed009e97d37f7ee39032675e4f"} Nov 21 11:12:58 crc kubenswrapper[4972]: I1121 11:12:58.178471 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5ddd898675-msv98" event={"ID":"956664ed-d3c8-467a-ba7e-ced0e72d00a4","Type":"ContainerStarted","Data":"1401b0ba383eff52932d26d87a280a1ac43174176cfa1f51bd0fe75e3d405aad"} Nov 21 11:12:58 crc kubenswrapper[4972]: I1121 11:12:58.178481 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5ddd898675-msv98" event={"ID":"956664ed-d3c8-467a-ba7e-ced0e72d00a4","Type":"ContainerStarted","Data":"6ec15ccae15c91aa70010a52556cc61467096ed57b6232cc36de0a8a105d1893"} Nov 21 11:12:58 crc kubenswrapper[4972]: I1121 11:12:58.180413 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6946b977f8-97tkv" event={"ID":"ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5","Type":"ContainerStarted","Data":"4ba59cc61693349558651f7874943e185f356bf020d0303ab21353af6c04c54d"} Nov 21 11:12:58 crc kubenswrapper[4972]: I1121 11:12:58.180437 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6946b977f8-97tkv" event={"ID":"ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5","Type":"ContainerStarted","Data":"aea0ddf36a9fae7453fe5e3d7172952b4775c04f779434dababd50f1541d92b8"} Nov 21 11:12:58 crc kubenswrapper[4972]: I1121 11:12:58.180447 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6946b977f8-97tkv" event={"ID":"ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5","Type":"ContainerStarted","Data":"e9421e76c435b3bcf5fe504f62a9b190a5380719680cfeaa8c25b25682d60ec0"} Nov 21 11:12:58 crc kubenswrapper[4972]: I1121 11:12:58.238291 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6946b977f8-97tkv" podStartSLOduration=2.238270924 podStartE2EDuration="2.238270924s" podCreationTimestamp="2025-11-21 11:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:12:58.237151734 +0000 UTC m=+5523.346294232" watchObservedRunningTime="2025-11-21 11:12:58.238270924 +0000 UTC m=+5523.347413422" Nov 21 11:12:58 crc kubenswrapper[4972]: I1121 11:12:58.275233 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-747d6fb59b-5vqj5" podStartSLOduration=2.2752142920000002 podStartE2EDuration="2.275214292s" podCreationTimestamp="2025-11-21 11:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:12:58.274075952 +0000 UTC m=+5523.383218460" watchObservedRunningTime="2025-11-21 11:12:58.275214292 +0000 UTC m=+5523.384356790" Nov 21 11:12:59 crc kubenswrapper[4972]: I1121 11:12:59.197720 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" 
event={"ID":"20682c16-9a92-4ca0-bb7e-ad0b023153f0","Type":"ContainerStarted","Data":"5dadbaa247ba737ff19106193ec5e9757e987d045afe99b785cf3291f88390fd"} Nov 21 11:12:59 crc kubenswrapper[4972]: I1121 11:12:59.198854 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-747d6fb59b-5vqj5" Nov 21 11:12:59 crc kubenswrapper[4972]: I1121 11:12:59.236194 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5ddd898675-msv98" podStartSLOduration=3.236175099 podStartE2EDuration="3.236175099s" podCreationTimestamp="2025-11-21 11:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:12:58.29852582 +0000 UTC m=+5523.407668338" watchObservedRunningTime="2025-11-21 11:12:59.236175099 +0000 UTC m=+5524.345317607" Nov 21 11:12:59 crc kubenswrapper[4972]: I1121 11:12:59.238537 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" podStartSLOduration=3.238529782 podStartE2EDuration="3.238529782s" podCreationTimestamp="2025-11-21 11:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:12:59.232407429 +0000 UTC m=+5524.341549947" watchObservedRunningTime="2025-11-21 11:12:59.238529782 +0000 UTC m=+5524.347672290" Nov 21 11:13:00 crc kubenswrapper[4972]: I1121 11:13:00.206279 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" Nov 21 11:13:06 crc kubenswrapper[4972]: I1121 11:13:06.930415 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" Nov 21 11:13:07 crc kubenswrapper[4972]: I1121 11:13:07.008142 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dc647c5f5-mdsjj"] Nov 21 11:13:07 crc kubenswrapper[4972]: I1121 11:13:07.008437 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" podUID="6f234d50-67b2-4d5b-a696-cf60362a29b8" containerName="dnsmasq-dns" containerID="cri-o://05d7d374f83536c7936b4c830d717d30bc7576e982513a33ea4899fc15f5fd3d" gracePeriod=10 Nov 21 11:13:07 crc kubenswrapper[4972]: I1121 11:13:07.281219 4972 generic.go:334] "Generic (PLEG): container finished" podID="6f234d50-67b2-4d5b-a696-cf60362a29b8" containerID="05d7d374f83536c7936b4c830d717d30bc7576e982513a33ea4899fc15f5fd3d" exitCode=0 Nov 21 11:13:07 crc kubenswrapper[4972]: I1121 11:13:07.281377 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" event={"ID":"6f234d50-67b2-4d5b-a696-cf60362a29b8","Type":"ContainerDied","Data":"05d7d374f83536c7936b4c830d717d30bc7576e982513a33ea4899fc15f5fd3d"} Nov 21 11:13:07 crc kubenswrapper[4972]: I1121 11:13:07.496617 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" Nov 21 11:13:07 crc kubenswrapper[4972]: I1121 11:13:07.523116 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-config\") pod \"6f234d50-67b2-4d5b-a696-cf60362a29b8\" (UID: \"6f234d50-67b2-4d5b-a696-cf60362a29b8\") " Nov 21 11:13:07 crc kubenswrapper[4972]: I1121 11:13:07.523191 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-ovsdbserver-sb\") pod \"6f234d50-67b2-4d5b-a696-cf60362a29b8\" (UID: \"6f234d50-67b2-4d5b-a696-cf60362a29b8\") " Nov 21 11:13:07 crc kubenswrapper[4972]: I1121 11:13:07.523269 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-ovsdbserver-nb\") pod \"6f234d50-67b2-4d5b-a696-cf60362a29b8\" (UID: \"6f234d50-67b2-4d5b-a696-cf60362a29b8\") " Nov 21 11:13:07 crc kubenswrapper[4972]: I1121 11:13:07.523331 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7594x\" (UniqueName: \"kubernetes.io/projected/6f234d50-67b2-4d5b-a696-cf60362a29b8-kube-api-access-7594x\") pod \"6f234d50-67b2-4d5b-a696-cf60362a29b8\" (UID: \"6f234d50-67b2-4d5b-a696-cf60362a29b8\") " Nov 21 11:13:07 crc kubenswrapper[4972]: I1121 11:13:07.523366 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-dns-svc\") pod \"6f234d50-67b2-4d5b-a696-cf60362a29b8\" (UID: \"6f234d50-67b2-4d5b-a696-cf60362a29b8\") " Nov 21 11:13:07 crc kubenswrapper[4972]: I1121 11:13:07.533342 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f234d50-67b2-4d5b-a696-cf60362a29b8-kube-api-access-7594x" (OuterVolumeSpecName: "kube-api-access-7594x") pod "6f234d50-67b2-4d5b-a696-cf60362a29b8" (UID: "6f234d50-67b2-4d5b-a696-cf60362a29b8"). InnerVolumeSpecName "kube-api-access-7594x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:13:07 crc kubenswrapper[4972]: I1121 11:13:07.568607 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-config" (OuterVolumeSpecName: "config") pod "6f234d50-67b2-4d5b-a696-cf60362a29b8" (UID: "6f234d50-67b2-4d5b-a696-cf60362a29b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:13:07 crc kubenswrapper[4972]: I1121 11:13:07.568922 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6f234d50-67b2-4d5b-a696-cf60362a29b8" (UID: "6f234d50-67b2-4d5b-a696-cf60362a29b8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:13:07 crc kubenswrapper[4972]: I1121 11:13:07.588360 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6f234d50-67b2-4d5b-a696-cf60362a29b8" (UID: "6f234d50-67b2-4d5b-a696-cf60362a29b8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:13:07 crc kubenswrapper[4972]: I1121 11:13:07.590992 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f234d50-67b2-4d5b-a696-cf60362a29b8" (UID: "6f234d50-67b2-4d5b-a696-cf60362a29b8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:13:07 crc kubenswrapper[4972]: I1121 11:13:07.632822 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-config\") on node \"crc\" DevicePath \"\"" Nov 21 11:13:07 crc kubenswrapper[4972]: I1121 11:13:07.632882 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 11:13:07 crc kubenswrapper[4972]: I1121 11:13:07.632896 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 11:13:07 crc kubenswrapper[4972]: I1121 11:13:07.632908 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7594x\" (UniqueName: \"kubernetes.io/projected/6f234d50-67b2-4d5b-a696-cf60362a29b8-kube-api-access-7594x\") on node \"crc\" DevicePath \"\"" Nov 21 11:13:07 crc kubenswrapper[4972]: I1121 11:13:07.632921 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f234d50-67b2-4d5b-a696-cf60362a29b8-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 11:13:08 crc kubenswrapper[4972]: I1121 11:13:08.294024 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" event={"ID":"6f234d50-67b2-4d5b-a696-cf60362a29b8","Type":"ContainerDied","Data":"3ec3e28573349f9d8e98b338b1b93248aeddeef60ca58305d34dbb1eaa6bb926"} Nov 21 11:13:08 crc kubenswrapper[4972]: I1121 11:13:08.294086 4972 scope.go:117] "RemoveContainer" containerID="05d7d374f83536c7936b4c830d717d30bc7576e982513a33ea4899fc15f5fd3d" Nov 21 11:13:08 crc kubenswrapper[4972]: I1121 11:13:08.295915 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7dc647c5f5-mdsjj" Nov 21 11:13:08 crc kubenswrapper[4972]: I1121 11:13:08.338452 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dc647c5f5-mdsjj"] Nov 21 11:13:08 crc kubenswrapper[4972]: I1121 11:13:08.339758 4972 scope.go:117] "RemoveContainer" containerID="09729acdbda9532e7e62e120581b66c179c5a4b3c9064c0dff047bb0b9005f4d" Nov 21 11:13:08 crc kubenswrapper[4972]: I1121 11:13:08.345646 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7dc647c5f5-mdsjj"] Nov 21 11:13:08 crc kubenswrapper[4972]: I1121 11:13:08.360960 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-747d6fb59b-5vqj5" Nov 21 11:13:08 crc kubenswrapper[4972]: I1121 11:13:08.402602 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-747d6fb59b-5vqj5" Nov 21 11:13:09 crc kubenswrapper[4972]: I1121 11:13:09.770631 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f234d50-67b2-4d5b-a696-cf60362a29b8" path="/var/lib/kubelet/pods/6f234d50-67b2-4d5b-a696-cf60362a29b8/volumes" Nov 21 11:13:20 crc kubenswrapper[4972]: I1121 11:13:20.686398 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-2jcnh"] Nov 21 11:13:20 crc kubenswrapper[4972]: E1121 11:13:20.687531 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f234d50-67b2-4d5b-a696-cf60362a29b8" containerName="dnsmasq-dns" Nov 21 11:13:20 crc kubenswrapper[4972]: I1121 11:13:20.687554 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f234d50-67b2-4d5b-a696-cf60362a29b8" containerName="dnsmasq-dns" Nov 21 11:13:20 crc kubenswrapper[4972]: E1121 11:13:20.687587 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f234d50-67b2-4d5b-a696-cf60362a29b8" containerName="init" Nov 21 11:13:20 crc kubenswrapper[4972]: I1121 11:13:20.687596 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f234d50-67b2-4d5b-a696-cf60362a29b8" containerName="init" Nov 21 11:13:20 crc kubenswrapper[4972]: I1121 11:13:20.687872 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f234d50-67b2-4d5b-a696-cf60362a29b8" containerName="dnsmasq-dns" Nov 21 11:13:20 crc kubenswrapper[4972]: I1121 11:13:20.688718 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-2jcnh" Nov 21 11:13:20 crc kubenswrapper[4972]: I1121 11:13:20.697396 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2jcnh"] Nov 21 11:13:20 crc kubenswrapper[4972]: I1121 11:13:20.795964 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7007310e-51eb-46e9-9344-2458f4d82516-operator-scripts\") pod \"neutron-db-create-2jcnh\" (UID: \"7007310e-51eb-46e9-9344-2458f4d82516\") " pod="openstack/neutron-db-create-2jcnh" Nov 21 11:13:20 crc kubenswrapper[4972]: I1121 11:13:20.796208 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6nr2\" (UniqueName: \"kubernetes.io/projected/7007310e-51eb-46e9-9344-2458f4d82516-kube-api-access-m6nr2\") pod \"neutron-db-create-2jcnh\" (UID: \"7007310e-51eb-46e9-9344-2458f4d82516\") " pod="openstack/neutron-db-create-2jcnh" Nov 21 11:13:20 crc kubenswrapper[4972]: I1121 11:13:20.796613 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-daf3-account-create-b7tcc"] Nov 21 11:13:20 crc kubenswrapper[4972]: I1121 11:13:20.797923 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-daf3-account-create-b7tcc" Nov 21 11:13:20 crc kubenswrapper[4972]: I1121 11:13:20.804243 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 21 11:13:20 crc kubenswrapper[4972]: I1121 11:13:20.812547 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-daf3-account-create-b7tcc"] Nov 21 11:13:20 crc kubenswrapper[4972]: I1121 11:13:20.898070 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7007310e-51eb-46e9-9344-2458f4d82516-operator-scripts\") pod \"neutron-db-create-2jcnh\" (UID: \"7007310e-51eb-46e9-9344-2458f4d82516\") " pod="openstack/neutron-db-create-2jcnh" Nov 21 11:13:20 crc kubenswrapper[4972]: I1121 11:13:20.898119 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h89s\" (UniqueName: \"kubernetes.io/projected/e54069fb-21b9-4f93-92d7-677cd9490299-kube-api-access-9h89s\") pod \"neutron-daf3-account-create-b7tcc\" (UID: \"e54069fb-21b9-4f93-92d7-677cd9490299\") " pod="openstack/neutron-daf3-account-create-b7tcc" Nov 21 11:13:20 crc kubenswrapper[4972]: I1121 11:13:20.898198 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e54069fb-21b9-4f93-92d7-677cd9490299-operator-scripts\") pod \"neutron-daf3-account-create-b7tcc\" (UID: \"e54069fb-21b9-4f93-92d7-677cd9490299\") " pod="openstack/neutron-daf3-account-create-b7tcc" Nov 21 11:13:20 crc kubenswrapper[4972]: I1121 11:13:20.898257 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6nr2\" (UniqueName: \"kubernetes.io/projected/7007310e-51eb-46e9-9344-2458f4d82516-kube-api-access-m6nr2\") pod \"neutron-db-create-2jcnh\" (UID: \"7007310e-51eb-46e9-9344-2458f4d82516\") " pod="openstack/neutron-db-create-2jcnh" Nov 21 11:13:20 crc kubenswrapper[4972]: I1121 11:13:20.899459 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7007310e-51eb-46e9-9344-2458f4d82516-operator-scripts\") pod \"neutron-db-create-2jcnh\" (UID: \"7007310e-51eb-46e9-9344-2458f4d82516\") " pod="openstack/neutron-db-create-2jcnh" Nov 21 11:13:20 crc kubenswrapper[4972]: I1121 11:13:20.939317 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6nr2\" (UniqueName: \"kubernetes.io/projected/7007310e-51eb-46e9-9344-2458f4d82516-kube-api-access-m6nr2\") pod \"neutron-db-create-2jcnh\" (UID: \"7007310e-51eb-46e9-9344-2458f4d82516\") " pod="openstack/neutron-db-create-2jcnh" Nov 21 11:13:20 crc kubenswrapper[4972]: I1121 11:13:20.999532 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h89s\" (UniqueName: \"kubernetes.io/projected/e54069fb-21b9-4f93-92d7-677cd9490299-kube-api-access-9h89s\") pod \"neutron-daf3-account-create-b7tcc\" (UID: \"e54069fb-21b9-4f93-92d7-677cd9490299\") " pod="openstack/neutron-daf3-account-create-b7tcc" Nov 21 11:13:21 crc kubenswrapper[4972]: I1121 11:13:20.999955 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e54069fb-21b9-4f93-92d7-677cd9490299-operator-scripts\") pod \"neutron-daf3-account-create-b7tcc\" (UID: \"e54069fb-21b9-4f93-92d7-677cd9490299\") " pod="openstack/neutron-daf3-account-create-b7tcc" Nov 21 11:13:21 crc kubenswrapper[4972]: I1121 11:13:21.000794 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e54069fb-21b9-4f93-92d7-677cd9490299-operator-scripts\") pod \"neutron-daf3-account-create-b7tcc\" (UID: \"e54069fb-21b9-4f93-92d7-677cd9490299\") " pod="openstack/neutron-daf3-account-create-b7tcc" Nov 21 11:13:21 crc kubenswrapper[4972]: I1121 11:13:21.021950 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h89s\" (UniqueName: \"kubernetes.io/projected/e54069fb-21b9-4f93-92d7-677cd9490299-kube-api-access-9h89s\") pod \"neutron-daf3-account-create-b7tcc\" (UID: \"e54069fb-21b9-4f93-92d7-677cd9490299\") " pod="openstack/neutron-daf3-account-create-b7tcc" Nov 21 11:13:21 crc kubenswrapper[4972]: I1121 11:13:21.022420 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2jcnh" Nov 21 11:13:21 crc kubenswrapper[4972]: I1121 11:13:21.135991 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-daf3-account-create-b7tcc" Nov 21 11:13:21 crc kubenswrapper[4972]: I1121 11:13:21.535346 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2jcnh"] Nov 21 11:13:21 crc kubenswrapper[4972]: I1121 11:13:21.705223 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-daf3-account-create-b7tcc"] Nov 21 11:13:22 crc kubenswrapper[4972]: I1121 11:13:22.446988 4972 generic.go:334] "Generic (PLEG): container finished" podID="e54069fb-21b9-4f93-92d7-677cd9490299" containerID="70ed00bc71feab9b52b539958d8cdbcdef290a3352ef5be6bfdd9be17df9eeff" exitCode=0 Nov 21 11:13:22 crc kubenswrapper[4972]: I1121 11:13:22.447079 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-daf3-account-create-b7tcc" event={"ID":"e54069fb-21b9-4f93-92d7-677cd9490299","Type":"ContainerDied","Data":"70ed00bc71feab9b52b539958d8cdbcdef290a3352ef5be6bfdd9be17df9eeff"} Nov 21 11:13:22 crc kubenswrapper[4972]: I1121 11:13:22.447441 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-daf3-account-create-b7tcc" event={"ID":"e54069fb-21b9-4f93-92d7-677cd9490299","Type":"ContainerStarted","Data":"9e8b9bf0f5af4ead116a46068750934121ee7c131d05e439b820036b3431d347"} Nov 21 11:13:22 crc kubenswrapper[4972]: I1121 11:13:22.449595 4972 generic.go:334] "Generic (PLEG): container finished" podID="7007310e-51eb-46e9-9344-2458f4d82516" containerID="cab3fbe8a5160f5f0bacdb36d7310bcc9760138a6f7d6a7aae41a188e9811029" exitCode=0 Nov 21 11:13:22 crc kubenswrapper[4972]: I1121 11:13:22.449665 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2jcnh" event={"ID":"7007310e-51eb-46e9-9344-2458f4d82516","Type":"ContainerDied","Data":"cab3fbe8a5160f5f0bacdb36d7310bcc9760138a6f7d6a7aae41a188e9811029"} Nov 21 11:13:22 crc kubenswrapper[4972]: I1121 11:13:22.449691 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2jcnh" event={"ID":"7007310e-51eb-46e9-9344-2458f4d82516","Type":"ContainerStarted","Data":"ab33d245475258e1e254d249a42d0dd987d1bff34fd1d56ca8f4779a197183e8"} Nov 21 11:13:23 crc kubenswrapper[4972]: I1121 11:13:23.942176 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-daf3-account-create-b7tcc" Nov 21 11:13:23 crc kubenswrapper[4972]: I1121 11:13:23.953176 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-2jcnh" Nov 21 11:13:24 crc kubenswrapper[4972]: I1121 11:13:24.058340 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6nr2\" (UniqueName: \"kubernetes.io/projected/7007310e-51eb-46e9-9344-2458f4d82516-kube-api-access-m6nr2\") pod \"7007310e-51eb-46e9-9344-2458f4d82516\" (UID: \"7007310e-51eb-46e9-9344-2458f4d82516\") " Nov 21 11:13:24 crc kubenswrapper[4972]: I1121 11:13:24.058634 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7007310e-51eb-46e9-9344-2458f4d82516-operator-scripts\") pod \"7007310e-51eb-46e9-9344-2458f4d82516\" (UID: \"7007310e-51eb-46e9-9344-2458f4d82516\") " Nov 21 11:13:24 crc kubenswrapper[4972]: I1121 11:13:24.058774 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h89s\" (UniqueName: \"kubernetes.io/projected/e54069fb-21b9-4f93-92d7-677cd9490299-kube-api-access-9h89s\") pod \"e54069fb-21b9-4f93-92d7-677cd9490299\" (UID: \"e54069fb-21b9-4f93-92d7-677cd9490299\") " Nov 21 11:13:24 crc kubenswrapper[4972]: I1121 11:13:24.058896 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e54069fb-21b9-4f93-92d7-677cd9490299-operator-scripts\") pod \"e54069fb-21b9-4f93-92d7-677cd9490299\" (UID: \"e54069fb-21b9-4f93-92d7-677cd9490299\") " Nov 21 11:13:24 crc kubenswrapper[4972]: I1121 11:13:24.059717 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7007310e-51eb-46e9-9344-2458f4d82516-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7007310e-51eb-46e9-9344-2458f4d82516" (UID: "7007310e-51eb-46e9-9344-2458f4d82516"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:13:24 crc kubenswrapper[4972]: I1121 11:13:24.059723 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54069fb-21b9-4f93-92d7-677cd9490299-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e54069fb-21b9-4f93-92d7-677cd9490299" (UID: "e54069fb-21b9-4f93-92d7-677cd9490299"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:13:24 crc kubenswrapper[4972]: I1121 11:13:24.064405 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7007310e-51eb-46e9-9344-2458f4d82516-kube-api-access-m6nr2" (OuterVolumeSpecName: "kube-api-access-m6nr2") pod "7007310e-51eb-46e9-9344-2458f4d82516" (UID: "7007310e-51eb-46e9-9344-2458f4d82516"). InnerVolumeSpecName "kube-api-access-m6nr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:13:24 crc kubenswrapper[4972]: I1121 11:13:24.065971 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e54069fb-21b9-4f93-92d7-677cd9490299-kube-api-access-9h89s" (OuterVolumeSpecName: "kube-api-access-9h89s") pod "e54069fb-21b9-4f93-92d7-677cd9490299" (UID: "e54069fb-21b9-4f93-92d7-677cd9490299"). InnerVolumeSpecName "kube-api-access-9h89s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:13:24 crc kubenswrapper[4972]: I1121 11:13:24.160867 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6nr2\" (UniqueName: \"kubernetes.io/projected/7007310e-51eb-46e9-9344-2458f4d82516-kube-api-access-m6nr2\") on node \"crc\" DevicePath \"\"" Nov 21 11:13:24 crc kubenswrapper[4972]: I1121 11:13:24.160922 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7007310e-51eb-46e9-9344-2458f4d82516-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:13:24 crc kubenswrapper[4972]: I1121 11:13:24.160943 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h89s\" (UniqueName: \"kubernetes.io/projected/e54069fb-21b9-4f93-92d7-677cd9490299-kube-api-access-9h89s\") on node \"crc\" DevicePath \"\"" Nov 21 11:13:24 crc kubenswrapper[4972]: I1121 11:13:24.160960 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e54069fb-21b9-4f93-92d7-677cd9490299-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:13:24 crc kubenswrapper[4972]: I1121 11:13:24.476309 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-daf3-account-create-b7tcc" event={"ID":"e54069fb-21b9-4f93-92d7-677cd9490299","Type":"ContainerDied","Data":"9e8b9bf0f5af4ead116a46068750934121ee7c131d05e439b820036b3431d347"} Nov 21 11:13:24 crc kubenswrapper[4972]: I1121 11:13:24.476374 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e8b9bf0f5af4ead116a46068750934121ee7c131d05e439b820036b3431d347" Nov 21 11:13:24 crc kubenswrapper[4972]: I1121 11:13:24.476334 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-daf3-account-create-b7tcc" Nov 21 11:13:24 crc kubenswrapper[4972]: I1121 11:13:24.478965 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2jcnh" event={"ID":"7007310e-51eb-46e9-9344-2458f4d82516","Type":"ContainerDied","Data":"ab33d245475258e1e254d249a42d0dd987d1bff34fd1d56ca8f4779a197183e8"} Nov 21 11:13:24 crc kubenswrapper[4972]: I1121 11:13:24.479019 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab33d245475258e1e254d249a42d0dd987d1bff34fd1d56ca8f4779a197183e8" Nov 21 11:13:24 crc kubenswrapper[4972]: I1121 11:13:24.479227 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-2jcnh" Nov 21 11:13:25 crc kubenswrapper[4972]: I1121 11:13:25.976586 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-v9pb4"] Nov 21 11:13:25 crc kubenswrapper[4972]: E1121 11:13:25.978661 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54069fb-21b9-4f93-92d7-677cd9490299" containerName="mariadb-account-create" Nov 21 11:13:25 crc kubenswrapper[4972]: I1121 11:13:25.978989 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54069fb-21b9-4f93-92d7-677cd9490299" containerName="mariadb-account-create" Nov 21 11:13:25 crc kubenswrapper[4972]: E1121 11:13:25.979156 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7007310e-51eb-46e9-9344-2458f4d82516" containerName="mariadb-database-create" Nov 21 11:13:25 crc kubenswrapper[4972]: I1121 11:13:25.979281 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="7007310e-51eb-46e9-9344-2458f4d82516" containerName="mariadb-database-create" Nov 21 11:13:25 crc kubenswrapper[4972]: I1121 11:13:25.979687 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="7007310e-51eb-46e9-9344-2458f4d82516" containerName="mariadb-database-create" Nov 21 11:13:25 crc kubenswrapper[4972]: I1121 11:13:25.979875 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="e54069fb-21b9-4f93-92d7-677cd9490299" containerName="mariadb-account-create" Nov 21 11:13:25 crc kubenswrapper[4972]: I1121 11:13:25.981352 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-v9pb4" Nov 21 11:13:25 crc kubenswrapper[4972]: I1121 11:13:25.985981 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-k4pbp" Nov 21 11:13:25 crc kubenswrapper[4972]: I1121 11:13:25.986314 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 21 11:13:25 crc kubenswrapper[4972]: I1121 11:13:25.986373 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 21 11:13:25 crc kubenswrapper[4972]: I1121 11:13:25.989604 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-v9pb4"] Nov 21 11:13:26 crc kubenswrapper[4972]: I1121 11:13:26.102367 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x2cb\" (UniqueName: \"kubernetes.io/projected/9020c23e-fa46-4c43-8052-26bd2ce4e4ea-kube-api-access-2x2cb\") pod \"neutron-db-sync-v9pb4\" (UID: \"9020c23e-fa46-4c43-8052-26bd2ce4e4ea\") " pod="openstack/neutron-db-sync-v9pb4" Nov 21 11:13:26 crc kubenswrapper[4972]: I1121 11:13:26.102466 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9020c23e-fa46-4c43-8052-26bd2ce4e4ea-config\") pod \"neutron-db-sync-v9pb4\" (UID: \"9020c23e-fa46-4c43-8052-26bd2ce4e4ea\") " pod="openstack/neutron-db-sync-v9pb4" Nov 21 11:13:26 crc kubenswrapper[4972]: I1121 11:13:26.102644 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9020c23e-fa46-4c43-8052-26bd2ce4e4ea-combined-ca-bundle\") pod \"neutron-db-sync-v9pb4\" (UID: \"9020c23e-fa46-4c43-8052-26bd2ce4e4ea\") " pod="openstack/neutron-db-sync-v9pb4" Nov 21 11:13:26 crc kubenswrapper[4972]: I1121 11:13:26.204775 4972 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9020c23e-fa46-4c43-8052-26bd2ce4e4ea-config\") pod \"neutron-db-sync-v9pb4\" (UID: \"9020c23e-fa46-4c43-8052-26bd2ce4e4ea\") " pod="openstack/neutron-db-sync-v9pb4" Nov 21 11:13:26 crc kubenswrapper[4972]: I1121 11:13:26.205293 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9020c23e-fa46-4c43-8052-26bd2ce4e4ea-combined-ca-bundle\") pod \"neutron-db-sync-v9pb4\" (UID: \"9020c23e-fa46-4c43-8052-26bd2ce4e4ea\") " pod="openstack/neutron-db-sync-v9pb4" Nov 21 11:13:26 crc kubenswrapper[4972]: I1121 11:13:26.205468 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x2cb\" (UniqueName: \"kubernetes.io/projected/9020c23e-fa46-4c43-8052-26bd2ce4e4ea-kube-api-access-2x2cb\") pod \"neutron-db-sync-v9pb4\" (UID: \"9020c23e-fa46-4c43-8052-26bd2ce4e4ea\") " pod="openstack/neutron-db-sync-v9pb4" Nov 21 11:13:26 crc kubenswrapper[4972]: I1121 11:13:26.211262 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9020c23e-fa46-4c43-8052-26bd2ce4e4ea-config\") pod \"neutron-db-sync-v9pb4\" (UID: \"9020c23e-fa46-4c43-8052-26bd2ce4e4ea\") " pod="openstack/neutron-db-sync-v9pb4" Nov 21 11:13:26 crc kubenswrapper[4972]: I1121 11:13:26.217097 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9020c23e-fa46-4c43-8052-26bd2ce4e4ea-combined-ca-bundle\") pod \"neutron-db-sync-v9pb4\" (UID: \"9020c23e-fa46-4c43-8052-26bd2ce4e4ea\") " pod="openstack/neutron-db-sync-v9pb4" Nov 21 11:13:26 crc kubenswrapper[4972]: I1121 11:13:26.228822 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x2cb\" (UniqueName: \"kubernetes.io/projected/9020c23e-fa46-4c43-8052-26bd2ce4e4ea-kube-api-access-2x2cb\") pod \"neutron-db-sync-v9pb4\" (UID: \"9020c23e-fa46-4c43-8052-26bd2ce4e4ea\") " pod="openstack/neutron-db-sync-v9pb4" Nov 21 11:13:26 crc kubenswrapper[4972]: I1121 11:13:26.317944 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-v9pb4" Nov 21 11:13:26 crc kubenswrapper[4972]: I1121 11:13:26.863275 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-v9pb4"] Nov 21 11:13:26 crc kubenswrapper[4972]: I1121 11:13:26.924955 4972 scope.go:117] "RemoveContainer" containerID="8f52e0889341e84165c5bb996d3c4968e5e9fdf9f66fec131e9811e949815244" Nov 21 11:13:26 crc kubenswrapper[4972]: I1121 11:13:26.998223 4972 scope.go:117] "RemoveContainer" containerID="f3113c6c862b6eed0449fe7519479cc9a53915cba48d10642554e2e8a5d51a22" Nov 21 11:13:27 crc kubenswrapper[4972]: I1121 11:13:27.446919 4972 scope.go:117] "RemoveContainer" containerID="1a844b8d09573d9c09a454ca32b28371a4c0f499579d65b64bd8f5448a089b42" Nov 21 11:13:27 crc kubenswrapper[4972]: I1121 11:13:27.508144 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-v9pb4" event={"ID":"9020c23e-fa46-4c43-8052-26bd2ce4e4ea","Type":"ContainerStarted","Data":"3280f27cc40ea67752c52ddb8356b9ae2c0edd33d2843eac017e270730cd96bc"} Nov 21 11:13:28 crc kubenswrapper[4972]: I1121 11:13:28.522324 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-v9pb4" event={"ID":"9020c23e-fa46-4c43-8052-26bd2ce4e4ea","Type":"ContainerStarted","Data":"df8ab6020cb43d59d35da9929b4c2835e5d8ed124c4d62c1e5d258930fc03bba"} Nov 21 11:13:28 crc kubenswrapper[4972]: I1121 11:13:28.561631 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-v9pb4" podStartSLOduration=3.561600645 podStartE2EDuration="3.561600645s" podCreationTimestamp="2025-11-21 11:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:13:28.544787389 +0000 UTC m=+5553.653929947" watchObservedRunningTime="2025-11-21 11:13:28.561600645 +0000 UTC m=+5553.670743183" Nov 21 11:13:31 crc kubenswrapper[4972]: I1121 11:13:31.553154 4972 generic.go:334] "Generic (PLEG): container finished" podID="9020c23e-fa46-4c43-8052-26bd2ce4e4ea" containerID="df8ab6020cb43d59d35da9929b4c2835e5d8ed124c4d62c1e5d258930fc03bba" exitCode=0 Nov 21 11:13:31 crc kubenswrapper[4972]: I1121 11:13:31.553213 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-v9pb4" event={"ID":"9020c23e-fa46-4c43-8052-26bd2ce4e4ea","Type":"ContainerDied","Data":"df8ab6020cb43d59d35da9929b4c2835e5d8ed124c4d62c1e5d258930fc03bba"} Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.002269 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-v9pb4" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.073673 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9020c23e-fa46-4c43-8052-26bd2ce4e4ea-config\") pod \"9020c23e-fa46-4c43-8052-26bd2ce4e4ea\" (UID: \"9020c23e-fa46-4c43-8052-26bd2ce4e4ea\") " Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.073758 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9020c23e-fa46-4c43-8052-26bd2ce4e4ea-combined-ca-bundle\") pod \"9020c23e-fa46-4c43-8052-26bd2ce4e4ea\" (UID: \"9020c23e-fa46-4c43-8052-26bd2ce4e4ea\") " Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.073800 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x2cb\" (UniqueName: \"kubernetes.io/projected/9020c23e-fa46-4c43-8052-26bd2ce4e4ea-kube-api-access-2x2cb\") pod \"9020c23e-fa46-4c43-8052-26bd2ce4e4ea\" (UID: \"9020c23e-fa46-4c43-8052-26bd2ce4e4ea\") " Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.079013 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9020c23e-fa46-4c43-8052-26bd2ce4e4ea-kube-api-access-2x2cb" (OuterVolumeSpecName: "kube-api-access-2x2cb") pod "9020c23e-fa46-4c43-8052-26bd2ce4e4ea" (UID: "9020c23e-fa46-4c43-8052-26bd2ce4e4ea"). InnerVolumeSpecName "kube-api-access-2x2cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:13:33 crc kubenswrapper[4972]: E1121 11:13:33.113254 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9020c23e-fa46-4c43-8052-26bd2ce4e4ea-combined-ca-bundle podName:9020c23e-fa46-4c43-8052-26bd2ce4e4ea nodeName:}" failed. No retries permitted until 2025-11-21 11:13:33.613222605 +0000 UTC m=+5558.722365123 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/9020c23e-fa46-4c43-8052-26bd2ce4e4ea-combined-ca-bundle") pod "9020c23e-fa46-4c43-8052-26bd2ce4e4ea" (UID: "9020c23e-fa46-4c43-8052-26bd2ce4e4ea") : error deleting /var/lib/kubelet/pods/9020c23e-fa46-4c43-8052-26bd2ce4e4ea/volume-subpaths: remove /var/lib/kubelet/pods/9020c23e-fa46-4c43-8052-26bd2ce4e4ea/volume-subpaths: no such file or directory Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.116563 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9020c23e-fa46-4c43-8052-26bd2ce4e4ea-config" (OuterVolumeSpecName: "config") pod "9020c23e-fa46-4c43-8052-26bd2ce4e4ea" (UID: "9020c23e-fa46-4c43-8052-26bd2ce4e4ea"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.176251 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9020c23e-fa46-4c43-8052-26bd2ce4e4ea-config\") on node \"crc\" DevicePath \"\"" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.176299 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x2cb\" (UniqueName: \"kubernetes.io/projected/9020c23e-fa46-4c43-8052-26bd2ce4e4ea-kube-api-access-2x2cb\") on node \"crc\" DevicePath \"\"" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.579821 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-v9pb4" event={"ID":"9020c23e-fa46-4c43-8052-26bd2ce4e4ea","Type":"ContainerDied","Data":"3280f27cc40ea67752c52ddb8356b9ae2c0edd33d2843eac017e270730cd96bc"} Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.579886 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3280f27cc40ea67752c52ddb8356b9ae2c0edd33d2843eac017e270730cd96bc" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.579893 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-v9pb4" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.685005 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9020c23e-fa46-4c43-8052-26bd2ce4e4ea-combined-ca-bundle\") pod \"9020c23e-fa46-4c43-8052-26bd2ce4e4ea\" (UID: \"9020c23e-fa46-4c43-8052-26bd2ce4e4ea\") " Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.693061 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9020c23e-fa46-4c43-8052-26bd2ce4e4ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9020c23e-fa46-4c43-8052-26bd2ce4e4ea" (UID: "9020c23e-fa46-4c43-8052-26bd2ce4e4ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.751543 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848679d585-l8m9p"] Nov 21 11:13:33 crc kubenswrapper[4972]: E1121 11:13:33.752069 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9020c23e-fa46-4c43-8052-26bd2ce4e4ea" containerName="neutron-db-sync" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.752088 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="9020c23e-fa46-4c43-8052-26bd2ce4e4ea" containerName="neutron-db-sync" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.752293 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="9020c23e-fa46-4c43-8052-26bd2ce4e4ea" containerName="neutron-db-sync" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.753357 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848679d585-l8m9p" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.785540 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848679d585-l8m9p"] Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.786933 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9020c23e-fa46-4c43-8052-26bd2ce4e4ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.858640 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-697b5fbdf-gf62f"] Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.883314 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-697b5fbdf-gf62f"] Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.883448 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-697b5fbdf-gf62f" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.890844 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-config\") pod \"dnsmasq-dns-848679d585-l8m9p\" (UID: \"3eeaefff-9150-4268-81c3-10ad05d6a600\") " pod="openstack/dnsmasq-dns-848679d585-l8m9p" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.891012 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-ovsdbserver-sb\") pod \"dnsmasq-dns-848679d585-l8m9p\" (UID: \"3eeaefff-9150-4268-81c3-10ad05d6a600\") " pod="openstack/dnsmasq-dns-848679d585-l8m9p" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.891063 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-dns-svc\") pod \"dnsmasq-dns-848679d585-l8m9p\" (UID: \"3eeaefff-9150-4268-81c3-10ad05d6a600\") " pod="openstack/dnsmasq-dns-848679d585-l8m9p" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.891106 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-ovsdbserver-nb\") pod \"dnsmasq-dns-848679d585-l8m9p\" (UID: \"3eeaefff-9150-4268-81c3-10ad05d6a600\") " pod="openstack/dnsmasq-dns-848679d585-l8m9p" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.891249 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72dq6\" (UniqueName: \"kubernetes.io/projected/3eeaefff-9150-4268-81c3-10ad05d6a600-kube-api-access-72dq6\") pod \"dnsmasq-dns-848679d585-l8m9p\" (UID: \"3eeaefff-9150-4268-81c3-10ad05d6a600\") " pod="openstack/dnsmasq-dns-848679d585-l8m9p" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.992441 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-config\") pod \"dnsmasq-dns-848679d585-l8m9p\" (UID: \"3eeaefff-9150-4268-81c3-10ad05d6a600\") " pod="openstack/dnsmasq-dns-848679d585-l8m9p" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.992805 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-vmlgb\" (UniqueName: \"kubernetes.io/projected/1c8983ed-b01a-41ce-9f06-cf429e74f0c3-kube-api-access-vmlgb\") pod \"neutron-697b5fbdf-gf62f\" (UID: \"1c8983ed-b01a-41ce-9f06-cf429e74f0c3\") " pod="openstack/neutron-697b5fbdf-gf62f" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.992860 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-ovsdbserver-sb\") pod \"dnsmasq-dns-848679d585-l8m9p\" (UID: \"3eeaefff-9150-4268-81c3-10ad05d6a600\") " pod="openstack/dnsmasq-dns-848679d585-l8m9p" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.992884 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-dns-svc\") pod \"dnsmasq-dns-848679d585-l8m9p\" (UID: \"3eeaefff-9150-4268-81c3-10ad05d6a600\") " pod="openstack/dnsmasq-dns-848679d585-l8m9p" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.992905 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c8983ed-b01a-41ce-9f06-cf429e74f0c3-combined-ca-bundle\") pod \"neutron-697b5fbdf-gf62f\" (UID: \"1c8983ed-b01a-41ce-9f06-cf429e74f0c3\") " pod="openstack/neutron-697b5fbdf-gf62f" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.992921 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-ovsdbserver-nb\") pod \"dnsmasq-dns-848679d585-l8m9p\" (UID: \"3eeaefff-9150-4268-81c3-10ad05d6a600\") " pod="openstack/dnsmasq-dns-848679d585-l8m9p" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.992942 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1c8983ed-b01a-41ce-9f06-cf429e74f0c3-httpd-config\") pod \"neutron-697b5fbdf-gf62f\" (UID: \"1c8983ed-b01a-41ce-9f06-cf429e74f0c3\") " pod="openstack/neutron-697b5fbdf-gf62f" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.992975 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72dq6\" (UniqueName: \"kubernetes.io/projected/3eeaefff-9150-4268-81c3-10ad05d6a600-kube-api-access-72dq6\") pod \"dnsmasq-dns-848679d585-l8m9p\" (UID: \"3eeaefff-9150-4268-81c3-10ad05d6a600\") " pod="openstack/dnsmasq-dns-848679d585-l8m9p" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.993038 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1c8983ed-b01a-41ce-9f06-cf429e74f0c3-config\") pod \"neutron-697b5fbdf-gf62f\" (UID: \"1c8983ed-b01a-41ce-9f06-cf429e74f0c3\") " pod="openstack/neutron-697b5fbdf-gf62f" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.994048 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-ovsdbserver-sb\") pod \"dnsmasq-dns-848679d585-l8m9p\" (UID: \"3eeaefff-9150-4268-81c3-10ad05d6a600\") " pod="openstack/dnsmasq-dns-848679d585-l8m9p" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.994241 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-ovsdbserver-nb\") pod \"dnsmasq-dns-848679d585-l8m9p\" (UID: \"3eeaefff-9150-4268-81c3-10ad05d6a600\") " pod="openstack/dnsmasq-dns-848679d585-l8m9p" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.994052 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-config\") pod \"dnsmasq-dns-848679d585-l8m9p\" (UID: \"3eeaefff-9150-4268-81c3-10ad05d6a600\") " pod="openstack/dnsmasq-dns-848679d585-l8m9p" Nov 21 11:13:33 crc kubenswrapper[4972]: I1121 11:13:33.994473 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-dns-svc\") pod \"dnsmasq-dns-848679d585-l8m9p\" (UID: \"3eeaefff-9150-4268-81c3-10ad05d6a600\") " pod="openstack/dnsmasq-dns-848679d585-l8m9p" Nov 21 11:13:34 crc kubenswrapper[4972]: I1121 11:13:34.037933 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72dq6\" (UniqueName: \"kubernetes.io/projected/3eeaefff-9150-4268-81c3-10ad05d6a600-kube-api-access-72dq6\") pod \"dnsmasq-dns-848679d585-l8m9p\" (UID: \"3eeaefff-9150-4268-81c3-10ad05d6a600\") " pod="openstack/dnsmasq-dns-848679d585-l8m9p" Nov 21 11:13:34 crc kubenswrapper[4972]: I1121 11:13:34.082295 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848679d585-l8m9p" Nov 21 11:13:34 crc kubenswrapper[4972]: I1121 11:13:34.094095 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1c8983ed-b01a-41ce-9f06-cf429e74f0c3-config\") pod \"neutron-697b5fbdf-gf62f\" (UID: \"1c8983ed-b01a-41ce-9f06-cf429e74f0c3\") " pod="openstack/neutron-697b5fbdf-gf62f" Nov 21 11:13:34 crc kubenswrapper[4972]: I1121 11:13:34.094412 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmlgb\" (UniqueName: \"kubernetes.io/projected/1c8983ed-b01a-41ce-9f06-cf429e74f0c3-kube-api-access-vmlgb\") pod \"neutron-697b5fbdf-gf62f\" (UID: \"1c8983ed-b01a-41ce-9f06-cf429e74f0c3\") " pod="openstack/neutron-697b5fbdf-gf62f" Nov 21 11:13:34 crc kubenswrapper[4972]: I1121 11:13:34.094531 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c8983ed-b01a-41ce-9f06-cf429e74f0c3-combined-ca-bundle\") pod \"neutron-697b5fbdf-gf62f\" (UID: \"1c8983ed-b01a-41ce-9f06-cf429e74f0c3\") " pod="openstack/neutron-697b5fbdf-gf62f" Nov 21 11:13:34 crc kubenswrapper[4972]: I1121 11:13:34.094605 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1c8983ed-b01a-41ce-9f06-cf429e74f0c3-httpd-config\") pod \"neutron-697b5fbdf-gf62f\" (UID: \"1c8983ed-b01a-41ce-9f06-cf429e74f0c3\") " pod="openstack/neutron-697b5fbdf-gf62f" Nov 21 11:13:34 crc kubenswrapper[4972]: I1121 11:13:34.100484 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1c8983ed-b01a-41ce-9f06-cf429e74f0c3-httpd-config\") pod \"neutron-697b5fbdf-gf62f\" (UID: \"1c8983ed-b01a-41ce-9f06-cf429e74f0c3\") " pod="openstack/neutron-697b5fbdf-gf62f" Nov 21 11:13:34 crc kubenswrapper[4972]: I1121 11:13:34.102309 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/1c8983ed-b01a-41ce-9f06-cf429e74f0c3-config\") pod \"neutron-697b5fbdf-gf62f\" (UID: \"1c8983ed-b01a-41ce-9f06-cf429e74f0c3\") " pod="openstack/neutron-697b5fbdf-gf62f" Nov 21 11:13:34 crc kubenswrapper[4972]: I1121 11:13:34.115630 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c8983ed-b01a-41ce-9f06-cf429e74f0c3-combined-ca-bundle\") pod \"neutron-697b5fbdf-gf62f\" (UID: \"1c8983ed-b01a-41ce-9f06-cf429e74f0c3\") " pod="openstack/neutron-697b5fbdf-gf62f" Nov 21 11:13:34 crc kubenswrapper[4972]: I1121 11:13:34.119665 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmlgb\" (UniqueName: \"kubernetes.io/projected/1c8983ed-b01a-41ce-9f06-cf429e74f0c3-kube-api-access-vmlgb\") pod \"neutron-697b5fbdf-gf62f\" (UID: \"1c8983ed-b01a-41ce-9f06-cf429e74f0c3\") " pod="openstack/neutron-697b5fbdf-gf62f" Nov 21 11:13:34 crc kubenswrapper[4972]: I1121 11:13:34.256351 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-697b5fbdf-gf62f" Nov 21 11:13:34 crc kubenswrapper[4972]: I1121 11:13:34.567991 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848679d585-l8m9p"] Nov 21 11:13:34 crc kubenswrapper[4972]: I1121 11:13:34.588743 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848679d585-l8m9p" event={"ID":"3eeaefff-9150-4268-81c3-10ad05d6a600","Type":"ContainerStarted","Data":"e3a2046bb714d25969d8ab2d1be762835e255162ed981cff33375d95edbbc289"} Nov 21 11:13:34 crc kubenswrapper[4972]: I1121 11:13:34.849250 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-697b5fbdf-gf62f"] Nov 21 11:13:34 crc kubenswrapper[4972]: W1121 11:13:34.871961 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c8983ed_b01a_41ce_9f06_cf429e74f0c3.slice/crio-9e4ebb327686a73597b4575d1fa22561b6f676e6731033f320f314a80c03c4d1 WatchSource:0}: Error finding container 9e4ebb327686a73597b4575d1fa22561b6f676e6731033f320f314a80c03c4d1: Status 404 returned error can't find the container with id 9e4ebb327686a73597b4575d1fa22561b6f676e6731033f320f314a80c03c4d1 Nov 21 11:13:35 crc kubenswrapper[4972]: I1121 11:13:35.599130 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-697b5fbdf-gf62f" event={"ID":"1c8983ed-b01a-41ce-9f06-cf429e74f0c3","Type":"ContainerStarted","Data":"7e9e9498ac4ce39991cef88850bcda03a9270e7a625f697926929fa96058168a"} Nov 21 11:13:35 crc kubenswrapper[4972]: I1121 11:13:35.599639 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-697b5fbdf-gf62f" Nov 21 11:13:35 crc kubenswrapper[4972]: I1121 11:13:35.599653 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-697b5fbdf-gf62f" event={"ID":"1c8983ed-b01a-41ce-9f06-cf429e74f0c3","Type":"ContainerStarted","Data":"a875f6e876d962e9adf76dc761010ac0e77bc1d6a65ff0d263c55f5c2002950e"} Nov 21 11:13:35 crc kubenswrapper[4972]: I1121 11:13:35.599666 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-697b5fbdf-gf62f" event={"ID":"1c8983ed-b01a-41ce-9f06-cf429e74f0c3","Type":"ContainerStarted","Data":"9e4ebb327686a73597b4575d1fa22561b6f676e6731033f320f314a80c03c4d1"} Nov 21 11:13:35 crc kubenswrapper[4972]: I1121 11:13:35.601537 4972 generic.go:334] "Generic (PLEG): container finished" 
podID="3eeaefff-9150-4268-81c3-10ad05d6a600" containerID="f9cf2407cd33691be30fb4bd180103f0b3aa9092e6760e47e59d70659e33fb27" exitCode=0 Nov 21 11:13:35 crc kubenswrapper[4972]: I1121 11:13:35.601570 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848679d585-l8m9p" event={"ID":"3eeaefff-9150-4268-81c3-10ad05d6a600","Type":"ContainerDied","Data":"f9cf2407cd33691be30fb4bd180103f0b3aa9092e6760e47e59d70659e33fb27"} Nov 21 11:13:35 crc kubenswrapper[4972]: I1121 11:13:35.628767 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-697b5fbdf-gf62f" podStartSLOduration=2.628747847 podStartE2EDuration="2.628747847s" podCreationTimestamp="2025-11-21 11:13:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:13:35.615316871 +0000 UTC m=+5560.724459389" watchObservedRunningTime="2025-11-21 11:13:35.628747847 +0000 UTC m=+5560.737890365" Nov 21 11:13:36 crc kubenswrapper[4972]: I1121 11:13:36.613641 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848679d585-l8m9p" event={"ID":"3eeaefff-9150-4268-81c3-10ad05d6a600","Type":"ContainerStarted","Data":"85dc7234f489b4b336c664aa2ea5c958b8bbf886c5a6cb9b85a63988228c6fe6"} Nov 21 11:13:36 crc kubenswrapper[4972]: I1121 11:13:36.636306 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848679d585-l8m9p" podStartSLOduration=3.636288388 podStartE2EDuration="3.636288388s" podCreationTimestamp="2025-11-21 11:13:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:13:36.633182555 +0000 UTC m=+5561.742325103" watchObservedRunningTime="2025-11-21 11:13:36.636288388 +0000 UTC m=+5561.745430896" Nov 21 11:13:37 crc kubenswrapper[4972]: I1121 11:13:37.623587 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848679d585-l8m9p" Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.084092 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848679d585-l8m9p" Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.147552 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7fdfc899-ncvwx"] Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.147775 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" podUID="20682c16-9a92-4ca0-bb7e-ad0b023153f0" containerName="dnsmasq-dns" containerID="cri-o://5dadbaa247ba737ff19106193ec5e9757e987d045afe99b785cf3291f88390fd" gracePeriod=10 Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.618960 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.688936 4972 generic.go:334] "Generic (PLEG): container finished" podID="20682c16-9a92-4ca0-bb7e-ad0b023153f0" containerID="5dadbaa247ba737ff19106193ec5e9757e987d045afe99b785cf3291f88390fd" exitCode=0 Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.688985 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" event={"ID":"20682c16-9a92-4ca0-bb7e-ad0b023153f0","Type":"ContainerDied","Data":"5dadbaa247ba737ff19106193ec5e9757e987d045afe99b785cf3291f88390fd"} Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.689017 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" event={"ID":"20682c16-9a92-4ca0-bb7e-ad0b023153f0","Type":"ContainerDied","Data":"ee78ef9b77e34f687a02234323878d79dcb6574e72d9cab77efd6b46190602fd"} Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.689037 4972 scope.go:117] "RemoveContainer" containerID="5dadbaa247ba737ff19106193ec5e9757e987d045afe99b785cf3291f88390fd" Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.689214 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7fdfc899-ncvwx" Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.711504 4972 scope.go:117] "RemoveContainer" containerID="b843248d631ac71b9d45bde39fdaf70d1a266066e8ce427701beec09f6ff2e73" Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.716913 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-dns-svc\") pod \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\" (UID: \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\") " Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.716954 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbf97\" (UniqueName: \"kubernetes.io/projected/20682c16-9a92-4ca0-bb7e-ad0b023153f0-kube-api-access-jbf97\") pod \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\" (UID: \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\") " Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.717014 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-config\") pod \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\" (UID: \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\") " Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.717085 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-ovsdbserver-nb\") pod \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\" (UID: \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\") " Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.717131 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-ovsdbserver-sb\") pod \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\" (UID: \"20682c16-9a92-4ca0-bb7e-ad0b023153f0\") " Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.722197 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20682c16-9a92-4ca0-bb7e-ad0b023153f0-kube-api-access-jbf97" (OuterVolumeSpecName: "kube-api-access-jbf97") pod 
"20682c16-9a92-4ca0-bb7e-ad0b023153f0" (UID: "20682c16-9a92-4ca0-bb7e-ad0b023153f0"). InnerVolumeSpecName "kube-api-access-jbf97". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.730191 4972 scope.go:117] "RemoveContainer" containerID="5dadbaa247ba737ff19106193ec5e9757e987d045afe99b785cf3291f88390fd" Nov 21 11:13:44 crc kubenswrapper[4972]: E1121 11:13:44.730781 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dadbaa247ba737ff19106193ec5e9757e987d045afe99b785cf3291f88390fd\": container with ID starting with 5dadbaa247ba737ff19106193ec5e9757e987d045afe99b785cf3291f88390fd not found: ID does not exist" containerID="5dadbaa247ba737ff19106193ec5e9757e987d045afe99b785cf3291f88390fd" Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.731033 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dadbaa247ba737ff19106193ec5e9757e987d045afe99b785cf3291f88390fd"} err="failed to get container status \"5dadbaa247ba737ff19106193ec5e9757e987d045afe99b785cf3291f88390fd\": rpc error: code = NotFound desc = could not find container \"5dadbaa247ba737ff19106193ec5e9757e987d045afe99b785cf3291f88390fd\": container with ID starting with 5dadbaa247ba737ff19106193ec5e9757e987d045afe99b785cf3291f88390fd not found: ID does not exist" Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.731069 4972 scope.go:117] "RemoveContainer" containerID="b843248d631ac71b9d45bde39fdaf70d1a266066e8ce427701beec09f6ff2e73" Nov 21 11:13:44 crc kubenswrapper[4972]: E1121 11:13:44.731426 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b843248d631ac71b9d45bde39fdaf70d1a266066e8ce427701beec09f6ff2e73\": container with ID starting with b843248d631ac71b9d45bde39fdaf70d1a266066e8ce427701beec09f6ff2e73 not found: ID does not exist" containerID="b843248d631ac71b9d45bde39fdaf70d1a266066e8ce427701beec09f6ff2e73" Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.731506 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b843248d631ac71b9d45bde39fdaf70d1a266066e8ce427701beec09f6ff2e73"} err="failed to get container status \"b843248d631ac71b9d45bde39fdaf70d1a266066e8ce427701beec09f6ff2e73\": rpc error: code = NotFound desc = could not find container \"b843248d631ac71b9d45bde39fdaf70d1a266066e8ce427701beec09f6ff2e73\": container with ID starting with b843248d631ac71b9d45bde39fdaf70d1a266066e8ce427701beec09f6ff2e73 not found: ID does not exist" Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.762148 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "20682c16-9a92-4ca0-bb7e-ad0b023153f0" (UID: "20682c16-9a92-4ca0-bb7e-ad0b023153f0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.776522 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-config" (OuterVolumeSpecName: "config") pod "20682c16-9a92-4ca0-bb7e-ad0b023153f0" (UID: "20682c16-9a92-4ca0-bb7e-ad0b023153f0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.794658 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "20682c16-9a92-4ca0-bb7e-ad0b023153f0" (UID: "20682c16-9a92-4ca0-bb7e-ad0b023153f0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.799683 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "20682c16-9a92-4ca0-bb7e-ad0b023153f0" (UID: "20682c16-9a92-4ca0-bb7e-ad0b023153f0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.819207 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.819259 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbf97\" (UniqueName: \"kubernetes.io/projected/20682c16-9a92-4ca0-bb7e-ad0b023153f0-kube-api-access-jbf97\") on node \"crc\" DevicePath \"\"" Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.819280 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-config\") on node \"crc\" DevicePath \"\"" Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.819297 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 11:13:44 crc kubenswrapper[4972]: I1121 11:13:44.819313 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20682c16-9a92-4ca0-bb7e-ad0b023153f0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 11:13:45 crc kubenswrapper[4972]: I1121 11:13:45.029077 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7fdfc899-ncvwx"] Nov 21 11:13:45 crc kubenswrapper[4972]: I1121 11:13:45.037515 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b7fdfc899-ncvwx"] Nov 21 11:13:45 crc kubenswrapper[4972]: I1121 11:13:45.768162 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20682c16-9a92-4ca0-bb7e-ad0b023153f0" path="/var/lib/kubelet/pods/20682c16-9a92-4ca0-bb7e-ad0b023153f0/volumes" Nov 21 11:14:04 crc kubenswrapper[4972]: I1121 11:14:04.274427 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-697b5fbdf-gf62f" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.070921 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-m66t6"] Nov 21 11:14:12 crc kubenswrapper[4972]: E1121 11:14:12.072100 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20682c16-9a92-4ca0-bb7e-ad0b023153f0" containerName="dnsmasq-dns" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.072122 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="20682c16-9a92-4ca0-bb7e-ad0b023153f0" 
containerName="dnsmasq-dns" Nov 21 11:14:12 crc kubenswrapper[4972]: E1121 11:14:12.072152 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20682c16-9a92-4ca0-bb7e-ad0b023153f0" containerName="init" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.072169 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="20682c16-9a92-4ca0-bb7e-ad0b023153f0" containerName="init" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.072478 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="20682c16-9a92-4ca0-bb7e-ad0b023153f0" containerName="dnsmasq-dns" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.073640 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-m66t6" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.078106 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-m66t6"] Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.159403 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9b8c-account-create-ms545"] Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.160600 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9b8c-account-create-ms545" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.166627 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.179138 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9b8c-account-create-ms545"] Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.206555 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60e15a23-4cc6-4c73-a4a3-008c59898063-operator-scripts\") pod \"glance-db-create-m66t6\" (UID: \"60e15a23-4cc6-4c73-a4a3-008c59898063\") " pod="openstack/glance-db-create-m66t6" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.206912 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fjmc\" (UniqueName: \"kubernetes.io/projected/60e15a23-4cc6-4c73-a4a3-008c59898063-kube-api-access-4fjmc\") pod \"glance-db-create-m66t6\" (UID: \"60e15a23-4cc6-4c73-a4a3-008c59898063\") " pod="openstack/glance-db-create-m66t6" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.308807 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60e15a23-4cc6-4c73-a4a3-008c59898063-operator-scripts\") pod \"glance-db-create-m66t6\" (UID: \"60e15a23-4cc6-4c73-a4a3-008c59898063\") " pod="openstack/glance-db-create-m66t6" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.308938 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2bfaed9-6d3e-4c14-b729-c8543e84abdc-operator-scripts\") pod \"glance-9b8c-account-create-ms545\" (UID: \"e2bfaed9-6d3e-4c14-b729-c8543e84abdc\") " pod="openstack/glance-9b8c-account-create-ms545" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.308997 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fjmc\" (UniqueName: \"kubernetes.io/projected/60e15a23-4cc6-4c73-a4a3-008c59898063-kube-api-access-4fjmc\") pod \"glance-db-create-m66t6\" (UID: 
\"60e15a23-4cc6-4c73-a4a3-008c59898063\") " pod="openstack/glance-db-create-m66t6" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.309042 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbjb6\" (UniqueName: \"kubernetes.io/projected/e2bfaed9-6d3e-4c14-b729-c8543e84abdc-kube-api-access-fbjb6\") pod \"glance-9b8c-account-create-ms545\" (UID: \"e2bfaed9-6d3e-4c14-b729-c8543e84abdc\") " pod="openstack/glance-9b8c-account-create-ms545" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.310105 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60e15a23-4cc6-4c73-a4a3-008c59898063-operator-scripts\") pod \"glance-db-create-m66t6\" (UID: \"60e15a23-4cc6-4c73-a4a3-008c59898063\") " pod="openstack/glance-db-create-m66t6" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.343769 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fjmc\" (UniqueName: \"kubernetes.io/projected/60e15a23-4cc6-4c73-a4a3-008c59898063-kube-api-access-4fjmc\") pod \"glance-db-create-m66t6\" (UID: \"60e15a23-4cc6-4c73-a4a3-008c59898063\") " pod="openstack/glance-db-create-m66t6" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.391906 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-m66t6" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.411073 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2bfaed9-6d3e-4c14-b729-c8543e84abdc-operator-scripts\") pod \"glance-9b8c-account-create-ms545\" (UID: \"e2bfaed9-6d3e-4c14-b729-c8543e84abdc\") " pod="openstack/glance-9b8c-account-create-ms545" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.411189 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbjb6\" (UniqueName: \"kubernetes.io/projected/e2bfaed9-6d3e-4c14-b729-c8543e84abdc-kube-api-access-fbjb6\") pod \"glance-9b8c-account-create-ms545\" (UID: \"e2bfaed9-6d3e-4c14-b729-c8543e84abdc\") " pod="openstack/glance-9b8c-account-create-ms545" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.412143 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2bfaed9-6d3e-4c14-b729-c8543e84abdc-operator-scripts\") pod \"glance-9b8c-account-create-ms545\" (UID: \"e2bfaed9-6d3e-4c14-b729-c8543e84abdc\") " pod="openstack/glance-9b8c-account-create-ms545" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.461494 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbjb6\" (UniqueName: \"kubernetes.io/projected/e2bfaed9-6d3e-4c14-b729-c8543e84abdc-kube-api-access-fbjb6\") pod \"glance-9b8c-account-create-ms545\" (UID: \"e2bfaed9-6d3e-4c14-b729-c8543e84abdc\") " pod="openstack/glance-9b8c-account-create-ms545" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.477338 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9b8c-account-create-ms545" Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.933591 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-m66t6"] Nov 21 11:14:12 crc kubenswrapper[4972]: I1121 11:14:12.994985 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9b8c-account-create-ms545"] Nov 21 11:14:13 crc kubenswrapper[4972]: I1121 11:14:13.000375 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m66t6" event={"ID":"60e15a23-4cc6-4c73-a4a3-008c59898063","Type":"ContainerStarted","Data":"e8e44c3af3d94becd3cd3b2bea287afd1af9f3c8762e6816d482212110428671"} Nov 21 11:14:13 crc kubenswrapper[4972]: W1121 11:14:13.004199 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2bfaed9_6d3e_4c14_b729_c8543e84abdc.slice/crio-99ecebce5f70da3e7ed8dc17886c3432fee252af897129e8ba5f22911701eec7 WatchSource:0}: Error finding container 99ecebce5f70da3e7ed8dc17886c3432fee252af897129e8ba5f22911701eec7: Status 404 returned error can't find the container with id 99ecebce5f70da3e7ed8dc17886c3432fee252af897129e8ba5f22911701eec7 Nov 21 11:14:14 crc kubenswrapper[4972]: I1121 11:14:14.013626 4972 generic.go:334] "Generic (PLEG): container finished" podID="e2bfaed9-6d3e-4c14-b729-c8543e84abdc" containerID="564fdaef9d54b73719d7b5389493ae0e18511631a530b489a1ec15bae564f5ec" exitCode=0 Nov 21 11:14:14 crc kubenswrapper[4972]: I1121 11:14:14.013708 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9b8c-account-create-ms545" event={"ID":"e2bfaed9-6d3e-4c14-b729-c8543e84abdc","Type":"ContainerDied","Data":"564fdaef9d54b73719d7b5389493ae0e18511631a530b489a1ec15bae564f5ec"} Nov 21 11:14:14 crc kubenswrapper[4972]: I1121 11:14:14.013849 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9b8c-account-create-ms545" event={"ID":"e2bfaed9-6d3e-4c14-b729-c8543e84abdc","Type":"ContainerStarted","Data":"99ecebce5f70da3e7ed8dc17886c3432fee252af897129e8ba5f22911701eec7"} Nov 21 11:14:14 crc kubenswrapper[4972]: I1121 11:14:14.015679 4972 generic.go:334] "Generic (PLEG): container finished" podID="60e15a23-4cc6-4c73-a4a3-008c59898063" containerID="f5f6b3799a11c77973f7a3cc73d0ca7a15c8c615e9d9be51892258b6a3cd5697" exitCode=0 Nov 21 11:14:14 crc kubenswrapper[4972]: I1121 11:14:14.015732 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m66t6" event={"ID":"60e15a23-4cc6-4c73-a4a3-008c59898063","Type":"ContainerDied","Data":"f5f6b3799a11c77973f7a3cc73d0ca7a15c8c615e9d9be51892258b6a3cd5697"} Nov 21 11:14:15 crc kubenswrapper[4972]: I1121 11:14:15.453346 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-m66t6" Nov 21 11:14:15 crc kubenswrapper[4972]: I1121 11:14:15.460385 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9b8c-account-create-ms545" Nov 21 11:14:15 crc kubenswrapper[4972]: I1121 11:14:15.581905 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2bfaed9-6d3e-4c14-b729-c8543e84abdc-operator-scripts\") pod \"e2bfaed9-6d3e-4c14-b729-c8543e84abdc\" (UID: \"e2bfaed9-6d3e-4c14-b729-c8543e84abdc\") " Nov 21 11:14:15 crc kubenswrapper[4972]: I1121 11:14:15.582054 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60e15a23-4cc6-4c73-a4a3-008c59898063-operator-scripts\") pod \"60e15a23-4cc6-4c73-a4a3-008c59898063\" (UID: \"60e15a23-4cc6-4c73-a4a3-008c59898063\") " Nov 21 11:14:15 crc kubenswrapper[4972]: I1121 11:14:15.582297 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbjb6\" (UniqueName: \"kubernetes.io/projected/e2bfaed9-6d3e-4c14-b729-c8543e84abdc-kube-api-access-fbjb6\") pod \"e2bfaed9-6d3e-4c14-b729-c8543e84abdc\" (UID: \"e2bfaed9-6d3e-4c14-b729-c8543e84abdc\") " Nov 21 11:14:15 crc kubenswrapper[4972]: I1121 11:14:15.582478 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fjmc\" (UniqueName: \"kubernetes.io/projected/60e15a23-4cc6-4c73-a4a3-008c59898063-kube-api-access-4fjmc\") pod \"60e15a23-4cc6-4c73-a4a3-008c59898063\" (UID: \"60e15a23-4cc6-4c73-a4a3-008c59898063\") " Nov 21 11:14:15 crc kubenswrapper[4972]: I1121 11:14:15.582913 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60e15a23-4cc6-4c73-a4a3-008c59898063-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60e15a23-4cc6-4c73-a4a3-008c59898063" (UID: "60e15a23-4cc6-4c73-a4a3-008c59898063"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:14:15 crc kubenswrapper[4972]: I1121 11:14:15.583050 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60e15a23-4cc6-4c73-a4a3-008c59898063-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:15 crc kubenswrapper[4972]: I1121 11:14:15.583089 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2bfaed9-6d3e-4c14-b729-c8543e84abdc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2bfaed9-6d3e-4c14-b729-c8543e84abdc" (UID: "e2bfaed9-6d3e-4c14-b729-c8543e84abdc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:14:15 crc kubenswrapper[4972]: I1121 11:14:15.587804 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2bfaed9-6d3e-4c14-b729-c8543e84abdc-kube-api-access-fbjb6" (OuterVolumeSpecName: "kube-api-access-fbjb6") pod "e2bfaed9-6d3e-4c14-b729-c8543e84abdc" (UID: "e2bfaed9-6d3e-4c14-b729-c8543e84abdc"). InnerVolumeSpecName "kube-api-access-fbjb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:14:15 crc kubenswrapper[4972]: I1121 11:14:15.589129 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60e15a23-4cc6-4c73-a4a3-008c59898063-kube-api-access-4fjmc" (OuterVolumeSpecName: "kube-api-access-4fjmc") pod "60e15a23-4cc6-4c73-a4a3-008c59898063" (UID: "60e15a23-4cc6-4c73-a4a3-008c59898063"). 
InnerVolumeSpecName "kube-api-access-4fjmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:14:15 crc kubenswrapper[4972]: I1121 11:14:15.686085 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fjmc\" (UniqueName: \"kubernetes.io/projected/60e15a23-4cc6-4c73-a4a3-008c59898063-kube-api-access-4fjmc\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:15 crc kubenswrapper[4972]: I1121 11:14:15.686182 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2bfaed9-6d3e-4c14-b729-c8543e84abdc-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:15 crc kubenswrapper[4972]: I1121 11:14:15.686206 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbjb6\" (UniqueName: \"kubernetes.io/projected/e2bfaed9-6d3e-4c14-b729-c8543e84abdc-kube-api-access-fbjb6\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:16 crc kubenswrapper[4972]: I1121 11:14:16.042314 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9b8c-account-create-ms545" event={"ID":"e2bfaed9-6d3e-4c14-b729-c8543e84abdc","Type":"ContainerDied","Data":"99ecebce5f70da3e7ed8dc17886c3432fee252af897129e8ba5f22911701eec7"} Nov 21 11:14:16 crc kubenswrapper[4972]: I1121 11:14:16.043113 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99ecebce5f70da3e7ed8dc17886c3432fee252af897129e8ba5f22911701eec7" Nov 21 11:14:16 crc kubenswrapper[4972]: I1121 11:14:16.043290 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9b8c-account-create-ms545" Nov 21 11:14:16 crc kubenswrapper[4972]: I1121 11:14:16.045369 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m66t6" event={"ID":"60e15a23-4cc6-4c73-a4a3-008c59898063","Type":"ContainerDied","Data":"e8e44c3af3d94becd3cd3b2bea287afd1af9f3c8762e6816d482212110428671"} Nov 21 11:14:16 crc kubenswrapper[4972]: I1121 11:14:16.045426 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8e44c3af3d94becd3cd3b2bea287afd1af9f3c8762e6816d482212110428671" Nov 21 11:14:16 crc kubenswrapper[4972]: I1121 11:14:16.045472 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-m66t6" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.308538 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-2t6tp"] Nov 21 11:14:17 crc kubenswrapper[4972]: E1121 11:14:17.309198 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2bfaed9-6d3e-4c14-b729-c8543e84abdc" containerName="mariadb-account-create" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.309229 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2bfaed9-6d3e-4c14-b729-c8543e84abdc" containerName="mariadb-account-create" Nov 21 11:14:17 crc kubenswrapper[4972]: E1121 11:14:17.309251 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60e15a23-4cc6-4c73-a4a3-008c59898063" containerName="mariadb-database-create" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.309269 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="60e15a23-4cc6-4c73-a4a3-008c59898063" containerName="mariadb-database-create" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.309702 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2bfaed9-6d3e-4c14-b729-c8543e84abdc" containerName="mariadb-account-create" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.309728 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="60e15a23-4cc6-4c73-a4a3-008c59898063" containerName="mariadb-database-create" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.310930 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2t6tp" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.313441 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.315714 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mqqnk" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.319925 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-2t6tp"] Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.422395 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6c9g\" (UniqueName: \"kubernetes.io/projected/b97204f4-1052-458f-8b04-4802e5fa78ad-kube-api-access-b6c9g\") pod \"glance-db-sync-2t6tp\" (UID: \"b97204f4-1052-458f-8b04-4802e5fa78ad\") " pod="openstack/glance-db-sync-2t6tp" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.423002 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b97204f4-1052-458f-8b04-4802e5fa78ad-config-data\") pod \"glance-db-sync-2t6tp\" (UID: \"b97204f4-1052-458f-8b04-4802e5fa78ad\") " pod="openstack/glance-db-sync-2t6tp" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.423032 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b97204f4-1052-458f-8b04-4802e5fa78ad-db-sync-config-data\") pod \"glance-db-sync-2t6tp\" (UID: \"b97204f4-1052-458f-8b04-4802e5fa78ad\") " pod="openstack/glance-db-sync-2t6tp" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.423164 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b97204f4-1052-458f-8b04-4802e5fa78ad-combined-ca-bundle\") pod \"glance-db-sync-2t6tp\" (UID: \"b97204f4-1052-458f-8b04-4802e5fa78ad\") " pod="openstack/glance-db-sync-2t6tp" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.524466 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6c9g\" (UniqueName: \"kubernetes.io/projected/b97204f4-1052-458f-8b04-4802e5fa78ad-kube-api-access-b6c9g\") pod \"glance-db-sync-2t6tp\" (UID: \"b97204f4-1052-458f-8b04-4802e5fa78ad\") " pod="openstack/glance-db-sync-2t6tp" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.524544 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b97204f4-1052-458f-8b04-4802e5fa78ad-config-data\") pod \"glance-db-sync-2t6tp\" (UID: \"b97204f4-1052-458f-8b04-4802e5fa78ad\") " pod="openstack/glance-db-sync-2t6tp" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.524567 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b97204f4-1052-458f-8b04-4802e5fa78ad-db-sync-config-data\") pod \"glance-db-sync-2t6tp\" (UID: \"b97204f4-1052-458f-8b04-4802e5fa78ad\") " pod="openstack/glance-db-sync-2t6tp" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.524694 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b97204f4-1052-458f-8b04-4802e5fa78ad-combined-ca-bundle\") pod \"glance-db-sync-2t6tp\" (UID: \"b97204f4-1052-458f-8b04-4802e5fa78ad\") " pod="openstack/glance-db-sync-2t6tp" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.533366 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b97204f4-1052-458f-8b04-4802e5fa78ad-config-data\") pod \"glance-db-sync-2t6tp\" (UID: \"b97204f4-1052-458f-8b04-4802e5fa78ad\") " pod="openstack/glance-db-sync-2t6tp" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.534394 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b97204f4-1052-458f-8b04-4802e5fa78ad-db-sync-config-data\") pod \"glance-db-sync-2t6tp\" (UID: \"b97204f4-1052-458f-8b04-4802e5fa78ad\") " pod="openstack/glance-db-sync-2t6tp" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.535654 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b97204f4-1052-458f-8b04-4802e5fa78ad-combined-ca-bundle\") pod \"glance-db-sync-2t6tp\" (UID: \"b97204f4-1052-458f-8b04-4802e5fa78ad\") " pod="openstack/glance-db-sync-2t6tp" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.557516 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6c9g\" (UniqueName: \"kubernetes.io/projected/b97204f4-1052-458f-8b04-4802e5fa78ad-kube-api-access-b6c9g\") pod \"glance-db-sync-2t6tp\" (UID: \"b97204f4-1052-458f-8b04-4802e5fa78ad\") " pod="openstack/glance-db-sync-2t6tp" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.639277 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-2t6tp" Nov 21 11:14:17 crc kubenswrapper[4972]: I1121 11:14:17.975531 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-2t6tp"] Nov 21 11:14:17 crc kubenswrapper[4972]: W1121 11:14:17.982597 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb97204f4_1052_458f_8b04_4802e5fa78ad.slice/crio-f13177e66fe07f42afbcf139a80346a377546f01fd3fa65fb5196c7a994d6ca9 WatchSource:0}: Error finding container f13177e66fe07f42afbcf139a80346a377546f01fd3fa65fb5196c7a994d6ca9: Status 404 returned error can't find the container with id f13177e66fe07f42afbcf139a80346a377546f01fd3fa65fb5196c7a994d6ca9 Nov 21 11:14:18 crc kubenswrapper[4972]: I1121 11:14:18.071006 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2t6tp" event={"ID":"b97204f4-1052-458f-8b04-4802e5fa78ad","Type":"ContainerStarted","Data":"f13177e66fe07f42afbcf139a80346a377546f01fd3fa65fb5196c7a994d6ca9"} Nov 21 11:14:19 crc kubenswrapper[4972]: I1121 11:14:19.082602 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2t6tp" event={"ID":"b97204f4-1052-458f-8b04-4802e5fa78ad","Type":"ContainerStarted","Data":"616198d0d0d7868fb49309684d39ff18e2be04f210e6496b20fe5e4e353799c9"} Nov 21 11:14:19 crc kubenswrapper[4972]: I1121 11:14:19.108394 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-2t6tp" podStartSLOduration=2.108291365 podStartE2EDuration="2.108291365s" podCreationTimestamp="2025-11-21 11:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:14:19.106188809 +0000 UTC m=+5604.215331327" watchObservedRunningTime="2025-11-21 11:14:19.108291365 +0000 UTC m=+5604.217433903" Nov 21 11:14:22 crc kubenswrapper[4972]: I1121 11:14:22.120243 4972 generic.go:334] "Generic (PLEG): container finished" podID="b97204f4-1052-458f-8b04-4802e5fa78ad" containerID="616198d0d0d7868fb49309684d39ff18e2be04f210e6496b20fe5e4e353799c9" exitCode=0 Nov 21 11:14:22 crc kubenswrapper[4972]: I1121 11:14:22.120363 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2t6tp" event={"ID":"b97204f4-1052-458f-8b04-4802e5fa78ad","Type":"ContainerDied","Data":"616198d0d0d7868fb49309684d39ff18e2be04f210e6496b20fe5e4e353799c9"} Nov 21 11:14:23 crc kubenswrapper[4972]: I1121 11:14:23.660467 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-2t6tp" Nov 21 11:14:23 crc kubenswrapper[4972]: I1121 11:14:23.684290 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6c9g\" (UniqueName: \"kubernetes.io/projected/b97204f4-1052-458f-8b04-4802e5fa78ad-kube-api-access-b6c9g\") pod \"b97204f4-1052-458f-8b04-4802e5fa78ad\" (UID: \"b97204f4-1052-458f-8b04-4802e5fa78ad\") " Nov 21 11:14:23 crc kubenswrapper[4972]: I1121 11:14:23.684436 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b97204f4-1052-458f-8b04-4802e5fa78ad-config-data\") pod \"b97204f4-1052-458f-8b04-4802e5fa78ad\" (UID: \"b97204f4-1052-458f-8b04-4802e5fa78ad\") " Nov 21 11:14:23 crc kubenswrapper[4972]: I1121 11:14:23.684479 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b97204f4-1052-458f-8b04-4802e5fa78ad-db-sync-config-data\") pod \"b97204f4-1052-458f-8b04-4802e5fa78ad\" (UID: \"b97204f4-1052-458f-8b04-4802e5fa78ad\") " Nov 21 11:14:23 crc kubenswrapper[4972]: I1121 11:14:23.684556 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b97204f4-1052-458f-8b04-4802e5fa78ad-combined-ca-bundle\") pod \"b97204f4-1052-458f-8b04-4802e5fa78ad\" (UID: \"b97204f4-1052-458f-8b04-4802e5fa78ad\") " Nov 21 11:14:23 crc kubenswrapper[4972]: I1121 11:14:23.690330 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b97204f4-1052-458f-8b04-4802e5fa78ad-kube-api-access-b6c9g" (OuterVolumeSpecName: "kube-api-access-b6c9g") pod "b97204f4-1052-458f-8b04-4802e5fa78ad" (UID: "b97204f4-1052-458f-8b04-4802e5fa78ad"). InnerVolumeSpecName "kube-api-access-b6c9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:14:23 crc kubenswrapper[4972]: I1121 11:14:23.692487 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b97204f4-1052-458f-8b04-4802e5fa78ad-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b97204f4-1052-458f-8b04-4802e5fa78ad" (UID: "b97204f4-1052-458f-8b04-4802e5fa78ad"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:14:23 crc kubenswrapper[4972]: I1121 11:14:23.738758 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b97204f4-1052-458f-8b04-4802e5fa78ad-config-data" (OuterVolumeSpecName: "config-data") pod "b97204f4-1052-458f-8b04-4802e5fa78ad" (UID: "b97204f4-1052-458f-8b04-4802e5fa78ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:14:23 crc kubenswrapper[4972]: I1121 11:14:23.749944 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b97204f4-1052-458f-8b04-4802e5fa78ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b97204f4-1052-458f-8b04-4802e5fa78ad" (UID: "b97204f4-1052-458f-8b04-4802e5fa78ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:14:23 crc kubenswrapper[4972]: I1121 11:14:23.786724 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6c9g\" (UniqueName: \"kubernetes.io/projected/b97204f4-1052-458f-8b04-4802e5fa78ad-kube-api-access-b6c9g\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:23 crc kubenswrapper[4972]: I1121 11:14:23.786773 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b97204f4-1052-458f-8b04-4802e5fa78ad-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:23 crc kubenswrapper[4972]: I1121 11:14:23.786795 4972 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b97204f4-1052-458f-8b04-4802e5fa78ad-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:23 crc kubenswrapper[4972]: I1121 11:14:23.786818 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b97204f4-1052-458f-8b04-4802e5fa78ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.187108 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2t6tp" event={"ID":"b97204f4-1052-458f-8b04-4802e5fa78ad","Type":"ContainerDied","Data":"f13177e66fe07f42afbcf139a80346a377546f01fd3fa65fb5196c7a994d6ca9"} Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.187175 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f13177e66fe07f42afbcf139a80346a377546f01fd3fa65fb5196c7a994d6ca9" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.187302 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2t6tp" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.560021 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-746ff6c485-ccrh2"] Nov 21 11:14:24 crc kubenswrapper[4972]: E1121 11:14:24.560592 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97204f4-1052-458f-8b04-4802e5fa78ad" containerName="glance-db-sync" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.560609 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97204f4-1052-458f-8b04-4802e5fa78ad" containerName="glance-db-sync" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.560794 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="b97204f4-1052-458f-8b04-4802e5fa78ad" containerName="glance-db-sync" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.561687 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.581713 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-746ff6c485-ccrh2"] Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.615686 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.621505 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.624356 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.624575 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.624702 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.635506 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.669283 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mqqnk" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.702015 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-config\") pod \"dnsmasq-dns-746ff6c485-ccrh2\" (UID: \"445cf995-4393-496e-963e-42f1745d0610\") " pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.702059 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-ovsdbserver-nb\") pod \"dnsmasq-dns-746ff6c485-ccrh2\" (UID: \"445cf995-4393-496e-963e-42f1745d0610\") " pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.702140 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-ovsdbserver-sb\") pod \"dnsmasq-dns-746ff6c485-ccrh2\" (UID: \"445cf995-4393-496e-963e-42f1745d0610\") " pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.702177 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghr77\" (UniqueName: \"kubernetes.io/projected/445cf995-4393-496e-963e-42f1745d0610-kube-api-access-ghr77\") pod \"dnsmasq-dns-746ff6c485-ccrh2\" (UID: \"445cf995-4393-496e-963e-42f1745d0610\") " pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.702295 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-dns-svc\") pod \"dnsmasq-dns-746ff6c485-ccrh2\" (UID: \"445cf995-4393-496e-963e-42f1745d0610\") " pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.720813 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.722164 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: W1121 11:14:24.723788 4972 reflector.go:561] object-"openstack"/"glance-default-internal-config-data": failed to list *v1.Secret: secrets "glance-default-internal-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Nov 21 11:14:24 crc kubenswrapper[4972]: E1121 11:14:24.723856 4972 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-default-internal-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"glance-default-internal-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.734496 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.803969 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-ovsdbserver-nb\") pod \"dnsmasq-dns-746ff6c485-ccrh2\" (UID: \"445cf995-4393-496e-963e-42f1745d0610\") " pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.804025 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-scripts\") pod \"glance-default-external-api-0\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.804061 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.804093 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-ovsdbserver-sb\") pod \"dnsmasq-dns-746ff6c485-ccrh2\" (UID: \"445cf995-4393-496e-963e-42f1745d0610\") " pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.804121 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.804146 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghr77\" (UniqueName: \"kubernetes.io/projected/445cf995-4393-496e-963e-42f1745d0610-kube-api-access-ghr77\") pod \"dnsmasq-dns-746ff6c485-ccrh2\" (UID: \"445cf995-4393-496e-963e-42f1745d0610\") " pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.804322 
4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-dns-svc\") pod \"dnsmasq-dns-746ff6c485-ccrh2\" (UID: \"445cf995-4393-496e-963e-42f1745d0610\") " pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.804382 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl46w\" (UniqueName: \"kubernetes.io/projected/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-kube-api-access-nl46w\") pod \"glance-default-external-api-0\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.804423 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-ceph\") pod \"glance-default-external-api-0\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.804478 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-logs\") pod \"glance-default-external-api-0\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.804539 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-config\") pod \"dnsmasq-dns-746ff6c485-ccrh2\" (UID: \"445cf995-4393-496e-963e-42f1745d0610\") " pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.804575 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-config-data\") pod \"glance-default-external-api-0\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.805045 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-ovsdbserver-sb\") pod \"dnsmasq-dns-746ff6c485-ccrh2\" (UID: \"445cf995-4393-496e-963e-42f1745d0610\") " pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.805058 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-ovsdbserver-nb\") pod \"dnsmasq-dns-746ff6c485-ccrh2\" (UID: \"445cf995-4393-496e-963e-42f1745d0610\") " pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.805678 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-config\") pod \"dnsmasq-dns-746ff6c485-ccrh2\" (UID: \"445cf995-4393-496e-963e-42f1745d0610\") " pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.805808 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-dns-svc\") pod \"dnsmasq-dns-746ff6c485-ccrh2\" (UID: \"445cf995-4393-496e-963e-42f1745d0610\") " pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.818072 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghr77\" (UniqueName: \"kubernetes.io/projected/445cf995-4393-496e-963e-42f1745d0610-kube-api-access-ghr77\") pod \"dnsmasq-dns-746ff6c485-ccrh2\" (UID: \"445cf995-4393-496e-963e-42f1745d0610\") " pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.890399 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.905718 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0cfa39d8-de21-4a33-82f7-7b3d4085383b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.905777 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-scripts\") pod \"glance-default-external-api-0\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.905802 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cfa39d8-de21-4a33-82f7-7b3d4085383b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.905822 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cfa39d8-de21-4a33-82f7-7b3d4085383b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.905867 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0cfa39d8-de21-4a33-82f7-7b3d4085383b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.905894 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.905935 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cfa39d8-de21-4a33-82f7-7b3d4085383b-logs\") pod \"glance-default-internal-api-0\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " 
pod="openstack/glance-default-internal-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.905954 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.905983 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxg9b\" (UniqueName: \"kubernetes.io/projected/0cfa39d8-de21-4a33-82f7-7b3d4085383b-kube-api-access-cxg9b\") pod \"glance-default-internal-api-0\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.906037 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl46w\" (UniqueName: \"kubernetes.io/projected/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-kube-api-access-nl46w\") pod \"glance-default-external-api-0\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.906065 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-ceph\") pod \"glance-default-external-api-0\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.906095 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cfa39d8-de21-4a33-82f7-7b3d4085383b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.906117 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-logs\") pod \"glance-default-external-api-0\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.906150 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-config-data\") pod \"glance-default-external-api-0\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.906809 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.907156 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-logs\") pod \"glance-default-external-api-0\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: 
I1121 11:14:24.909962 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-scripts\") pod \"glance-default-external-api-0\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.910939 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-ceph\") pod \"glance-default-external-api-0\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.911322 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-config-data\") pod \"glance-default-external-api-0\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.912211 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.921885 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl46w\" (UniqueName: \"kubernetes.io/projected/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-kube-api-access-nl46w\") pod \"glance-default-external-api-0\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:24 crc kubenswrapper[4972]: I1121 11:14:24.939673 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 11:14:25 crc kubenswrapper[4972]: I1121 11:14:25.007687 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxg9b\" (UniqueName: \"kubernetes.io/projected/0cfa39d8-de21-4a33-82f7-7b3d4085383b-kube-api-access-cxg9b\") pod \"glance-default-internal-api-0\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:25 crc kubenswrapper[4972]: I1121 11:14:25.011506 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cfa39d8-de21-4a33-82f7-7b3d4085383b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:25 crc kubenswrapper[4972]: I1121 11:14:25.011596 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0cfa39d8-de21-4a33-82f7-7b3d4085383b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:25 crc kubenswrapper[4972]: I1121 11:14:25.011647 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cfa39d8-de21-4a33-82f7-7b3d4085383b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:25 crc kubenswrapper[4972]: I1121 11:14:25.011669 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cfa39d8-de21-4a33-82f7-7b3d4085383b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:25 crc kubenswrapper[4972]: I1121 11:14:25.011699 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0cfa39d8-de21-4a33-82f7-7b3d4085383b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:25 crc kubenswrapper[4972]: I1121 11:14:25.011761 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cfa39d8-de21-4a33-82f7-7b3d4085383b-logs\") pod \"glance-default-internal-api-0\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:25 crc kubenswrapper[4972]: I1121 11:14:25.012626 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cfa39d8-de21-4a33-82f7-7b3d4085383b-logs\") pod \"glance-default-internal-api-0\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:25 crc kubenswrapper[4972]: I1121 11:14:25.012849 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0cfa39d8-de21-4a33-82f7-7b3d4085383b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:25 crc kubenswrapper[4972]: I1121 11:14:25.025355 4972 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0cfa39d8-de21-4a33-82f7-7b3d4085383b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:25 crc kubenswrapper[4972]: I1121 11:14:25.025360 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cfa39d8-de21-4a33-82f7-7b3d4085383b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:25 crc kubenswrapper[4972]: I1121 11:14:25.025592 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cfa39d8-de21-4a33-82f7-7b3d4085383b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:25 crc kubenswrapper[4972]: I1121 11:14:25.028932 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxg9b\" (UniqueName: \"kubernetes.io/projected/0cfa39d8-de21-4a33-82f7-7b3d4085383b-kube-api-access-cxg9b\") pod \"glance-default-internal-api-0\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:25 crc kubenswrapper[4972]: I1121 11:14:25.361204 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-746ff6c485-ccrh2"] Nov 21 11:14:25 crc kubenswrapper[4972]: I1121 11:14:25.437148 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 11:14:25 crc kubenswrapper[4972]: I1121 11:14:25.525991 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 11:14:25 crc kubenswrapper[4972]: W1121 11:14:25.545445 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4c808e6_0794_4a1b_8291_b6193f3f8dc3.slice/crio-4292e0738ffe42877ba6fde3d315172d2d397f5b5359e0e863ecc0d23422b096 WatchSource:0}: Error finding container 4292e0738ffe42877ba6fde3d315172d2d397f5b5359e0e863ecc0d23422b096: Status 404 returned error can't find the container with id 4292e0738ffe42877ba6fde3d315172d2d397f5b5359e0e863ecc0d23422b096 Nov 21 11:14:25 crc kubenswrapper[4972]: I1121 11:14:25.591223 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 21 11:14:25 crc kubenswrapper[4972]: I1121 11:14:25.597430 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cfa39d8-de21-4a33-82f7-7b3d4085383b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:25 crc kubenswrapper[4972]: I1121 11:14:25.637023 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 11:14:26 crc kubenswrapper[4972]: I1121 11:14:26.194435 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 11:14:26 crc kubenswrapper[4972]: W1121 11:14:26.201718 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0cfa39d8_de21_4a33_82f7_7b3d4085383b.slice/crio-6259084478a0660abcb1f94cfb42387f653de1d2e4ecc6a8a54b77192c2c2739 WatchSource:0}: Error finding container 6259084478a0660abcb1f94cfb42387f653de1d2e4ecc6a8a54b77192c2c2739: Status 404 returned error can't find the container with id 6259084478a0660abcb1f94cfb42387f653de1d2e4ecc6a8a54b77192c2c2739 Nov 21 11:14:26 crc kubenswrapper[4972]: I1121 11:14:26.217266 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b4c808e6-0794-4a1b-8291-b6193f3f8dc3","Type":"ContainerStarted","Data":"9decd6ff61b4ba486620cdba8d8b5fd921cf6622748c766ad00d03da77ed2310"} Nov 21 11:14:26 crc kubenswrapper[4972]: I1121 11:14:26.217305 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b4c808e6-0794-4a1b-8291-b6193f3f8dc3","Type":"ContainerStarted","Data":"4292e0738ffe42877ba6fde3d315172d2d397f5b5359e0e863ecc0d23422b096"} Nov 21 11:14:26 crc kubenswrapper[4972]: I1121 11:14:26.218855 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0cfa39d8-de21-4a33-82f7-7b3d4085383b","Type":"ContainerStarted","Data":"6259084478a0660abcb1f94cfb42387f653de1d2e4ecc6a8a54b77192c2c2739"} Nov 21 11:14:26 crc kubenswrapper[4972]: I1121 11:14:26.223753 4972 generic.go:334] "Generic (PLEG): container finished" podID="445cf995-4393-496e-963e-42f1745d0610" containerID="b7dd9381674bd3bb344827ea68c901f52e1f53c09ebb0f24fa62e9c9cb25c151" exitCode=0 Nov 21 11:14:26 crc kubenswrapper[4972]: I1121 11:14:26.223798 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" event={"ID":"445cf995-4393-496e-963e-42f1745d0610","Type":"ContainerDied","Data":"b7dd9381674bd3bb344827ea68c901f52e1f53c09ebb0f24fa62e9c9cb25c151"} Nov 21 11:14:26 crc kubenswrapper[4972]: I1121 11:14:26.223890 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" event={"ID":"445cf995-4393-496e-963e-42f1745d0610","Type":"ContainerStarted","Data":"524f22d19b208aff9ede4750280e621e84576bb4069cd5663b56c9289c9dd4f7"} Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.212645 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.235057 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0cfa39d8-de21-4a33-82f7-7b3d4085383b","Type":"ContainerStarted","Data":"1ac867751177c0f4327c0488661b8641b2d6653afc08753fdd0b10ee6f5dca29"} Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.236988 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" event={"ID":"445cf995-4393-496e-963e-42f1745d0610","Type":"ContainerStarted","Data":"93b3705d8065315e1349d66d89558cf6dc1cddd4b26a7876f76105fabff2f8c9"} Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.238002 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.239341 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b4c808e6-0794-4a1b-8291-b6193f3f8dc3","Type":"ContainerStarted","Data":"59ece13249f4b9ecbadc35df57c255fe753df4faf432851d806628f2da9090b0"} Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.239440 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b4c808e6-0794-4a1b-8291-b6193f3f8dc3" containerName="glance-log" containerID="cri-o://9decd6ff61b4ba486620cdba8d8b5fd921cf6622748c766ad00d03da77ed2310" gracePeriod=30 Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.239528 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b4c808e6-0794-4a1b-8291-b6193f3f8dc3" containerName="glance-httpd" containerID="cri-o://59ece13249f4b9ecbadc35df57c255fe753df4faf432851d806628f2da9090b0" gracePeriod=30 Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.258468 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" podStartSLOduration=3.258453996 podStartE2EDuration="3.258453996s" podCreationTimestamp="2025-11-21 11:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:14:27.256575577 +0000 UTC m=+5612.365718075" watchObservedRunningTime="2025-11-21 11:14:27.258453996 +0000 UTC m=+5612.367596494" Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.279928 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.279907134 podStartE2EDuration="3.279907134s" podCreationTimestamp="2025-11-21 11:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:14:27.273398572 +0000 UTC m=+5612.382541100" watchObservedRunningTime="2025-11-21 11:14:27.279907134 +0000 UTC m=+5612.389049632" Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.840295 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.964008 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-logs\") pod \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.964065 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-httpd-run\") pod \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.964098 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-config-data\") pod \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.964137 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl46w\" (UniqueName: \"kubernetes.io/projected/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-kube-api-access-nl46w\") pod \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.964267 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-scripts\") pod \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.964325 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-combined-ca-bundle\") pod \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.964370 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-ceph\") pod \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\" (UID: \"b4c808e6-0794-4a1b-8291-b6193f3f8dc3\") " Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.964481 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b4c808e6-0794-4a1b-8291-b6193f3f8dc3" (UID: "b4c808e6-0794-4a1b-8291-b6193f3f8dc3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.964510 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-logs" (OuterVolumeSpecName: "logs") pod "b4c808e6-0794-4a1b-8291-b6193f3f8dc3" (UID: "b4c808e6-0794-4a1b-8291-b6193f3f8dc3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.964766 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-logs\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.964784 4972 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.970301 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-scripts" (OuterVolumeSpecName: "scripts") pod "b4c808e6-0794-4a1b-8291-b6193f3f8dc3" (UID: "b4c808e6-0794-4a1b-8291-b6193f3f8dc3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.972747 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-ceph" (OuterVolumeSpecName: "ceph") pod "b4c808e6-0794-4a1b-8291-b6193f3f8dc3" (UID: "b4c808e6-0794-4a1b-8291-b6193f3f8dc3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.985968 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-kube-api-access-nl46w" (OuterVolumeSpecName: "kube-api-access-nl46w") pod "b4c808e6-0794-4a1b-8291-b6193f3f8dc3" (UID: "b4c808e6-0794-4a1b-8291-b6193f3f8dc3"). InnerVolumeSpecName "kube-api-access-nl46w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:14:27 crc kubenswrapper[4972]: I1121 11:14:27.996303 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4c808e6-0794-4a1b-8291-b6193f3f8dc3" (UID: "b4c808e6-0794-4a1b-8291-b6193f3f8dc3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.035400 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-config-data" (OuterVolumeSpecName: "config-data") pod "b4c808e6-0794-4a1b-8291-b6193f3f8dc3" (UID: "b4c808e6-0794-4a1b-8291-b6193f3f8dc3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.066762 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.066898 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.066960 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.067012 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.067075 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl46w\" (UniqueName: \"kubernetes.io/projected/b4c808e6-0794-4a1b-8291-b6193f3f8dc3-kube-api-access-nl46w\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.249007 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0cfa39d8-de21-4a33-82f7-7b3d4085383b","Type":"ContainerStarted","Data":"abfc94f0948320bea1d0a7c79deddf90e4342d6dce22b76e70a695032af7ba97"} Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.249151 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0cfa39d8-de21-4a33-82f7-7b3d4085383b" containerName="glance-log" containerID="cri-o://1ac867751177c0f4327c0488661b8641b2d6653afc08753fdd0b10ee6f5dca29" gracePeriod=30 Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.249227 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0cfa39d8-de21-4a33-82f7-7b3d4085383b" containerName="glance-httpd" containerID="cri-o://abfc94f0948320bea1d0a7c79deddf90e4342d6dce22b76e70a695032af7ba97" gracePeriod=30 Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.261940 4972 generic.go:334] "Generic (PLEG): container finished" podID="b4c808e6-0794-4a1b-8291-b6193f3f8dc3" containerID="59ece13249f4b9ecbadc35df57c255fe753df4faf432851d806628f2da9090b0" exitCode=0 Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.261982 4972 generic.go:334] "Generic (PLEG): container finished" podID="b4c808e6-0794-4a1b-8291-b6193f3f8dc3" containerID="9decd6ff61b4ba486620cdba8d8b5fd921cf6622748c766ad00d03da77ed2310" exitCode=143 Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.262038 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.262065 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b4c808e6-0794-4a1b-8291-b6193f3f8dc3","Type":"ContainerDied","Data":"59ece13249f4b9ecbadc35df57c255fe753df4faf432851d806628f2da9090b0"} Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.262137 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b4c808e6-0794-4a1b-8291-b6193f3f8dc3","Type":"ContainerDied","Data":"9decd6ff61b4ba486620cdba8d8b5fd921cf6622748c766ad00d03da77ed2310"} Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.262164 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b4c808e6-0794-4a1b-8291-b6193f3f8dc3","Type":"ContainerDied","Data":"4292e0738ffe42877ba6fde3d315172d2d397f5b5359e0e863ecc0d23422b096"} Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.262195 4972 scope.go:117] "RemoveContainer" containerID="59ece13249f4b9ecbadc35df57c255fe753df4faf432851d806628f2da9090b0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.290559 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.290528216 podStartE2EDuration="4.290528216s" podCreationTimestamp="2025-11-21 11:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:14:28.280234863 +0000 UTC m=+5613.389377401" watchObservedRunningTime="2025-11-21 11:14:28.290528216 +0000 UTC m=+5613.399670754" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.311883 4972 scope.go:117] "RemoveContainer" containerID="9decd6ff61b4ba486620cdba8d8b5fd921cf6622748c766ad00d03da77ed2310" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.331188 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.338932 4972 scope.go:117] "RemoveContainer" containerID="59ece13249f4b9ecbadc35df57c255fe753df4faf432851d806628f2da9090b0" Nov 21 11:14:28 crc kubenswrapper[4972]: E1121 11:14:28.339523 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59ece13249f4b9ecbadc35df57c255fe753df4faf432851d806628f2da9090b0\": container with ID starting with 59ece13249f4b9ecbadc35df57c255fe753df4faf432851d806628f2da9090b0 not found: ID does not exist" containerID="59ece13249f4b9ecbadc35df57c255fe753df4faf432851d806628f2da9090b0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.339554 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59ece13249f4b9ecbadc35df57c255fe753df4faf432851d806628f2da9090b0"} err="failed to get container status \"59ece13249f4b9ecbadc35df57c255fe753df4faf432851d806628f2da9090b0\": rpc error: code = NotFound desc = could not find container \"59ece13249f4b9ecbadc35df57c255fe753df4faf432851d806628f2da9090b0\": container with ID starting with 59ece13249f4b9ecbadc35df57c255fe753df4faf432851d806628f2da9090b0 not found: ID does not exist" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.339585 4972 scope.go:117] "RemoveContainer" containerID="9decd6ff61b4ba486620cdba8d8b5fd921cf6622748c766ad00d03da77ed2310" Nov 21 11:14:28 crc 
kubenswrapper[4972]: E1121 11:14:28.339893 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9decd6ff61b4ba486620cdba8d8b5fd921cf6622748c766ad00d03da77ed2310\": container with ID starting with 9decd6ff61b4ba486620cdba8d8b5fd921cf6622748c766ad00d03da77ed2310 not found: ID does not exist" containerID="9decd6ff61b4ba486620cdba8d8b5fd921cf6622748c766ad00d03da77ed2310" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.339918 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9decd6ff61b4ba486620cdba8d8b5fd921cf6622748c766ad00d03da77ed2310"} err="failed to get container status \"9decd6ff61b4ba486620cdba8d8b5fd921cf6622748c766ad00d03da77ed2310\": rpc error: code = NotFound desc = could not find container \"9decd6ff61b4ba486620cdba8d8b5fd921cf6622748c766ad00d03da77ed2310\": container with ID starting with 9decd6ff61b4ba486620cdba8d8b5fd921cf6622748c766ad00d03da77ed2310 not found: ID does not exist" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.339935 4972 scope.go:117] "RemoveContainer" containerID="59ece13249f4b9ecbadc35df57c255fe753df4faf432851d806628f2da9090b0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.340400 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59ece13249f4b9ecbadc35df57c255fe753df4faf432851d806628f2da9090b0"} err="failed to get container status \"59ece13249f4b9ecbadc35df57c255fe753df4faf432851d806628f2da9090b0\": rpc error: code = NotFound desc = could not find container \"59ece13249f4b9ecbadc35df57c255fe753df4faf432851d806628f2da9090b0\": container with ID starting with 59ece13249f4b9ecbadc35df57c255fe753df4faf432851d806628f2da9090b0 not found: ID does not exist" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.340422 4972 scope.go:117] "RemoveContainer" containerID="9decd6ff61b4ba486620cdba8d8b5fd921cf6622748c766ad00d03da77ed2310" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.340638 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9decd6ff61b4ba486620cdba8d8b5fd921cf6622748c766ad00d03da77ed2310"} err="failed to get container status \"9decd6ff61b4ba486620cdba8d8b5fd921cf6622748c766ad00d03da77ed2310\": rpc error: code = NotFound desc = could not find container \"9decd6ff61b4ba486620cdba8d8b5fd921cf6622748c766ad00d03da77ed2310\": container with ID starting with 9decd6ff61b4ba486620cdba8d8b5fd921cf6622748c766ad00d03da77ed2310 not found: ID does not exist" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.345942 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.358454 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 11:14:28 crc kubenswrapper[4972]: E1121 11:14:28.359329 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c808e6-0794-4a1b-8291-b6193f3f8dc3" containerName="glance-httpd" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.359434 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c808e6-0794-4a1b-8291-b6193f3f8dc3" containerName="glance-httpd" Nov 21 11:14:28 crc kubenswrapper[4972]: E1121 11:14:28.359518 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c808e6-0794-4a1b-8291-b6193f3f8dc3" containerName="glance-log" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 
11:14:28.359573 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c808e6-0794-4a1b-8291-b6193f3f8dc3" containerName="glance-log" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.359812 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4c808e6-0794-4a1b-8291-b6193f3f8dc3" containerName="glance-httpd" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.359927 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4c808e6-0794-4a1b-8291-b6193f3f8dc3" containerName="glance-log" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.361168 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.363454 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.373924 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.492715 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d92abb74-0a12-4643-ab97-5239d575301f-ceph\") pod \"glance-default-external-api-0\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.492966 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92abb74-0a12-4643-ab97-5239d575301f-config-data\") pod \"glance-default-external-api-0\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.493138 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d92abb74-0a12-4643-ab97-5239d575301f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.493289 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d92abb74-0a12-4643-ab97-5239d575301f-logs\") pod \"glance-default-external-api-0\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.493477 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9ng6\" (UniqueName: \"kubernetes.io/projected/d92abb74-0a12-4643-ab97-5239d575301f-kube-api-access-z9ng6\") pod \"glance-default-external-api-0\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.493702 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92abb74-0a12-4643-ab97-5239d575301f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.493745 
4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d92abb74-0a12-4643-ab97-5239d575301f-scripts\") pod \"glance-default-external-api-0\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.595699 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d92abb74-0a12-4643-ab97-5239d575301f-logs\") pod \"glance-default-external-api-0\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.595785 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9ng6\" (UniqueName: \"kubernetes.io/projected/d92abb74-0a12-4643-ab97-5239d575301f-kube-api-access-z9ng6\") pod \"glance-default-external-api-0\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.595861 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92abb74-0a12-4643-ab97-5239d575301f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.595883 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d92abb74-0a12-4643-ab97-5239d575301f-scripts\") pod \"glance-default-external-api-0\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.595933 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d92abb74-0a12-4643-ab97-5239d575301f-ceph\") pod \"glance-default-external-api-0\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.595950 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92abb74-0a12-4643-ab97-5239d575301f-config-data\") pod \"glance-default-external-api-0\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.595981 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d92abb74-0a12-4643-ab97-5239d575301f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.596397 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d92abb74-0a12-4643-ab97-5239d575301f-logs\") pod \"glance-default-external-api-0\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.596417 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/d92abb74-0a12-4643-ab97-5239d575301f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.601153 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92abb74-0a12-4643-ab97-5239d575301f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.601698 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d92abb74-0a12-4643-ab97-5239d575301f-scripts\") pod \"glance-default-external-api-0\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.603980 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d92abb74-0a12-4643-ab97-5239d575301f-ceph\") pod \"glance-default-external-api-0\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.607675 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92abb74-0a12-4643-ab97-5239d575301f-config-data\") pod \"glance-default-external-api-0\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.617298 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9ng6\" (UniqueName: \"kubernetes.io/projected/d92abb74-0a12-4643-ab97-5239d575301f-kube-api-access-z9ng6\") pod \"glance-default-external-api-0\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.689655 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.814508 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.902208 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cfa39d8-de21-4a33-82f7-7b3d4085383b-logs\") pod \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.902284 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxg9b\" (UniqueName: \"kubernetes.io/projected/0cfa39d8-de21-4a33-82f7-7b3d4085383b-kube-api-access-cxg9b\") pod \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.902310 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cfa39d8-de21-4a33-82f7-7b3d4085383b-combined-ca-bundle\") pod \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.902333 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cfa39d8-de21-4a33-82f7-7b3d4085383b-config-data\") pod \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.902371 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cfa39d8-de21-4a33-82f7-7b3d4085383b-scripts\") pod \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.902483 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0cfa39d8-de21-4a33-82f7-7b3d4085383b-httpd-run\") pod \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.902522 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0cfa39d8-de21-4a33-82f7-7b3d4085383b-ceph\") pod \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\" (UID: \"0cfa39d8-de21-4a33-82f7-7b3d4085383b\") " Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.904285 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cfa39d8-de21-4a33-82f7-7b3d4085383b-logs" (OuterVolumeSpecName: "logs") pod "0cfa39d8-de21-4a33-82f7-7b3d4085383b" (UID: "0cfa39d8-de21-4a33-82f7-7b3d4085383b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.905818 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cfa39d8-de21-4a33-82f7-7b3d4085383b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0cfa39d8-de21-4a33-82f7-7b3d4085383b" (UID: "0cfa39d8-de21-4a33-82f7-7b3d4085383b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.908812 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cfa39d8-de21-4a33-82f7-7b3d4085383b-scripts" (OuterVolumeSpecName: "scripts") pod "0cfa39d8-de21-4a33-82f7-7b3d4085383b" (UID: "0cfa39d8-de21-4a33-82f7-7b3d4085383b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.952542 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cfa39d8-de21-4a33-82f7-7b3d4085383b-ceph" (OuterVolumeSpecName: "ceph") pod "0cfa39d8-de21-4a33-82f7-7b3d4085383b" (UID: "0cfa39d8-de21-4a33-82f7-7b3d4085383b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.952661 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cfa39d8-de21-4a33-82f7-7b3d4085383b-kube-api-access-cxg9b" (OuterVolumeSpecName: "kube-api-access-cxg9b") pod "0cfa39d8-de21-4a33-82f7-7b3d4085383b" (UID: "0cfa39d8-de21-4a33-82f7-7b3d4085383b"). InnerVolumeSpecName "kube-api-access-cxg9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.961342 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cfa39d8-de21-4a33-82f7-7b3d4085383b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0cfa39d8-de21-4a33-82f7-7b3d4085383b" (UID: "0cfa39d8-de21-4a33-82f7-7b3d4085383b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:14:28 crc kubenswrapper[4972]: I1121 11:14:28.972978 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cfa39d8-de21-4a33-82f7-7b3d4085383b-config-data" (OuterVolumeSpecName: "config-data") pod "0cfa39d8-de21-4a33-82f7-7b3d4085383b" (UID: "0cfa39d8-de21-4a33-82f7-7b3d4085383b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.004665 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0cfa39d8-de21-4a33-82f7-7b3d4085383b-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.004704 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cfa39d8-de21-4a33-82f7-7b3d4085383b-logs\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.004719 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxg9b\" (UniqueName: \"kubernetes.io/projected/0cfa39d8-de21-4a33-82f7-7b3d4085383b-kube-api-access-cxg9b\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.004736 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cfa39d8-de21-4a33-82f7-7b3d4085383b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.004748 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cfa39d8-de21-4a33-82f7-7b3d4085383b-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.004802 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cfa39d8-de21-4a33-82f7-7b3d4085383b-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.004811 4972 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0cfa39d8-de21-4a33-82f7-7b3d4085383b-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.258385 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.286573 4972 generic.go:334] "Generic (PLEG): container finished" podID="0cfa39d8-de21-4a33-82f7-7b3d4085383b" containerID="abfc94f0948320bea1d0a7c79deddf90e4342d6dce22b76e70a695032af7ba97" exitCode=0 Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.286624 4972 generic.go:334] "Generic (PLEG): container finished" podID="0cfa39d8-de21-4a33-82f7-7b3d4085383b" containerID="1ac867751177c0f4327c0488661b8641b2d6653afc08753fdd0b10ee6f5dca29" exitCode=143 Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.286702 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0cfa39d8-de21-4a33-82f7-7b3d4085383b","Type":"ContainerDied","Data":"abfc94f0948320bea1d0a7c79deddf90e4342d6dce22b76e70a695032af7ba97"} Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.286749 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0cfa39d8-de21-4a33-82f7-7b3d4085383b","Type":"ContainerDied","Data":"1ac867751177c0f4327c0488661b8641b2d6653afc08753fdd0b10ee6f5dca29"} Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.286770 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0cfa39d8-de21-4a33-82f7-7b3d4085383b","Type":"ContainerDied","Data":"6259084478a0660abcb1f94cfb42387f653de1d2e4ecc6a8a54b77192c2c2739"} Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.286797 
4972 scope.go:117] "RemoveContainer" containerID="abfc94f0948320bea1d0a7c79deddf90e4342d6dce22b76e70a695032af7ba97" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.287064 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.290379 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d92abb74-0a12-4643-ab97-5239d575301f","Type":"ContainerStarted","Data":"471650c81838c4a31bdcab829ec245c4ca13856b466f917be2a4e3eef161545b"} Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.328250 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.343554 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.361295 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.361629 4972 scope.go:117] "RemoveContainer" containerID="1ac867751177c0f4327c0488661b8641b2d6653afc08753fdd0b10ee6f5dca29" Nov 21 11:14:29 crc kubenswrapper[4972]: E1121 11:14:29.361806 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cfa39d8-de21-4a33-82f7-7b3d4085383b" containerName="glance-httpd" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.361843 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cfa39d8-de21-4a33-82f7-7b3d4085383b" containerName="glance-httpd" Nov 21 11:14:29 crc kubenswrapper[4972]: E1121 11:14:29.361868 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cfa39d8-de21-4a33-82f7-7b3d4085383b" containerName="glance-log" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.361876 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cfa39d8-de21-4a33-82f7-7b3d4085383b" containerName="glance-log" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.362087 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cfa39d8-de21-4a33-82f7-7b3d4085383b" containerName="glance-httpd" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.362123 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cfa39d8-de21-4a33-82f7-7b3d4085383b" containerName="glance-log" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.363307 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.368348 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.371612 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.408355 4972 scope.go:117] "RemoveContainer" containerID="abfc94f0948320bea1d0a7c79deddf90e4342d6dce22b76e70a695032af7ba97" Nov 21 11:14:29 crc kubenswrapper[4972]: E1121 11:14:29.409339 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abfc94f0948320bea1d0a7c79deddf90e4342d6dce22b76e70a695032af7ba97\": container with ID starting with abfc94f0948320bea1d0a7c79deddf90e4342d6dce22b76e70a695032af7ba97 not found: ID does not exist" containerID="abfc94f0948320bea1d0a7c79deddf90e4342d6dce22b76e70a695032af7ba97" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.409370 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abfc94f0948320bea1d0a7c79deddf90e4342d6dce22b76e70a695032af7ba97"} err="failed to get container status \"abfc94f0948320bea1d0a7c79deddf90e4342d6dce22b76e70a695032af7ba97\": rpc error: code = NotFound desc = could not find container \"abfc94f0948320bea1d0a7c79deddf90e4342d6dce22b76e70a695032af7ba97\": container with ID starting with abfc94f0948320bea1d0a7c79deddf90e4342d6dce22b76e70a695032af7ba97 not found: ID does not exist" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.409548 4972 scope.go:117] "RemoveContainer" containerID="1ac867751177c0f4327c0488661b8641b2d6653afc08753fdd0b10ee6f5dca29" Nov 21 11:14:29 crc kubenswrapper[4972]: E1121 11:14:29.410210 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ac867751177c0f4327c0488661b8641b2d6653afc08753fdd0b10ee6f5dca29\": container with ID starting with 1ac867751177c0f4327c0488661b8641b2d6653afc08753fdd0b10ee6f5dca29 not found: ID does not exist" containerID="1ac867751177c0f4327c0488661b8641b2d6653afc08753fdd0b10ee6f5dca29" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.410263 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ac867751177c0f4327c0488661b8641b2d6653afc08753fdd0b10ee6f5dca29"} err="failed to get container status \"1ac867751177c0f4327c0488661b8641b2d6653afc08753fdd0b10ee6f5dca29\": rpc error: code = NotFound desc = could not find container \"1ac867751177c0f4327c0488661b8641b2d6653afc08753fdd0b10ee6f5dca29\": container with ID starting with 1ac867751177c0f4327c0488661b8641b2d6653afc08753fdd0b10ee6f5dca29 not found: ID does not exist" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.410295 4972 scope.go:117] "RemoveContainer" containerID="abfc94f0948320bea1d0a7c79deddf90e4342d6dce22b76e70a695032af7ba97" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.410777 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abfc94f0948320bea1d0a7c79deddf90e4342d6dce22b76e70a695032af7ba97"} err="failed to get container status \"abfc94f0948320bea1d0a7c79deddf90e4342d6dce22b76e70a695032af7ba97\": rpc error: code = NotFound desc = could not find container \"abfc94f0948320bea1d0a7c79deddf90e4342d6dce22b76e70a695032af7ba97\": container with ID 
starting with abfc94f0948320bea1d0a7c79deddf90e4342d6dce22b76e70a695032af7ba97 not found: ID does not exist" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.410803 4972 scope.go:117] "RemoveContainer" containerID="1ac867751177c0f4327c0488661b8641b2d6653afc08753fdd0b10ee6f5dca29" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.413239 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ac867751177c0f4327c0488661b8641b2d6653afc08753fdd0b10ee6f5dca29"} err="failed to get container status \"1ac867751177c0f4327c0488661b8641b2d6653afc08753fdd0b10ee6f5dca29\": rpc error: code = NotFound desc = could not find container \"1ac867751177c0f4327c0488661b8641b2d6653afc08753fdd0b10ee6f5dca29\": container with ID starting with 1ac867751177c0f4327c0488661b8641b2d6653afc08753fdd0b10ee6f5dca29 not found: ID does not exist" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.514134 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/65c73566-904d-4e66-a7c3-5ee16b691565-ceph\") pod \"glance-default-internal-api-0\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.514199 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf6bw\" (UniqueName: \"kubernetes.io/projected/65c73566-904d-4e66-a7c3-5ee16b691565-kube-api-access-jf6bw\") pod \"glance-default-internal-api-0\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.514228 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c73566-904d-4e66-a7c3-5ee16b691565-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.514428 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65c73566-904d-4e66-a7c3-5ee16b691565-scripts\") pod \"glance-default-internal-api-0\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.514502 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65c73566-904d-4e66-a7c3-5ee16b691565-config-data\") pod \"glance-default-internal-api-0\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.514581 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65c73566-904d-4e66-a7c3-5ee16b691565-logs\") pod \"glance-default-internal-api-0\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.514608 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/65c73566-904d-4e66-a7c3-5ee16b691565-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.616370 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65c73566-904d-4e66-a7c3-5ee16b691565-config-data\") pod \"glance-default-internal-api-0\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.616693 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65c73566-904d-4e66-a7c3-5ee16b691565-logs\") pod \"glance-default-internal-api-0\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.616716 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/65c73566-904d-4e66-a7c3-5ee16b691565-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.616859 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/65c73566-904d-4e66-a7c3-5ee16b691565-ceph\") pod \"glance-default-internal-api-0\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.616886 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf6bw\" (UniqueName: \"kubernetes.io/projected/65c73566-904d-4e66-a7c3-5ee16b691565-kube-api-access-jf6bw\") pod \"glance-default-internal-api-0\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.616911 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c73566-904d-4e66-a7c3-5ee16b691565-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.616957 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65c73566-904d-4e66-a7c3-5ee16b691565-scripts\") pod \"glance-default-internal-api-0\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.617322 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/65c73566-904d-4e66-a7c3-5ee16b691565-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.617371 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65c73566-904d-4e66-a7c3-5ee16b691565-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"65c73566-904d-4e66-a7c3-5ee16b691565\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.621131 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65c73566-904d-4e66-a7c3-5ee16b691565-scripts\") pod \"glance-default-internal-api-0\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.621159 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/65c73566-904d-4e66-a7c3-5ee16b691565-ceph\") pod \"glance-default-internal-api-0\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.621601 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65c73566-904d-4e66-a7c3-5ee16b691565-config-data\") pod \"glance-default-internal-api-0\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.632976 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c73566-904d-4e66-a7c3-5ee16b691565-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.643146 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf6bw\" (UniqueName: \"kubernetes.io/projected/65c73566-904d-4e66-a7c3-5ee16b691565-kube-api-access-jf6bw\") pod \"glance-default-internal-api-0\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.691344 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.796051 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cfa39d8-de21-4a33-82f7-7b3d4085383b" path="/var/lib/kubelet/pods/0cfa39d8-de21-4a33-82f7-7b3d4085383b/volumes" Nov 21 11:14:29 crc kubenswrapper[4972]: I1121 11:14:29.802923 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4c808e6-0794-4a1b-8291-b6193f3f8dc3" path="/var/lib/kubelet/pods/b4c808e6-0794-4a1b-8291-b6193f3f8dc3/volumes" Nov 21 11:14:30 crc kubenswrapper[4972]: I1121 11:14:30.293044 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 11:14:30 crc kubenswrapper[4972]: I1121 11:14:30.308318 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d92abb74-0a12-4643-ab97-5239d575301f","Type":"ContainerStarted","Data":"7595a9c724ea3ee3e320c47c0f60a0e2082a6b01ceb11c8c9aa16ff1d1a9014b"} Nov 21 11:14:30 crc kubenswrapper[4972]: I1121 11:14:30.311232 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"65c73566-904d-4e66-a7c3-5ee16b691565","Type":"ContainerStarted","Data":"e1dfee453c48525054057a2380e722091cc4369cb90cebb885b0f43d1c1b5174"} Nov 21 11:14:31 crc kubenswrapper[4972]: I1121 11:14:31.331057 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d92abb74-0a12-4643-ab97-5239d575301f","Type":"ContainerStarted","Data":"b745ce440a14b8d70fb298d8ab2a5a3de281c6c1b8877fc0b2aa5190fbd56b96"} Nov 21 11:14:31 crc kubenswrapper[4972]: I1121 11:14:31.334068 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"65c73566-904d-4e66-a7c3-5ee16b691565","Type":"ContainerStarted","Data":"fbfd982a646d2f786bb4dcad7b0ec57f6ba7572f48df02ef170c278fc4b4c6e8"} Nov 21 11:14:31 crc kubenswrapper[4972]: I1121 11:14:31.354172 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.354151322 podStartE2EDuration="3.354151322s" podCreationTimestamp="2025-11-21 11:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:14:31.350358762 +0000 UTC m=+5616.459501270" watchObservedRunningTime="2025-11-21 11:14:31.354151322 +0000 UTC m=+5616.463293820" Nov 21 11:14:32 crc kubenswrapper[4972]: I1121 11:14:32.347637 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"65c73566-904d-4e66-a7c3-5ee16b691565","Type":"ContainerStarted","Data":"990c8ed0bfa18e664e8ba651aea29c2da75a272f2da09bb1ba15e140fb0bc6ec"} Nov 21 11:14:32 crc kubenswrapper[4972]: I1121 11:14:32.395271 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.395241571 podStartE2EDuration="3.395241571s" podCreationTimestamp="2025-11-21 11:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:14:32.377860611 +0000 UTC m=+5617.487003149" watchObservedRunningTime="2025-11-21 11:14:32.395241571 +0000 UTC m=+5617.504384099" Nov 21 11:14:34 crc kubenswrapper[4972]: I1121 11:14:34.893113 4972 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" Nov 21 11:14:34 crc kubenswrapper[4972]: I1121 11:14:34.984185 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848679d585-l8m9p"] Nov 21 11:14:34 crc kubenswrapper[4972]: I1121 11:14:34.984562 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848679d585-l8m9p" podUID="3eeaefff-9150-4268-81c3-10ad05d6a600" containerName="dnsmasq-dns" containerID="cri-o://85dc7234f489b4b336c664aa2ea5c958b8bbf886c5a6cb9b85a63988228c6fe6" gracePeriod=10 Nov 21 11:14:35 crc kubenswrapper[4972]: I1121 11:14:35.397347 4972 generic.go:334] "Generic (PLEG): container finished" podID="3eeaefff-9150-4268-81c3-10ad05d6a600" containerID="85dc7234f489b4b336c664aa2ea5c958b8bbf886c5a6cb9b85a63988228c6fe6" exitCode=0 Nov 21 11:14:35 crc kubenswrapper[4972]: I1121 11:14:35.397551 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848679d585-l8m9p" event={"ID":"3eeaefff-9150-4268-81c3-10ad05d6a600","Type":"ContainerDied","Data":"85dc7234f489b4b336c664aa2ea5c958b8bbf886c5a6cb9b85a63988228c6fe6"} Nov 21 11:14:35 crc kubenswrapper[4972]: I1121 11:14:35.397627 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848679d585-l8m9p" event={"ID":"3eeaefff-9150-4268-81c3-10ad05d6a600","Type":"ContainerDied","Data":"e3a2046bb714d25969d8ab2d1be762835e255162ed981cff33375d95edbbc289"} Nov 21 11:14:35 crc kubenswrapper[4972]: I1121 11:14:35.397644 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3a2046bb714d25969d8ab2d1be762835e255162ed981cff33375d95edbbc289" Nov 21 11:14:35 crc kubenswrapper[4972]: I1121 11:14:35.467228 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848679d585-l8m9p" Nov 21 11:14:35 crc kubenswrapper[4972]: I1121 11:14:35.644193 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-ovsdbserver-sb\") pod \"3eeaefff-9150-4268-81c3-10ad05d6a600\" (UID: \"3eeaefff-9150-4268-81c3-10ad05d6a600\") " Nov 21 11:14:35 crc kubenswrapper[4972]: I1121 11:14:35.644288 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72dq6\" (UniqueName: \"kubernetes.io/projected/3eeaefff-9150-4268-81c3-10ad05d6a600-kube-api-access-72dq6\") pod \"3eeaefff-9150-4268-81c3-10ad05d6a600\" (UID: \"3eeaefff-9150-4268-81c3-10ad05d6a600\") " Nov 21 11:14:35 crc kubenswrapper[4972]: I1121 11:14:35.644405 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-config\") pod \"3eeaefff-9150-4268-81c3-10ad05d6a600\" (UID: \"3eeaefff-9150-4268-81c3-10ad05d6a600\") " Nov 21 11:14:35 crc kubenswrapper[4972]: I1121 11:14:35.644525 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-dns-svc\") pod \"3eeaefff-9150-4268-81c3-10ad05d6a600\" (UID: \"3eeaefff-9150-4268-81c3-10ad05d6a600\") " Nov 21 11:14:35 crc kubenswrapper[4972]: I1121 11:14:35.644557 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-ovsdbserver-nb\") pod \"3eeaefff-9150-4268-81c3-10ad05d6a600\" (UID: \"3eeaefff-9150-4268-81c3-10ad05d6a600\") " Nov 21 11:14:35 crc kubenswrapper[4972]: I1121 11:14:35.657034 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eeaefff-9150-4268-81c3-10ad05d6a600-kube-api-access-72dq6" (OuterVolumeSpecName: "kube-api-access-72dq6") pod "3eeaefff-9150-4268-81c3-10ad05d6a600" (UID: "3eeaefff-9150-4268-81c3-10ad05d6a600"). InnerVolumeSpecName "kube-api-access-72dq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:14:35 crc kubenswrapper[4972]: I1121 11:14:35.693869 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3eeaefff-9150-4268-81c3-10ad05d6a600" (UID: "3eeaefff-9150-4268-81c3-10ad05d6a600"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:14:35 crc kubenswrapper[4972]: I1121 11:14:35.703087 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3eeaefff-9150-4268-81c3-10ad05d6a600" (UID: "3eeaefff-9150-4268-81c3-10ad05d6a600"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:14:35 crc kubenswrapper[4972]: I1121 11:14:35.717180 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-config" (OuterVolumeSpecName: "config") pod "3eeaefff-9150-4268-81c3-10ad05d6a600" (UID: "3eeaefff-9150-4268-81c3-10ad05d6a600"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:14:35 crc kubenswrapper[4972]: I1121 11:14:35.723524 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3eeaefff-9150-4268-81c3-10ad05d6a600" (UID: "3eeaefff-9150-4268-81c3-10ad05d6a600"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:14:35 crc kubenswrapper[4972]: I1121 11:14:35.747350 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-config\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:35 crc kubenswrapper[4972]: I1121 11:14:35.747406 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:35 crc kubenswrapper[4972]: I1121 11:14:35.747430 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:35 crc kubenswrapper[4972]: I1121 11:14:35.747452 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eeaefff-9150-4268-81c3-10ad05d6a600-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:35 crc kubenswrapper[4972]: I1121 11:14:35.747473 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72dq6\" (UniqueName: \"kubernetes.io/projected/3eeaefff-9150-4268-81c3-10ad05d6a600-kube-api-access-72dq6\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:36 crc kubenswrapper[4972]: I1121 11:14:36.409001 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848679d585-l8m9p" Nov 21 11:14:36 crc kubenswrapper[4972]: I1121 11:14:36.443056 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848679d585-l8m9p"] Nov 21 11:14:36 crc kubenswrapper[4972]: I1121 11:14:36.454782 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848679d585-l8m9p"] Nov 21 11:14:37 crc kubenswrapper[4972]: I1121 11:14:37.777235 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eeaefff-9150-4268-81c3-10ad05d6a600" path="/var/lib/kubelet/pods/3eeaefff-9150-4268-81c3-10ad05d6a600/volumes" Nov 21 11:14:38 crc kubenswrapper[4972]: I1121 11:14:38.690996 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 21 11:14:38 crc kubenswrapper[4972]: I1121 11:14:38.691522 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 21 11:14:38 crc kubenswrapper[4972]: I1121 11:14:38.730182 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 21 11:14:38 crc kubenswrapper[4972]: I1121 11:14:38.766689 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 21 11:14:39 crc kubenswrapper[4972]: I1121 11:14:39.460217 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 21 11:14:39 crc kubenswrapper[4972]: I1121 11:14:39.460276 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 21 11:14:39 crc kubenswrapper[4972]: I1121 11:14:39.691772 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 21 11:14:39 crc kubenswrapper[4972]: I1121 11:14:39.691906 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 21 11:14:39 crc kubenswrapper[4972]: I1121 11:14:39.732558 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 21 11:14:39 crc kubenswrapper[4972]: I1121 11:14:39.783776 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 21 11:14:40 crc kubenswrapper[4972]: I1121 11:14:40.472411 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 21 11:14:40 crc kubenswrapper[4972]: I1121 11:14:40.472500 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 21 11:14:41 crc kubenswrapper[4972]: I1121 11:14:41.366187 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 21 11:14:41 crc kubenswrapper[4972]: I1121 11:14:41.482632 4972 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 11:14:41 crc kubenswrapper[4972]: I1121 11:14:41.580045 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 21 11:14:42 crc kubenswrapper[4972]: I1121 11:14:42.388371 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 21 11:14:42 crc 
kubenswrapper[4972]: I1121 11:14:42.389118 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.227968 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-5dgr8"] Nov 21 11:14:48 crc kubenswrapper[4972]: E1121 11:14:48.228973 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eeaefff-9150-4268-81c3-10ad05d6a600" containerName="init" Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.228988 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eeaefff-9150-4268-81c3-10ad05d6a600" containerName="init" Nov 21 11:14:48 crc kubenswrapper[4972]: E1121 11:14:48.229027 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eeaefff-9150-4268-81c3-10ad05d6a600" containerName="dnsmasq-dns" Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.229035 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eeaefff-9150-4268-81c3-10ad05d6a600" containerName="dnsmasq-dns" Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.229262 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eeaefff-9150-4268-81c3-10ad05d6a600" containerName="dnsmasq-dns" Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.230063 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5dgr8" Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.250927 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5dgr8"] Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.318013 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ab013d1-8865-43f7-a76b-0b6b5383aa23-operator-scripts\") pod \"placement-db-create-5dgr8\" (UID: \"6ab013d1-8865-43f7-a76b-0b6b5383aa23\") " pod="openstack/placement-db-create-5dgr8" Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.318146 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j9bv\" (UniqueName: \"kubernetes.io/projected/6ab013d1-8865-43f7-a76b-0b6b5383aa23-kube-api-access-4j9bv\") pod \"placement-db-create-5dgr8\" (UID: \"6ab013d1-8865-43f7-a76b-0b6b5383aa23\") " pod="openstack/placement-db-create-5dgr8" Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.325806 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c7de-account-create-rx642"] Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.327189 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c7de-account-create-rx642" Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.330816 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.335806 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c7de-account-create-rx642"] Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.419380 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ab013d1-8865-43f7-a76b-0b6b5383aa23-operator-scripts\") pod \"placement-db-create-5dgr8\" (UID: \"6ab013d1-8865-43f7-a76b-0b6b5383aa23\") " pod="openstack/placement-db-create-5dgr8" Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.419514 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjfwl\" (UniqueName: \"kubernetes.io/projected/8f4a9500-7a33-48a5-949a-3bc12cf0ed61-kube-api-access-qjfwl\") pod \"placement-c7de-account-create-rx642\" (UID: \"8f4a9500-7a33-48a5-949a-3bc12cf0ed61\") " pod="openstack/placement-c7de-account-create-rx642" Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.419585 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j9bv\" (UniqueName: \"kubernetes.io/projected/6ab013d1-8865-43f7-a76b-0b6b5383aa23-kube-api-access-4j9bv\") pod \"placement-db-create-5dgr8\" (UID: \"6ab013d1-8865-43f7-a76b-0b6b5383aa23\") " pod="openstack/placement-db-create-5dgr8" Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.419623 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f4a9500-7a33-48a5-949a-3bc12cf0ed61-operator-scripts\") pod \"placement-c7de-account-create-rx642\" (UID: \"8f4a9500-7a33-48a5-949a-3bc12cf0ed61\") " pod="openstack/placement-c7de-account-create-rx642" Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.420950 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ab013d1-8865-43f7-a76b-0b6b5383aa23-operator-scripts\") pod \"placement-db-create-5dgr8\" (UID: \"6ab013d1-8865-43f7-a76b-0b6b5383aa23\") " pod="openstack/placement-db-create-5dgr8" Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.439717 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j9bv\" (UniqueName: \"kubernetes.io/projected/6ab013d1-8865-43f7-a76b-0b6b5383aa23-kube-api-access-4j9bv\") pod \"placement-db-create-5dgr8\" (UID: \"6ab013d1-8865-43f7-a76b-0b6b5383aa23\") " pod="openstack/placement-db-create-5dgr8" Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.521731 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjfwl\" (UniqueName: \"kubernetes.io/projected/8f4a9500-7a33-48a5-949a-3bc12cf0ed61-kube-api-access-qjfwl\") pod \"placement-c7de-account-create-rx642\" (UID: \"8f4a9500-7a33-48a5-949a-3bc12cf0ed61\") " pod="openstack/placement-c7de-account-create-rx642" Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.521792 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f4a9500-7a33-48a5-949a-3bc12cf0ed61-operator-scripts\") pod \"placement-c7de-account-create-rx642\" (UID: 
\"8f4a9500-7a33-48a5-949a-3bc12cf0ed61\") " pod="openstack/placement-c7de-account-create-rx642" Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.522468 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f4a9500-7a33-48a5-949a-3bc12cf0ed61-operator-scripts\") pod \"placement-c7de-account-create-rx642\" (UID: \"8f4a9500-7a33-48a5-949a-3bc12cf0ed61\") " pod="openstack/placement-c7de-account-create-rx642" Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.543047 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjfwl\" (UniqueName: \"kubernetes.io/projected/8f4a9500-7a33-48a5-949a-3bc12cf0ed61-kube-api-access-qjfwl\") pod \"placement-c7de-account-create-rx642\" (UID: \"8f4a9500-7a33-48a5-949a-3bc12cf0ed61\") " pod="openstack/placement-c7de-account-create-rx642" Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.593677 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5dgr8" Nov 21 11:14:48 crc kubenswrapper[4972]: I1121 11:14:48.647815 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c7de-account-create-rx642" Nov 21 11:14:49 crc kubenswrapper[4972]: I1121 11:14:49.063853 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5dgr8"] Nov 21 11:14:49 crc kubenswrapper[4972]: I1121 11:14:49.139365 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c7de-account-create-rx642"] Nov 21 11:14:49 crc kubenswrapper[4972]: W1121 11:14:49.155057 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f4a9500_7a33_48a5_949a_3bc12cf0ed61.slice/crio-79c54f05caba258adaaa2c7b6ee796c269d623db06a83e70f0537b9b7d557122 WatchSource:0}: Error finding container 79c54f05caba258adaaa2c7b6ee796c269d623db06a83e70f0537b9b7d557122: Status 404 returned error can't find the container with id 79c54f05caba258adaaa2c7b6ee796c269d623db06a83e70f0537b9b7d557122 Nov 21 11:14:49 crc kubenswrapper[4972]: I1121 11:14:49.566733 4972 generic.go:334] "Generic (PLEG): container finished" podID="8f4a9500-7a33-48a5-949a-3bc12cf0ed61" containerID="ad1e37b6435ae9a805bda8848f14287a5626de47f09d8a583a46f7d82157b38c" exitCode=0 Nov 21 11:14:49 crc kubenswrapper[4972]: I1121 11:14:49.567170 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c7de-account-create-rx642" event={"ID":"8f4a9500-7a33-48a5-949a-3bc12cf0ed61","Type":"ContainerDied","Data":"ad1e37b6435ae9a805bda8848f14287a5626de47f09d8a583a46f7d82157b38c"} Nov 21 11:14:49 crc kubenswrapper[4972]: I1121 11:14:49.567198 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c7de-account-create-rx642" event={"ID":"8f4a9500-7a33-48a5-949a-3bc12cf0ed61","Type":"ContainerStarted","Data":"79c54f05caba258adaaa2c7b6ee796c269d623db06a83e70f0537b9b7d557122"} Nov 21 11:14:49 crc kubenswrapper[4972]: I1121 11:14:49.569499 4972 generic.go:334] "Generic (PLEG): container finished" podID="6ab013d1-8865-43f7-a76b-0b6b5383aa23" containerID="43eb3b1024884ace9144303797d7dfc88b383e6bb8ee77f641ced4f2e723c67d" exitCode=0 Nov 21 11:14:49 crc kubenswrapper[4972]: I1121 11:14:49.569553 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5dgr8" 
event={"ID":"6ab013d1-8865-43f7-a76b-0b6b5383aa23","Type":"ContainerDied","Data":"43eb3b1024884ace9144303797d7dfc88b383e6bb8ee77f641ced4f2e723c67d"} Nov 21 11:14:49 crc kubenswrapper[4972]: I1121 11:14:49.569583 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5dgr8" event={"ID":"6ab013d1-8865-43f7-a76b-0b6b5383aa23","Type":"ContainerStarted","Data":"920316603db680376cddf07e15b435a369dacf568a9c888993b5a9d8ab3f2b66"} Nov 21 11:14:51 crc kubenswrapper[4972]: I1121 11:14:51.022298 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c7de-account-create-rx642" Nov 21 11:14:51 crc kubenswrapper[4972]: I1121 11:14:51.035719 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5dgr8" Nov 21 11:14:51 crc kubenswrapper[4972]: I1121 11:14:51.199438 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f4a9500-7a33-48a5-949a-3bc12cf0ed61-operator-scripts\") pod \"8f4a9500-7a33-48a5-949a-3bc12cf0ed61\" (UID: \"8f4a9500-7a33-48a5-949a-3bc12cf0ed61\") " Nov 21 11:14:51 crc kubenswrapper[4972]: I1121 11:14:51.199577 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjfwl\" (UniqueName: \"kubernetes.io/projected/8f4a9500-7a33-48a5-949a-3bc12cf0ed61-kube-api-access-qjfwl\") pod \"8f4a9500-7a33-48a5-949a-3bc12cf0ed61\" (UID: \"8f4a9500-7a33-48a5-949a-3bc12cf0ed61\") " Nov 21 11:14:51 crc kubenswrapper[4972]: I1121 11:14:51.199662 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ab013d1-8865-43f7-a76b-0b6b5383aa23-operator-scripts\") pod \"6ab013d1-8865-43f7-a76b-0b6b5383aa23\" (UID: \"6ab013d1-8865-43f7-a76b-0b6b5383aa23\") " Nov 21 11:14:51 crc kubenswrapper[4972]: I1121 11:14:51.199869 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j9bv\" (UniqueName: \"kubernetes.io/projected/6ab013d1-8865-43f7-a76b-0b6b5383aa23-kube-api-access-4j9bv\") pod \"6ab013d1-8865-43f7-a76b-0b6b5383aa23\" (UID: \"6ab013d1-8865-43f7-a76b-0b6b5383aa23\") " Nov 21 11:14:51 crc kubenswrapper[4972]: I1121 11:14:51.200931 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f4a9500-7a33-48a5-949a-3bc12cf0ed61-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f4a9500-7a33-48a5-949a-3bc12cf0ed61" (UID: "8f4a9500-7a33-48a5-949a-3bc12cf0ed61"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:14:51 crc kubenswrapper[4972]: I1121 11:14:51.200986 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ab013d1-8865-43f7-a76b-0b6b5383aa23-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ab013d1-8865-43f7-a76b-0b6b5383aa23" (UID: "6ab013d1-8865-43f7-a76b-0b6b5383aa23"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:14:51 crc kubenswrapper[4972]: I1121 11:14:51.208173 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ab013d1-8865-43f7-a76b-0b6b5383aa23-kube-api-access-4j9bv" (OuterVolumeSpecName: "kube-api-access-4j9bv") pod "6ab013d1-8865-43f7-a76b-0b6b5383aa23" (UID: "6ab013d1-8865-43f7-a76b-0b6b5383aa23"). 
InnerVolumeSpecName "kube-api-access-4j9bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:14:51 crc kubenswrapper[4972]: I1121 11:14:51.208314 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f4a9500-7a33-48a5-949a-3bc12cf0ed61-kube-api-access-qjfwl" (OuterVolumeSpecName: "kube-api-access-qjfwl") pod "8f4a9500-7a33-48a5-949a-3bc12cf0ed61" (UID: "8f4a9500-7a33-48a5-949a-3bc12cf0ed61"). InnerVolumeSpecName "kube-api-access-qjfwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:14:51 crc kubenswrapper[4972]: I1121 11:14:51.302241 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j9bv\" (UniqueName: \"kubernetes.io/projected/6ab013d1-8865-43f7-a76b-0b6b5383aa23-kube-api-access-4j9bv\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:51 crc kubenswrapper[4972]: I1121 11:14:51.302557 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f4a9500-7a33-48a5-949a-3bc12cf0ed61-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:51 crc kubenswrapper[4972]: I1121 11:14:51.302698 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjfwl\" (UniqueName: \"kubernetes.io/projected/8f4a9500-7a33-48a5-949a-3bc12cf0ed61-kube-api-access-qjfwl\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:51 crc kubenswrapper[4972]: I1121 11:14:51.303042 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ab013d1-8865-43f7-a76b-0b6b5383aa23-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:51 crc kubenswrapper[4972]: I1121 11:14:51.596026 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5dgr8" Nov 21 11:14:51 crc kubenswrapper[4972]: I1121 11:14:51.595953 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5dgr8" event={"ID":"6ab013d1-8865-43f7-a76b-0b6b5383aa23","Type":"ContainerDied","Data":"920316603db680376cddf07e15b435a369dacf568a9c888993b5a9d8ab3f2b66"} Nov 21 11:14:51 crc kubenswrapper[4972]: I1121 11:14:51.596292 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="920316603db680376cddf07e15b435a369dacf568a9c888993b5a9d8ab3f2b66" Nov 21 11:14:51 crc kubenswrapper[4972]: I1121 11:14:51.597996 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c7de-account-create-rx642" event={"ID":"8f4a9500-7a33-48a5-949a-3bc12cf0ed61","Type":"ContainerDied","Data":"79c54f05caba258adaaa2c7b6ee796c269d623db06a83e70f0537b9b7d557122"} Nov 21 11:14:51 crc kubenswrapper[4972]: I1121 11:14:51.598025 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79c54f05caba258adaaa2c7b6ee796c269d623db06a83e70f0537b9b7d557122" Nov 21 11:14:51 crc kubenswrapper[4972]: I1121 11:14:51.598092 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c7de-account-create-rx642" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.636177 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-bjtgf"] Nov 21 11:14:53 crc kubenswrapper[4972]: E1121 11:14:53.636764 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab013d1-8865-43f7-a76b-0b6b5383aa23" containerName="mariadb-database-create" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.636776 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab013d1-8865-43f7-a76b-0b6b5383aa23" containerName="mariadb-database-create" Nov 21 11:14:53 crc kubenswrapper[4972]: E1121 11:14:53.636801 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f4a9500-7a33-48a5-949a-3bc12cf0ed61" containerName="mariadb-account-create" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.636807 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f4a9500-7a33-48a5-949a-3bc12cf0ed61" containerName="mariadb-account-create" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.636992 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f4a9500-7a33-48a5-949a-3bc12cf0ed61" containerName="mariadb-account-create" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.637006 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ab013d1-8865-43f7-a76b-0b6b5383aa23" containerName="mariadb-database-create" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.637535 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bjtgf" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.647179 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bjtgf"] Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.648636 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.654401 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.655017 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zdjt7" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.656995 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26e55123-6d03-4c7c-aa57-40d72627784e-logs\") pod \"placement-db-sync-bjtgf\" (UID: \"26e55123-6d03-4c7c-aa57-40d72627784e\") " pod="openstack/placement-db-sync-bjtgf" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.657051 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e55123-6d03-4c7c-aa57-40d72627784e-scripts\") pod \"placement-db-sync-bjtgf\" (UID: \"26e55123-6d03-4c7c-aa57-40d72627784e\") " pod="openstack/placement-db-sync-bjtgf" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.657115 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e55123-6d03-4c7c-aa57-40d72627784e-combined-ca-bundle\") pod \"placement-db-sync-bjtgf\" (UID: \"26e55123-6d03-4c7c-aa57-40d72627784e\") " pod="openstack/placement-db-sync-bjtgf" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.657490 4972 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhqqh\" (UniqueName: \"kubernetes.io/projected/26e55123-6d03-4c7c-aa57-40d72627784e-kube-api-access-zhqqh\") pod \"placement-db-sync-bjtgf\" (UID: \"26e55123-6d03-4c7c-aa57-40d72627784e\") " pod="openstack/placement-db-sync-bjtgf" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.657655 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e55123-6d03-4c7c-aa57-40d72627784e-config-data\") pod \"placement-db-sync-bjtgf\" (UID: \"26e55123-6d03-4c7c-aa57-40d72627784e\") " pod="openstack/placement-db-sync-bjtgf" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.675406 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f8c4cb9bc-p66g8"] Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.677116 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.729577 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f8c4cb9bc-p66g8"] Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.763794 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e55123-6d03-4c7c-aa57-40d72627784e-config-data\") pod \"placement-db-sync-bjtgf\" (UID: \"26e55123-6d03-4c7c-aa57-40d72627784e\") " pod="openstack/placement-db-sync-bjtgf" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.763911 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26e55123-6d03-4c7c-aa57-40d72627784e-logs\") pod \"placement-db-sync-bjtgf\" (UID: \"26e55123-6d03-4c7c-aa57-40d72627784e\") " pod="openstack/placement-db-sync-bjtgf" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.763942 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e55123-6d03-4c7c-aa57-40d72627784e-scripts\") pod \"placement-db-sync-bjtgf\" (UID: \"26e55123-6d03-4c7c-aa57-40d72627784e\") " pod="openstack/placement-db-sync-bjtgf" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.763968 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e55123-6d03-4c7c-aa57-40d72627784e-combined-ca-bundle\") pod \"placement-db-sync-bjtgf\" (UID: \"26e55123-6d03-4c7c-aa57-40d72627784e\") " pod="openstack/placement-db-sync-bjtgf" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.764015 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhqqh\" (UniqueName: \"kubernetes.io/projected/26e55123-6d03-4c7c-aa57-40d72627784e-kube-api-access-zhqqh\") pod \"placement-db-sync-bjtgf\" (UID: \"26e55123-6d03-4c7c-aa57-40d72627784e\") " pod="openstack/placement-db-sync-bjtgf" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.768261 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26e55123-6d03-4c7c-aa57-40d72627784e-logs\") pod \"placement-db-sync-bjtgf\" (UID: \"26e55123-6d03-4c7c-aa57-40d72627784e\") " pod="openstack/placement-db-sync-bjtgf" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.780691 4972 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e55123-6d03-4c7c-aa57-40d72627784e-combined-ca-bundle\") pod \"placement-db-sync-bjtgf\" (UID: \"26e55123-6d03-4c7c-aa57-40d72627784e\") " pod="openstack/placement-db-sync-bjtgf" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.783564 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e55123-6d03-4c7c-aa57-40d72627784e-config-data\") pod \"placement-db-sync-bjtgf\" (UID: \"26e55123-6d03-4c7c-aa57-40d72627784e\") " pod="openstack/placement-db-sync-bjtgf" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.790476 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhqqh\" (UniqueName: \"kubernetes.io/projected/26e55123-6d03-4c7c-aa57-40d72627784e-kube-api-access-zhqqh\") pod \"placement-db-sync-bjtgf\" (UID: \"26e55123-6d03-4c7c-aa57-40d72627784e\") " pod="openstack/placement-db-sync-bjtgf" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.806490 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e55123-6d03-4c7c-aa57-40d72627784e-scripts\") pod \"placement-db-sync-bjtgf\" (UID: \"26e55123-6d03-4c7c-aa57-40d72627784e\") " pod="openstack/placement-db-sync-bjtgf" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.868356 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-config\") pod \"dnsmasq-dns-7f8c4cb9bc-p66g8\" (UID: \"893f2fb2-c476-44ae-a954-6d7463ccf560\") " pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.868437 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-dns-svc\") pod \"dnsmasq-dns-7f8c4cb9bc-p66g8\" (UID: \"893f2fb2-c476-44ae-a954-6d7463ccf560\") " pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.868478 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-ovsdbserver-sb\") pod \"dnsmasq-dns-7f8c4cb9bc-p66g8\" (UID: \"893f2fb2-c476-44ae-a954-6d7463ccf560\") " pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.868548 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-ovsdbserver-nb\") pod \"dnsmasq-dns-7f8c4cb9bc-p66g8\" (UID: \"893f2fb2-c476-44ae-a954-6d7463ccf560\") " pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.868601 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwmw4\" (UniqueName: \"kubernetes.io/projected/893f2fb2-c476-44ae-a954-6d7463ccf560-kube-api-access-xwmw4\") pod \"dnsmasq-dns-7f8c4cb9bc-p66g8\" (UID: \"893f2fb2-c476-44ae-a954-6d7463ccf560\") " pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.970411 4972 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-ovsdbserver-sb\") pod \"dnsmasq-dns-7f8c4cb9bc-p66g8\" (UID: \"893f2fb2-c476-44ae-a954-6d7463ccf560\") " pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.970518 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-ovsdbserver-nb\") pod \"dnsmasq-dns-7f8c4cb9bc-p66g8\" (UID: \"893f2fb2-c476-44ae-a954-6d7463ccf560\") " pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.970577 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwmw4\" (UniqueName: \"kubernetes.io/projected/893f2fb2-c476-44ae-a954-6d7463ccf560-kube-api-access-xwmw4\") pod \"dnsmasq-dns-7f8c4cb9bc-p66g8\" (UID: \"893f2fb2-c476-44ae-a954-6d7463ccf560\") " pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.970618 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-config\") pod \"dnsmasq-dns-7f8c4cb9bc-p66g8\" (UID: \"893f2fb2-c476-44ae-a954-6d7463ccf560\") " pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.970654 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-dns-svc\") pod \"dnsmasq-dns-7f8c4cb9bc-p66g8\" (UID: \"893f2fb2-c476-44ae-a954-6d7463ccf560\") " pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.971601 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-ovsdbserver-sb\") pod \"dnsmasq-dns-7f8c4cb9bc-p66g8\" (UID: \"893f2fb2-c476-44ae-a954-6d7463ccf560\") " pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.971742 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-ovsdbserver-nb\") pod \"dnsmasq-dns-7f8c4cb9bc-p66g8\" (UID: \"893f2fb2-c476-44ae-a954-6d7463ccf560\") " pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.971788 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-config\") pod \"dnsmasq-dns-7f8c4cb9bc-p66g8\" (UID: \"893f2fb2-c476-44ae-a954-6d7463ccf560\") " pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.971794 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-dns-svc\") pod \"dnsmasq-dns-7f8c4cb9bc-p66g8\" (UID: \"893f2fb2-c476-44ae-a954-6d7463ccf560\") " pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.974125 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bjtgf" Nov 21 11:14:53 crc kubenswrapper[4972]: I1121 11:14:53.988145 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwmw4\" (UniqueName: \"kubernetes.io/projected/893f2fb2-c476-44ae-a954-6d7463ccf560-kube-api-access-xwmw4\") pod \"dnsmasq-dns-7f8c4cb9bc-p66g8\" (UID: \"893f2fb2-c476-44ae-a954-6d7463ccf560\") " pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" Nov 21 11:14:54 crc kubenswrapper[4972]: I1121 11:14:54.016359 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" Nov 21 11:14:54 crc kubenswrapper[4972]: I1121 11:14:54.352584 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f8c4cb9bc-p66g8"] Nov 21 11:14:54 crc kubenswrapper[4972]: I1121 11:14:54.479994 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bjtgf"] Nov 21 11:14:54 crc kubenswrapper[4972]: I1121 11:14:54.631244 4972 generic.go:334] "Generic (PLEG): container finished" podID="893f2fb2-c476-44ae-a954-6d7463ccf560" containerID="364c58d7b49f66d45c9ecfa017d911d3ab693a0eb666fc8fc6e0e3e964032a77" exitCode=0 Nov 21 11:14:54 crc kubenswrapper[4972]: I1121 11:14:54.631327 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" event={"ID":"893f2fb2-c476-44ae-a954-6d7463ccf560","Type":"ContainerDied","Data":"364c58d7b49f66d45c9ecfa017d911d3ab693a0eb666fc8fc6e0e3e964032a77"} Nov 21 11:14:54 crc kubenswrapper[4972]: I1121 11:14:54.631392 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" event={"ID":"893f2fb2-c476-44ae-a954-6d7463ccf560","Type":"ContainerStarted","Data":"e194c48c1aac2d9b588df3e5b7799dd5982bf75d65cae2c8d4e44bd0803d88c3"} Nov 21 11:14:54 crc kubenswrapper[4972]: I1121 11:14:54.635054 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bjtgf" event={"ID":"26e55123-6d03-4c7c-aa57-40d72627784e","Type":"ContainerStarted","Data":"f73a589d5af6812764da0ce5c34377dd14de6ea9f11a4da6aa4e7ba25890a521"} Nov 21 11:14:55 crc kubenswrapper[4972]: I1121 11:14:55.648696 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" event={"ID":"893f2fb2-c476-44ae-a954-6d7463ccf560","Type":"ContainerStarted","Data":"97c172179fe31c9f6482faeb8d8f38b032ef13c14f894bda6505d29c482dcf14"} Nov 21 11:14:55 crc kubenswrapper[4972]: I1121 11:14:55.649098 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" Nov 21 11:14:55 crc kubenswrapper[4972]: I1121 11:14:55.650411 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bjtgf" event={"ID":"26e55123-6d03-4c7c-aa57-40d72627784e","Type":"ContainerStarted","Data":"83e6ffa81950d7d1a1705f6f604258d788d90dbd3c48b97804b4fd001a869cc4"} Nov 21 11:14:55 crc kubenswrapper[4972]: I1121 11:14:55.678949 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" podStartSLOduration=2.678924698 podStartE2EDuration="2.678924698s" podCreationTimestamp="2025-11-21 11:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:14:55.67258497 +0000 UTC m=+5640.781727498" watchObservedRunningTime="2025-11-21 11:14:55.678924698 +0000 UTC m=+5640.788067206" Nov 21 11:14:55 crc 
kubenswrapper[4972]: I1121 11:14:55.704770 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-bjtgf" podStartSLOduration=2.704741692 podStartE2EDuration="2.704741692s" podCreationTimestamp="2025-11-21 11:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:14:55.694742867 +0000 UTC m=+5640.803885405" watchObservedRunningTime="2025-11-21 11:14:55.704741692 +0000 UTC m=+5640.813884230" Nov 21 11:14:56 crc kubenswrapper[4972]: I1121 11:14:56.178707 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:14:56 crc kubenswrapper[4972]: I1121 11:14:56.179185 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:14:56 crc kubenswrapper[4972]: I1121 11:14:56.666855 4972 generic.go:334] "Generic (PLEG): container finished" podID="26e55123-6d03-4c7c-aa57-40d72627784e" containerID="83e6ffa81950d7d1a1705f6f604258d788d90dbd3c48b97804b4fd001a869cc4" exitCode=0 Nov 21 11:14:56 crc kubenswrapper[4972]: I1121 11:14:56.666976 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bjtgf" event={"ID":"26e55123-6d03-4c7c-aa57-40d72627784e","Type":"ContainerDied","Data":"83e6ffa81950d7d1a1705f6f604258d788d90dbd3c48b97804b4fd001a869cc4"} Nov 21 11:14:58 crc kubenswrapper[4972]: I1121 11:14:58.134779 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bjtgf" Nov 21 11:14:58 crc kubenswrapper[4972]: I1121 11:14:58.168406 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e55123-6d03-4c7c-aa57-40d72627784e-scripts\") pod \"26e55123-6d03-4c7c-aa57-40d72627784e\" (UID: \"26e55123-6d03-4c7c-aa57-40d72627784e\") " Nov 21 11:14:58 crc kubenswrapper[4972]: I1121 11:14:58.168468 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhqqh\" (UniqueName: \"kubernetes.io/projected/26e55123-6d03-4c7c-aa57-40d72627784e-kube-api-access-zhqqh\") pod \"26e55123-6d03-4c7c-aa57-40d72627784e\" (UID: \"26e55123-6d03-4c7c-aa57-40d72627784e\") " Nov 21 11:14:58 crc kubenswrapper[4972]: I1121 11:14:58.168521 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e55123-6d03-4c7c-aa57-40d72627784e-config-data\") pod \"26e55123-6d03-4c7c-aa57-40d72627784e\" (UID: \"26e55123-6d03-4c7c-aa57-40d72627784e\") " Nov 21 11:14:58 crc kubenswrapper[4972]: I1121 11:14:58.168553 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26e55123-6d03-4c7c-aa57-40d72627784e-logs\") pod \"26e55123-6d03-4c7c-aa57-40d72627784e\" (UID: \"26e55123-6d03-4c7c-aa57-40d72627784e\") " Nov 21 11:14:58 crc kubenswrapper[4972]: I1121 11:14:58.168693 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e55123-6d03-4c7c-aa57-40d72627784e-combined-ca-bundle\") pod \"26e55123-6d03-4c7c-aa57-40d72627784e\" (UID: \"26e55123-6d03-4c7c-aa57-40d72627784e\") " Nov 21 11:14:58 crc kubenswrapper[4972]: I1121 11:14:58.170606 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26e55123-6d03-4c7c-aa57-40d72627784e-logs" (OuterVolumeSpecName: "logs") pod "26e55123-6d03-4c7c-aa57-40d72627784e" (UID: "26e55123-6d03-4c7c-aa57-40d72627784e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:14:58 crc kubenswrapper[4972]: I1121 11:14:58.179932 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26e55123-6d03-4c7c-aa57-40d72627784e-kube-api-access-zhqqh" (OuterVolumeSpecName: "kube-api-access-zhqqh") pod "26e55123-6d03-4c7c-aa57-40d72627784e" (UID: "26e55123-6d03-4c7c-aa57-40d72627784e"). InnerVolumeSpecName "kube-api-access-zhqqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:14:58 crc kubenswrapper[4972]: I1121 11:14:58.182307 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e55123-6d03-4c7c-aa57-40d72627784e-scripts" (OuterVolumeSpecName: "scripts") pod "26e55123-6d03-4c7c-aa57-40d72627784e" (UID: "26e55123-6d03-4c7c-aa57-40d72627784e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:14:58 crc kubenswrapper[4972]: I1121 11:14:58.200802 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e55123-6d03-4c7c-aa57-40d72627784e-config-data" (OuterVolumeSpecName: "config-data") pod "26e55123-6d03-4c7c-aa57-40d72627784e" (UID: "26e55123-6d03-4c7c-aa57-40d72627784e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:14:58 crc kubenswrapper[4972]: I1121 11:14:58.200957 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e55123-6d03-4c7c-aa57-40d72627784e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26e55123-6d03-4c7c-aa57-40d72627784e" (UID: "26e55123-6d03-4c7c-aa57-40d72627784e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:14:58 crc kubenswrapper[4972]: I1121 11:14:58.269954 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e55123-6d03-4c7c-aa57-40d72627784e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:58 crc kubenswrapper[4972]: I1121 11:14:58.269993 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e55123-6d03-4c7c-aa57-40d72627784e-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:58 crc kubenswrapper[4972]: I1121 11:14:58.270004 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhqqh\" (UniqueName: \"kubernetes.io/projected/26e55123-6d03-4c7c-aa57-40d72627784e-kube-api-access-zhqqh\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:58 crc kubenswrapper[4972]: I1121 11:14:58.270016 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e55123-6d03-4c7c-aa57-40d72627784e-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:58 crc kubenswrapper[4972]: I1121 11:14:58.270032 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26e55123-6d03-4c7c-aa57-40d72627784e-logs\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:58 crc kubenswrapper[4972]: I1121 11:14:58.695229 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bjtgf" event={"ID":"26e55123-6d03-4c7c-aa57-40d72627784e","Type":"ContainerDied","Data":"f73a589d5af6812764da0ce5c34377dd14de6ea9f11a4da6aa4e7ba25890a521"} Nov 21 11:14:58 crc kubenswrapper[4972]: I1121 11:14:58.695318 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f73a589d5af6812764da0ce5c34377dd14de6ea9f11a4da6aa4e7ba25890a521" Nov 21 11:14:58 crc kubenswrapper[4972]: I1121 11:14:58.695335 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bjtgf" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.019122 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.106860 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-746ff6c485-ccrh2"] Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.107354 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" podUID="445cf995-4393-496e-963e-42f1745d0610" containerName="dnsmasq-dns" containerID="cri-o://93b3705d8065315e1349d66d89558cf6dc1cddd4b26a7876f76105fabff2f8c9" gracePeriod=10 Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.267093 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-559fdb5d84-5bqm6"] Nov 21 11:14:59 crc kubenswrapper[4972]: E1121 11:14:59.267555 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e55123-6d03-4c7c-aa57-40d72627784e" containerName="placement-db-sync" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.267568 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e55123-6d03-4c7c-aa57-40d72627784e" containerName="placement-db-sync" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.267745 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="26e55123-6d03-4c7c-aa57-40d72627784e" containerName="placement-db-sync" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.268669 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-559fdb5d84-5bqm6" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.273575 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zdjt7" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.274159 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.277291 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.277994 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-559fdb5d84-5bqm6"] Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.396172 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b1646b4-4971-4f8c-9198-65ff1e995e5d-scripts\") pod \"placement-559fdb5d84-5bqm6\" (UID: \"2b1646b4-4971-4f8c-9198-65ff1e995e5d\") " pod="openstack/placement-559fdb5d84-5bqm6" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.396239 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b1646b4-4971-4f8c-9198-65ff1e995e5d-logs\") pod \"placement-559fdb5d84-5bqm6\" (UID: \"2b1646b4-4971-4f8c-9198-65ff1e995e5d\") " pod="openstack/placement-559fdb5d84-5bqm6" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.396300 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b1646b4-4971-4f8c-9198-65ff1e995e5d-config-data\") pod \"placement-559fdb5d84-5bqm6\" (UID: \"2b1646b4-4971-4f8c-9198-65ff1e995e5d\") " pod="openstack/placement-559fdb5d84-5bqm6" 
Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.396333 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b1646b4-4971-4f8c-9198-65ff1e995e5d-combined-ca-bundle\") pod \"placement-559fdb5d84-5bqm6\" (UID: \"2b1646b4-4971-4f8c-9198-65ff1e995e5d\") " pod="openstack/placement-559fdb5d84-5bqm6" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.396371 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8gbw\" (UniqueName: \"kubernetes.io/projected/2b1646b4-4971-4f8c-9198-65ff1e995e5d-kube-api-access-p8gbw\") pod \"placement-559fdb5d84-5bqm6\" (UID: \"2b1646b4-4971-4f8c-9198-65ff1e995e5d\") " pod="openstack/placement-559fdb5d84-5bqm6" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.498171 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b1646b4-4971-4f8c-9198-65ff1e995e5d-scripts\") pod \"placement-559fdb5d84-5bqm6\" (UID: \"2b1646b4-4971-4f8c-9198-65ff1e995e5d\") " pod="openstack/placement-559fdb5d84-5bqm6" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.498234 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b1646b4-4971-4f8c-9198-65ff1e995e5d-logs\") pod \"placement-559fdb5d84-5bqm6\" (UID: \"2b1646b4-4971-4f8c-9198-65ff1e995e5d\") " pod="openstack/placement-559fdb5d84-5bqm6" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.499231 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b1646b4-4971-4f8c-9198-65ff1e995e5d-config-data\") pod \"placement-559fdb5d84-5bqm6\" (UID: \"2b1646b4-4971-4f8c-9198-65ff1e995e5d\") " pod="openstack/placement-559fdb5d84-5bqm6" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.499333 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b1646b4-4971-4f8c-9198-65ff1e995e5d-combined-ca-bundle\") pod \"placement-559fdb5d84-5bqm6\" (UID: \"2b1646b4-4971-4f8c-9198-65ff1e995e5d\") " pod="openstack/placement-559fdb5d84-5bqm6" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.499377 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8gbw\" (UniqueName: \"kubernetes.io/projected/2b1646b4-4971-4f8c-9198-65ff1e995e5d-kube-api-access-p8gbw\") pod \"placement-559fdb5d84-5bqm6\" (UID: \"2b1646b4-4971-4f8c-9198-65ff1e995e5d\") " pod="openstack/placement-559fdb5d84-5bqm6" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.499503 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b1646b4-4971-4f8c-9198-65ff1e995e5d-logs\") pod \"placement-559fdb5d84-5bqm6\" (UID: \"2b1646b4-4971-4f8c-9198-65ff1e995e5d\") " pod="openstack/placement-559fdb5d84-5bqm6" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.505619 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b1646b4-4971-4f8c-9198-65ff1e995e5d-scripts\") pod \"placement-559fdb5d84-5bqm6\" (UID: \"2b1646b4-4971-4f8c-9198-65ff1e995e5d\") " pod="openstack/placement-559fdb5d84-5bqm6" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.506319 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b1646b4-4971-4f8c-9198-65ff1e995e5d-combined-ca-bundle\") pod \"placement-559fdb5d84-5bqm6\" (UID: \"2b1646b4-4971-4f8c-9198-65ff1e995e5d\") " pod="openstack/placement-559fdb5d84-5bqm6" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.519110 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b1646b4-4971-4f8c-9198-65ff1e995e5d-config-data\") pod \"placement-559fdb5d84-5bqm6\" (UID: \"2b1646b4-4971-4f8c-9198-65ff1e995e5d\") " pod="openstack/placement-559fdb5d84-5bqm6" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.529811 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8gbw\" (UniqueName: \"kubernetes.io/projected/2b1646b4-4971-4f8c-9198-65ff1e995e5d-kube-api-access-p8gbw\") pod \"placement-559fdb5d84-5bqm6\" (UID: \"2b1646b4-4971-4f8c-9198-65ff1e995e5d\") " pod="openstack/placement-559fdb5d84-5bqm6" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.598891 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-559fdb5d84-5bqm6" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.678684 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.733244 4972 generic.go:334] "Generic (PLEG): container finished" podID="445cf995-4393-496e-963e-42f1745d0610" containerID="93b3705d8065315e1349d66d89558cf6dc1cddd4b26a7876f76105fabff2f8c9" exitCode=0 Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.733309 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" event={"ID":"445cf995-4393-496e-963e-42f1745d0610","Type":"ContainerDied","Data":"93b3705d8065315e1349d66d89558cf6dc1cddd4b26a7876f76105fabff2f8c9"} Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.733355 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" event={"ID":"445cf995-4393-496e-963e-42f1745d0610","Type":"ContainerDied","Data":"524f22d19b208aff9ede4750280e621e84576bb4069cd5663b56c9289c9dd4f7"} Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.733380 4972 scope.go:117] "RemoveContainer" containerID="93b3705d8065315e1349d66d89558cf6dc1cddd4b26a7876f76105fabff2f8c9" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.733444 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-746ff6c485-ccrh2" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.777227 4972 scope.go:117] "RemoveContainer" containerID="b7dd9381674bd3bb344827ea68c901f52e1f53c09ebb0f24fa62e9c9cb25c151" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.802006 4972 scope.go:117] "RemoveContainer" containerID="93b3705d8065315e1349d66d89558cf6dc1cddd4b26a7876f76105fabff2f8c9" Nov 21 11:14:59 crc kubenswrapper[4972]: E1121 11:14:59.803243 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93b3705d8065315e1349d66d89558cf6dc1cddd4b26a7876f76105fabff2f8c9\": container with ID starting with 93b3705d8065315e1349d66d89558cf6dc1cddd4b26a7876f76105fabff2f8c9 not found: ID does not exist" containerID="93b3705d8065315e1349d66d89558cf6dc1cddd4b26a7876f76105fabff2f8c9" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.804077 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93b3705d8065315e1349d66d89558cf6dc1cddd4b26a7876f76105fabff2f8c9"} err="failed to get container status \"93b3705d8065315e1349d66d89558cf6dc1cddd4b26a7876f76105fabff2f8c9\": rpc error: code = NotFound desc = could not find container \"93b3705d8065315e1349d66d89558cf6dc1cddd4b26a7876f76105fabff2f8c9\": container with ID starting with 93b3705d8065315e1349d66d89558cf6dc1cddd4b26a7876f76105fabff2f8c9 not found: ID does not exist" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.804112 4972 scope.go:117] "RemoveContainer" containerID="b7dd9381674bd3bb344827ea68c901f52e1f53c09ebb0f24fa62e9c9cb25c151" Nov 21 11:14:59 crc kubenswrapper[4972]: E1121 11:14:59.804708 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7dd9381674bd3bb344827ea68c901f52e1f53c09ebb0f24fa62e9c9cb25c151\": container with ID starting with b7dd9381674bd3bb344827ea68c901f52e1f53c09ebb0f24fa62e9c9cb25c151 not found: ID does not exist" containerID="b7dd9381674bd3bb344827ea68c901f52e1f53c09ebb0f24fa62e9c9cb25c151" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.804727 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7dd9381674bd3bb344827ea68c901f52e1f53c09ebb0f24fa62e9c9cb25c151"} err="failed to get container status \"b7dd9381674bd3bb344827ea68c901f52e1f53c09ebb0f24fa62e9c9cb25c151\": rpc error: code = NotFound desc = could not find container \"b7dd9381674bd3bb344827ea68c901f52e1f53c09ebb0f24fa62e9c9cb25c151\": container with ID starting with b7dd9381674bd3bb344827ea68c901f52e1f53c09ebb0f24fa62e9c9cb25c151 not found: ID does not exist" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.809209 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-ovsdbserver-nb\") pod \"445cf995-4393-496e-963e-42f1745d0610\" (UID: \"445cf995-4393-496e-963e-42f1745d0610\") " Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.809257 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghr77\" (UniqueName: \"kubernetes.io/projected/445cf995-4393-496e-963e-42f1745d0610-kube-api-access-ghr77\") pod \"445cf995-4393-496e-963e-42f1745d0610\" (UID: \"445cf995-4393-496e-963e-42f1745d0610\") " Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.809296 4972 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-ovsdbserver-sb\") pod \"445cf995-4393-496e-963e-42f1745d0610\" (UID: \"445cf995-4393-496e-963e-42f1745d0610\") " Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.809360 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-dns-svc\") pod \"445cf995-4393-496e-963e-42f1745d0610\" (UID: \"445cf995-4393-496e-963e-42f1745d0610\") " Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.809458 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-config\") pod \"445cf995-4393-496e-963e-42f1745d0610\" (UID: \"445cf995-4393-496e-963e-42f1745d0610\") " Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.815572 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/445cf995-4393-496e-963e-42f1745d0610-kube-api-access-ghr77" (OuterVolumeSpecName: "kube-api-access-ghr77") pod "445cf995-4393-496e-963e-42f1745d0610" (UID: "445cf995-4393-496e-963e-42f1745d0610"). InnerVolumeSpecName "kube-api-access-ghr77". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.857657 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "445cf995-4393-496e-963e-42f1745d0610" (UID: "445cf995-4393-496e-963e-42f1745d0610"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.858522 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "445cf995-4393-496e-963e-42f1745d0610" (UID: "445cf995-4393-496e-963e-42f1745d0610"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.861224 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-config" (OuterVolumeSpecName: "config") pod "445cf995-4393-496e-963e-42f1745d0610" (UID: "445cf995-4393-496e-963e-42f1745d0610"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.866473 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "445cf995-4393-496e-963e-42f1745d0610" (UID: "445cf995-4393-496e-963e-42f1745d0610"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.911350 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.911384 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-config\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.911395 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.911404 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghr77\" (UniqueName: \"kubernetes.io/projected/445cf995-4393-496e-963e-42f1745d0610-kube-api-access-ghr77\") on node \"crc\" DevicePath \"\"" Nov 21 11:14:59 crc kubenswrapper[4972]: I1121 11:14:59.911413 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/445cf995-4393-496e-963e-42f1745d0610-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.077131 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-746ff6c485-ccrh2"] Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.083129 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-746ff6c485-ccrh2"] Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.102068 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-559fdb5d84-5bqm6"] Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.149586 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395395-ddbg7"] Nov 21 11:15:00 crc kubenswrapper[4972]: E1121 11:15:00.150556 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="445cf995-4393-496e-963e-42f1745d0610" containerName="init" Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.150579 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="445cf995-4393-496e-963e-42f1745d0610" containerName="init" Nov 21 11:15:00 crc kubenswrapper[4972]: E1121 11:15:00.150609 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="445cf995-4393-496e-963e-42f1745d0610" containerName="dnsmasq-dns" Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.150617 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="445cf995-4393-496e-963e-42f1745d0610" containerName="dnsmasq-dns" Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.150787 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="445cf995-4393-496e-963e-42f1745d0610" containerName="dnsmasq-dns" Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.151341 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395395-ddbg7" Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.162396 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.162894 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395395-ddbg7"] Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.165456 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.318755 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c17eb9c-5eb2-4c5b-8594-453d42bf1db9-config-volume\") pod \"collect-profiles-29395395-ddbg7\" (UID: \"8c17eb9c-5eb2-4c5b-8594-453d42bf1db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395395-ddbg7" Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.318806 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw6wr\" (UniqueName: \"kubernetes.io/projected/8c17eb9c-5eb2-4c5b-8594-453d42bf1db9-kube-api-access-lw6wr\") pod \"collect-profiles-29395395-ddbg7\" (UID: \"8c17eb9c-5eb2-4c5b-8594-453d42bf1db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395395-ddbg7" Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.318884 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c17eb9c-5eb2-4c5b-8594-453d42bf1db9-secret-volume\") pod \"collect-profiles-29395395-ddbg7\" (UID: \"8c17eb9c-5eb2-4c5b-8594-453d42bf1db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395395-ddbg7" Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.420631 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c17eb9c-5eb2-4c5b-8594-453d42bf1db9-config-volume\") pod \"collect-profiles-29395395-ddbg7\" (UID: \"8c17eb9c-5eb2-4c5b-8594-453d42bf1db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395395-ddbg7" Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.421122 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw6wr\" (UniqueName: \"kubernetes.io/projected/8c17eb9c-5eb2-4c5b-8594-453d42bf1db9-kube-api-access-lw6wr\") pod \"collect-profiles-29395395-ddbg7\" (UID: \"8c17eb9c-5eb2-4c5b-8594-453d42bf1db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395395-ddbg7" Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.421183 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c17eb9c-5eb2-4c5b-8594-453d42bf1db9-secret-volume\") pod \"collect-profiles-29395395-ddbg7\" (UID: \"8c17eb9c-5eb2-4c5b-8594-453d42bf1db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395395-ddbg7" Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.422191 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c17eb9c-5eb2-4c5b-8594-453d42bf1db9-config-volume\") pod 
\"collect-profiles-29395395-ddbg7\" (UID: \"8c17eb9c-5eb2-4c5b-8594-453d42bf1db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395395-ddbg7" Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.425443 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c17eb9c-5eb2-4c5b-8594-453d42bf1db9-secret-volume\") pod \"collect-profiles-29395395-ddbg7\" (UID: \"8c17eb9c-5eb2-4c5b-8594-453d42bf1db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395395-ddbg7" Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.453626 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw6wr\" (UniqueName: \"kubernetes.io/projected/8c17eb9c-5eb2-4c5b-8594-453d42bf1db9-kube-api-access-lw6wr\") pod \"collect-profiles-29395395-ddbg7\" (UID: \"8c17eb9c-5eb2-4c5b-8594-453d42bf1db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395395-ddbg7" Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.469046 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395395-ddbg7" Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.703462 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395395-ddbg7"] Nov 21 11:15:00 crc kubenswrapper[4972]: W1121 11:15:00.712984 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c17eb9c_5eb2_4c5b_8594_453d42bf1db9.slice/crio-5fc68df960d282330116fefb5beb413e118f0f9a19f61c061e74b498e95e919e WatchSource:0}: Error finding container 5fc68df960d282330116fefb5beb413e118f0f9a19f61c061e74b498e95e919e: Status 404 returned error can't find the container with id 5fc68df960d282330116fefb5beb413e118f0f9a19f61c061e74b498e95e919e Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.743276 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-559fdb5d84-5bqm6" event={"ID":"2b1646b4-4971-4f8c-9198-65ff1e995e5d","Type":"ContainerStarted","Data":"fa46923528586bd1f13c6eac44e4d76d75e621dcfaca414f1e1691154daae536"} Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.743328 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-559fdb5d84-5bqm6" event={"ID":"2b1646b4-4971-4f8c-9198-65ff1e995e5d","Type":"ContainerStarted","Data":"bb8d3a80e1d759be1adaeb04643b43e03e92c51e9c260e52a4dda0bd1a138c93"} Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.743343 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-559fdb5d84-5bqm6" event={"ID":"2b1646b4-4971-4f8c-9198-65ff1e995e5d","Type":"ContainerStarted","Data":"8e1e29b93ac28ea744d14f569be59e1b415b5bee1466c22cefc60987b8d99e83"} Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.744399 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-559fdb5d84-5bqm6" Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.744430 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-559fdb5d84-5bqm6" Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.745336 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395395-ddbg7" 
event={"ID":"8c17eb9c-5eb2-4c5b-8594-453d42bf1db9","Type":"ContainerStarted","Data":"5fc68df960d282330116fefb5beb413e118f0f9a19f61c061e74b498e95e919e"} Nov 21 11:15:00 crc kubenswrapper[4972]: I1121 11:15:00.770808 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-559fdb5d84-5bqm6" podStartSLOduration=1.770786054 podStartE2EDuration="1.770786054s" podCreationTimestamp="2025-11-21 11:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:15:00.761688133 +0000 UTC m=+5645.870830631" watchObservedRunningTime="2025-11-21 11:15:00.770786054 +0000 UTC m=+5645.879928552" Nov 21 11:15:01 crc kubenswrapper[4972]: I1121 11:15:01.763171 4972 generic.go:334] "Generic (PLEG): container finished" podID="8c17eb9c-5eb2-4c5b-8594-453d42bf1db9" containerID="09283e95076cbe8d98ebe99c56c6e8e91ad86db72dd3f1d6c9251f3a81eb9391" exitCode=0 Nov 21 11:15:01 crc kubenswrapper[4972]: I1121 11:15:01.785262 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="445cf995-4393-496e-963e-42f1745d0610" path="/var/lib/kubelet/pods/445cf995-4393-496e-963e-42f1745d0610/volumes" Nov 21 11:15:01 crc kubenswrapper[4972]: I1121 11:15:01.786516 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395395-ddbg7" event={"ID":"8c17eb9c-5eb2-4c5b-8594-453d42bf1db9","Type":"ContainerDied","Data":"09283e95076cbe8d98ebe99c56c6e8e91ad86db72dd3f1d6c9251f3a81eb9391"} Nov 21 11:15:03 crc kubenswrapper[4972]: I1121 11:15:03.214480 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395395-ddbg7" Nov 21 11:15:03 crc kubenswrapper[4972]: I1121 11:15:03.306911 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw6wr\" (UniqueName: \"kubernetes.io/projected/8c17eb9c-5eb2-4c5b-8594-453d42bf1db9-kube-api-access-lw6wr\") pod \"8c17eb9c-5eb2-4c5b-8594-453d42bf1db9\" (UID: \"8c17eb9c-5eb2-4c5b-8594-453d42bf1db9\") " Nov 21 11:15:03 crc kubenswrapper[4972]: I1121 11:15:03.307019 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c17eb9c-5eb2-4c5b-8594-453d42bf1db9-secret-volume\") pod \"8c17eb9c-5eb2-4c5b-8594-453d42bf1db9\" (UID: \"8c17eb9c-5eb2-4c5b-8594-453d42bf1db9\") " Nov 21 11:15:03 crc kubenswrapper[4972]: I1121 11:15:03.307148 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c17eb9c-5eb2-4c5b-8594-453d42bf1db9-config-volume\") pod \"8c17eb9c-5eb2-4c5b-8594-453d42bf1db9\" (UID: \"8c17eb9c-5eb2-4c5b-8594-453d42bf1db9\") " Nov 21 11:15:03 crc kubenswrapper[4972]: I1121 11:15:03.308092 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c17eb9c-5eb2-4c5b-8594-453d42bf1db9-config-volume" (OuterVolumeSpecName: "config-volume") pod "8c17eb9c-5eb2-4c5b-8594-453d42bf1db9" (UID: "8c17eb9c-5eb2-4c5b-8594-453d42bf1db9"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:15:03 crc kubenswrapper[4972]: I1121 11:15:03.320230 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c17eb9c-5eb2-4c5b-8594-453d42bf1db9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8c17eb9c-5eb2-4c5b-8594-453d42bf1db9" (UID: "8c17eb9c-5eb2-4c5b-8594-453d42bf1db9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:15:03 crc kubenswrapper[4972]: I1121 11:15:03.328246 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c17eb9c-5eb2-4c5b-8594-453d42bf1db9-kube-api-access-lw6wr" (OuterVolumeSpecName: "kube-api-access-lw6wr") pod "8c17eb9c-5eb2-4c5b-8594-453d42bf1db9" (UID: "8c17eb9c-5eb2-4c5b-8594-453d42bf1db9"). InnerVolumeSpecName "kube-api-access-lw6wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:15:03 crc kubenswrapper[4972]: I1121 11:15:03.410190 4972 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c17eb9c-5eb2-4c5b-8594-453d42bf1db9-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 11:15:03 crc kubenswrapper[4972]: I1121 11:15:03.410248 4972 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c17eb9c-5eb2-4c5b-8594-453d42bf1db9-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 11:15:03 crc kubenswrapper[4972]: I1121 11:15:03.410271 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw6wr\" (UniqueName: \"kubernetes.io/projected/8c17eb9c-5eb2-4c5b-8594-453d42bf1db9-kube-api-access-lw6wr\") on node \"crc\" DevicePath \"\"" Nov 21 11:15:03 crc kubenswrapper[4972]: I1121 11:15:03.790509 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395395-ddbg7" event={"ID":"8c17eb9c-5eb2-4c5b-8594-453d42bf1db9","Type":"ContainerDied","Data":"5fc68df960d282330116fefb5beb413e118f0f9a19f61c061e74b498e95e919e"} Nov 21 11:15:03 crc kubenswrapper[4972]: I1121 11:15:03.790894 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fc68df960d282330116fefb5beb413e118f0f9a19f61c061e74b498e95e919e" Nov 21 11:15:03 crc kubenswrapper[4972]: I1121 11:15:03.790589 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395395-ddbg7" Nov 21 11:15:04 crc kubenswrapper[4972]: I1121 11:15:04.331066 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395350-m99lw"] Nov 21 11:15:04 crc kubenswrapper[4972]: I1121 11:15:04.345120 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395350-m99lw"] Nov 21 11:15:05 crc kubenswrapper[4972]: I1121 11:15:05.778654 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dba89cec-c76c-4040-8da1-81f2a55f0332" path="/var/lib/kubelet/pods/dba89cec-c76c-4040-8da1-81f2a55f0332/volumes" Nov 21 11:15:26 crc kubenswrapper[4972]: I1121 11:15:26.179186 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:15:26 crc kubenswrapper[4972]: I1121 11:15:26.180057 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:15:27 crc kubenswrapper[4972]: I1121 11:15:27.579009 4972 scope.go:117] "RemoveContainer" containerID="ea97997b580b03c35c3ce2afba3df5d22a4afa054b6c2c2dfff85743f7607c82" Nov 21 11:15:27 crc kubenswrapper[4972]: I1121 11:15:27.617805 4972 scope.go:117] "RemoveContainer" containerID="654a2b3aef63ad326a51c2e672e35464d88ffd577b778285db493f7d532fdddb" Nov 21 11:15:27 crc kubenswrapper[4972]: I1121 11:15:27.654517 4972 scope.go:117] "RemoveContainer" containerID="64a2fa7533a43efce4a82578aae37efa3366ee75d91fab451c5dbdfc68b0033c" Nov 21 11:15:30 crc kubenswrapper[4972]: I1121 11:15:30.684460 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-559fdb5d84-5bqm6" Nov 21 11:15:30 crc kubenswrapper[4972]: I1121 11:15:30.687607 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-559fdb5d84-5bqm6" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.492794 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-tn2cn"] Nov 21 11:15:54 crc kubenswrapper[4972]: E1121 11:15:54.493607 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c17eb9c-5eb2-4c5b-8594-453d42bf1db9" containerName="collect-profiles" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.493620 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c17eb9c-5eb2-4c5b-8594-453d42bf1db9" containerName="collect-profiles" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.493786 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c17eb9c-5eb2-4c5b-8594-453d42bf1db9" containerName="collect-profiles" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.494351 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-tn2cn" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.505376 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tn2cn"] Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.585044 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-k5w5d"] Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.586275 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-k5w5d" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.601336 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-k5w5d"] Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.650499 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch7h9\" (UniqueName: \"kubernetes.io/projected/dc141f37-940b-4971-a219-f63fc76d7489-kube-api-access-ch7h9\") pod \"nova-api-db-create-tn2cn\" (UID: \"dc141f37-940b-4971-a219-f63fc76d7489\") " pod="openstack/nova-api-db-create-tn2cn" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.650563 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc141f37-940b-4971-a219-f63fc76d7489-operator-scripts\") pod \"nova-api-db-create-tn2cn\" (UID: \"dc141f37-940b-4971-a219-f63fc76d7489\") " pod="openstack/nova-api-db-create-tn2cn" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.689168 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-rbsnw"] Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.690865 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rbsnw" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.706423 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-49d4-account-create-gj62l"] Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.707617 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-49d4-account-create-gj62l" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.709123 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.715766 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rbsnw"] Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.743354 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-49d4-account-create-gj62l"] Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.752152 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc141f37-940b-4971-a219-f63fc76d7489-operator-scripts\") pod \"nova-api-db-create-tn2cn\" (UID: \"dc141f37-940b-4971-a219-f63fc76d7489\") " pod="openstack/nova-api-db-create-tn2cn" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.752314 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hl2z\" (UniqueName: \"kubernetes.io/projected/a1756680-029d-4523-935d-48c659172cc2-kube-api-access-2hl2z\") pod \"nova-cell0-db-create-k5w5d\" (UID: \"a1756680-029d-4523-935d-48c659172cc2\") " pod="openstack/nova-cell0-db-create-k5w5d" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.752379 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch7h9\" (UniqueName: \"kubernetes.io/projected/dc141f37-940b-4971-a219-f63fc76d7489-kube-api-access-ch7h9\") pod \"nova-api-db-create-tn2cn\" (UID: \"dc141f37-940b-4971-a219-f63fc76d7489\") " pod="openstack/nova-api-db-create-tn2cn" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.752405 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1756680-029d-4523-935d-48c659172cc2-operator-scripts\") pod \"nova-cell0-db-create-k5w5d\" (UID: \"a1756680-029d-4523-935d-48c659172cc2\") " pod="openstack/nova-cell0-db-create-k5w5d" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.753092 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc141f37-940b-4971-a219-f63fc76d7489-operator-scripts\") pod \"nova-api-db-create-tn2cn\" (UID: \"dc141f37-940b-4971-a219-f63fc76d7489\") " pod="openstack/nova-api-db-create-tn2cn" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.774206 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch7h9\" (UniqueName: \"kubernetes.io/projected/dc141f37-940b-4971-a219-f63fc76d7489-kube-api-access-ch7h9\") pod \"nova-api-db-create-tn2cn\" (UID: \"dc141f37-940b-4971-a219-f63fc76d7489\") " pod="openstack/nova-api-db-create-tn2cn" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.810938 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-tn2cn" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.853742 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spxp7\" (UniqueName: \"kubernetes.io/projected/1a67f3f2-1df1-4150-b686-397ce5c67721-kube-api-access-spxp7\") pod \"nova-cell1-db-create-rbsnw\" (UID: \"1a67f3f2-1df1-4150-b686-397ce5c67721\") " pod="openstack/nova-cell1-db-create-rbsnw" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.853789 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a67f3f2-1df1-4150-b686-397ce5c67721-operator-scripts\") pod \"nova-cell1-db-create-rbsnw\" (UID: \"1a67f3f2-1df1-4150-b686-397ce5c67721\") " pod="openstack/nova-cell1-db-create-rbsnw" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.853826 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hl2z\" (UniqueName: \"kubernetes.io/projected/a1756680-029d-4523-935d-48c659172cc2-kube-api-access-2hl2z\") pod \"nova-cell0-db-create-k5w5d\" (UID: \"a1756680-029d-4523-935d-48c659172cc2\") " pod="openstack/nova-cell0-db-create-k5w5d" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.853932 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1756680-029d-4523-935d-48c659172cc2-operator-scripts\") pod \"nova-cell0-db-create-k5w5d\" (UID: \"a1756680-029d-4523-935d-48c659172cc2\") " pod="openstack/nova-cell0-db-create-k5w5d" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.854488 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a0d08f4-50a4-417d-aa99-c18f80db60d8-operator-scripts\") pod \"nova-api-49d4-account-create-gj62l\" (UID: \"5a0d08f4-50a4-417d-aa99-c18f80db60d8\") " pod="openstack/nova-api-49d4-account-create-gj62l" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.854526 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx69w\" (UniqueName: \"kubernetes.io/projected/5a0d08f4-50a4-417d-aa99-c18f80db60d8-kube-api-access-fx69w\") pod \"nova-api-49d4-account-create-gj62l\" (UID: \"5a0d08f4-50a4-417d-aa99-c18f80db60d8\") " pod="openstack/nova-api-49d4-account-create-gj62l" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.854660 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1756680-029d-4523-935d-48c659172cc2-operator-scripts\") pod \"nova-cell0-db-create-k5w5d\" (UID: \"a1756680-029d-4523-935d-48c659172cc2\") " pod="openstack/nova-cell0-db-create-k5w5d" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.878039 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hl2z\" (UniqueName: \"kubernetes.io/projected/a1756680-029d-4523-935d-48c659172cc2-kube-api-access-2hl2z\") pod \"nova-cell0-db-create-k5w5d\" (UID: \"a1756680-029d-4523-935d-48c659172cc2\") " pod="openstack/nova-cell0-db-create-k5w5d" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.898655 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1013-account-create-7s4nl"] Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.901791 4972 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-k5w5d" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.908192 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1013-account-create-7s4nl" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.908886 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1013-account-create-7s4nl"] Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.910669 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.955953 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a0d08f4-50a4-417d-aa99-c18f80db60d8-operator-scripts\") pod \"nova-api-49d4-account-create-gj62l\" (UID: \"5a0d08f4-50a4-417d-aa99-c18f80db60d8\") " pod="openstack/nova-api-49d4-account-create-gj62l" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.956148 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx69w\" (UniqueName: \"kubernetes.io/projected/5a0d08f4-50a4-417d-aa99-c18f80db60d8-kube-api-access-fx69w\") pod \"nova-api-49d4-account-create-gj62l\" (UID: \"5a0d08f4-50a4-417d-aa99-c18f80db60d8\") " pod="openstack/nova-api-49d4-account-create-gj62l" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.956225 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3202a840-5409-4ab4-8905-994551c69dd8-operator-scripts\") pod \"nova-cell0-1013-account-create-7s4nl\" (UID: \"3202a840-5409-4ab4-8905-994551c69dd8\") " pod="openstack/nova-cell0-1013-account-create-7s4nl" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.956245 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spxp7\" (UniqueName: \"kubernetes.io/projected/1a67f3f2-1df1-4150-b686-397ce5c67721-kube-api-access-spxp7\") pod \"nova-cell1-db-create-rbsnw\" (UID: \"1a67f3f2-1df1-4150-b686-397ce5c67721\") " pod="openstack/nova-cell1-db-create-rbsnw" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.956272 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a67f3f2-1df1-4150-b686-397ce5c67721-operator-scripts\") pod \"nova-cell1-db-create-rbsnw\" (UID: \"1a67f3f2-1df1-4150-b686-397ce5c67721\") " pod="openstack/nova-cell1-db-create-rbsnw" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.956313 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6t25\" (UniqueName: \"kubernetes.io/projected/3202a840-5409-4ab4-8905-994551c69dd8-kube-api-access-r6t25\") pod \"nova-cell0-1013-account-create-7s4nl\" (UID: \"3202a840-5409-4ab4-8905-994551c69dd8\") " pod="openstack/nova-cell0-1013-account-create-7s4nl" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.957155 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a0d08f4-50a4-417d-aa99-c18f80db60d8-operator-scripts\") pod \"nova-api-49d4-account-create-gj62l\" (UID: \"5a0d08f4-50a4-417d-aa99-c18f80db60d8\") " pod="openstack/nova-api-49d4-account-create-gj62l" Nov 21 11:15:54 crc kubenswrapper[4972]: 
I1121 11:15:54.957564 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a67f3f2-1df1-4150-b686-397ce5c67721-operator-scripts\") pod \"nova-cell1-db-create-rbsnw\" (UID: \"1a67f3f2-1df1-4150-b686-397ce5c67721\") " pod="openstack/nova-cell1-db-create-rbsnw" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.978146 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx69w\" (UniqueName: \"kubernetes.io/projected/5a0d08f4-50a4-417d-aa99-c18f80db60d8-kube-api-access-fx69w\") pod \"nova-api-49d4-account-create-gj62l\" (UID: \"5a0d08f4-50a4-417d-aa99-c18f80db60d8\") " pod="openstack/nova-api-49d4-account-create-gj62l" Nov 21 11:15:54 crc kubenswrapper[4972]: I1121 11:15:54.978241 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spxp7\" (UniqueName: \"kubernetes.io/projected/1a67f3f2-1df1-4150-b686-397ce5c67721-kube-api-access-spxp7\") pod \"nova-cell1-db-create-rbsnw\" (UID: \"1a67f3f2-1df1-4150-b686-397ce5c67721\") " pod="openstack/nova-cell1-db-create-rbsnw" Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.005792 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rbsnw" Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.035363 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-49d4-account-create-gj62l" Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.057406 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3202a840-5409-4ab4-8905-994551c69dd8-operator-scripts\") pod \"nova-cell0-1013-account-create-7s4nl\" (UID: \"3202a840-5409-4ab4-8905-994551c69dd8\") " pod="openstack/nova-cell0-1013-account-create-7s4nl" Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.057471 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6t25\" (UniqueName: \"kubernetes.io/projected/3202a840-5409-4ab4-8905-994551c69dd8-kube-api-access-r6t25\") pod \"nova-cell0-1013-account-create-7s4nl\" (UID: \"3202a840-5409-4ab4-8905-994551c69dd8\") " pod="openstack/nova-cell0-1013-account-create-7s4nl" Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.058181 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3202a840-5409-4ab4-8905-994551c69dd8-operator-scripts\") pod \"nova-cell0-1013-account-create-7s4nl\" (UID: \"3202a840-5409-4ab4-8905-994551c69dd8\") " pod="openstack/nova-cell0-1013-account-create-7s4nl" Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.080005 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6t25\" (UniqueName: \"kubernetes.io/projected/3202a840-5409-4ab4-8905-994551c69dd8-kube-api-access-r6t25\") pod \"nova-cell0-1013-account-create-7s4nl\" (UID: \"3202a840-5409-4ab4-8905-994551c69dd8\") " pod="openstack/nova-cell0-1013-account-create-7s4nl" Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.099928 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-40e0-account-create-bqmgl"] Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.107344 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-40e0-account-create-bqmgl" Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.109995 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.112866 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-40e0-account-create-bqmgl"] Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.261973 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8xx5\" (UniqueName: \"kubernetes.io/projected/a32d0c1a-b8ac-4785-aea6-b9a76765b559-kube-api-access-c8xx5\") pod \"nova-cell1-40e0-account-create-bqmgl\" (UID: \"a32d0c1a-b8ac-4785-aea6-b9a76765b559\") " pod="openstack/nova-cell1-40e0-account-create-bqmgl" Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.262081 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a32d0c1a-b8ac-4785-aea6-b9a76765b559-operator-scripts\") pod \"nova-cell1-40e0-account-create-bqmgl\" (UID: \"a32d0c1a-b8ac-4785-aea6-b9a76765b559\") " pod="openstack/nova-cell1-40e0-account-create-bqmgl" Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.297467 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1013-account-create-7s4nl" Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.326173 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tn2cn"] Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.363582 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a32d0c1a-b8ac-4785-aea6-b9a76765b559-operator-scripts\") pod \"nova-cell1-40e0-account-create-bqmgl\" (UID: \"a32d0c1a-b8ac-4785-aea6-b9a76765b559\") " pod="openstack/nova-cell1-40e0-account-create-bqmgl" Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.363746 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8xx5\" (UniqueName: \"kubernetes.io/projected/a32d0c1a-b8ac-4785-aea6-b9a76765b559-kube-api-access-c8xx5\") pod \"nova-cell1-40e0-account-create-bqmgl\" (UID: \"a32d0c1a-b8ac-4785-aea6-b9a76765b559\") " pod="openstack/nova-cell1-40e0-account-create-bqmgl" Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.364331 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a32d0c1a-b8ac-4785-aea6-b9a76765b559-operator-scripts\") pod \"nova-cell1-40e0-account-create-bqmgl\" (UID: \"a32d0c1a-b8ac-4785-aea6-b9a76765b559\") " pod="openstack/nova-cell1-40e0-account-create-bqmgl" Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.378718 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8xx5\" (UniqueName: \"kubernetes.io/projected/a32d0c1a-b8ac-4785-aea6-b9a76765b559-kube-api-access-c8xx5\") pod \"nova-cell1-40e0-account-create-bqmgl\" (UID: \"a32d0c1a-b8ac-4785-aea6-b9a76765b559\") " pod="openstack/nova-cell1-40e0-account-create-bqmgl" Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.434011 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-k5w5d"] Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.436423 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-40e0-account-create-bqmgl" Nov 21 11:15:55 crc kubenswrapper[4972]: W1121 11:15:55.442391 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1756680_029d_4523_935d_48c659172cc2.slice/crio-74818b99464af1c7a12bdd13c787d46c4d8f0b6de2691d7aaeb2e38ce495fe45 WatchSource:0}: Error finding container 74818b99464af1c7a12bdd13c787d46c4d8f0b6de2691d7aaeb2e38ce495fe45: Status 404 returned error can't find the container with id 74818b99464af1c7a12bdd13c787d46c4d8f0b6de2691d7aaeb2e38ce495fe45 Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.509652 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rbsnw"] Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.553321 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1013-account-create-7s4nl"] Nov 21 11:15:55 crc kubenswrapper[4972]: W1121 11:15:55.560504 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3202a840_5409_4ab4_8905_994551c69dd8.slice/crio-91b15c86fa16628fa995bf6b00a250178aa267e8eb53d4e1d92cedf9a54bbb2a WatchSource:0}: Error finding container 91b15c86fa16628fa995bf6b00a250178aa267e8eb53d4e1d92cedf9a54bbb2a: Status 404 returned error can't find the container with id 91b15c86fa16628fa995bf6b00a250178aa267e8eb53d4e1d92cedf9a54bbb2a Nov 21 11:15:55 crc kubenswrapper[4972]: I1121 11:15:55.588151 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-49d4-account-create-gj62l"] Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:55.974327 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-40e0-account-create-bqmgl"] Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:55.983353 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.179269 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.179341 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.179398 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.180032 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.180103 4972 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" gracePeriod=600 Nov 21 11:15:56 crc kubenswrapper[4972]: E1121 11:15:56.322786 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.343714 4972 generic.go:334] "Generic (PLEG): container finished" podID="1a67f3f2-1df1-4150-b686-397ce5c67721" containerID="aa1611f54350d3bb2eefd79418e24029f6d7059877fc6e744d2a0c523b308e96" exitCode=0 Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.343781 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rbsnw" event={"ID":"1a67f3f2-1df1-4150-b686-397ce5c67721","Type":"ContainerDied","Data":"aa1611f54350d3bb2eefd79418e24029f6d7059877fc6e744d2a0c523b308e96"} Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.343812 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rbsnw" event={"ID":"1a67f3f2-1df1-4150-b686-397ce5c67721","Type":"ContainerStarted","Data":"747a01e1315a2dd3a2f255f9d2ebff0e161e12e9b3706166ea7d345bd98596bb"} Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.345723 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-40e0-account-create-bqmgl" event={"ID":"a32d0c1a-b8ac-4785-aea6-b9a76765b559","Type":"ContainerStarted","Data":"c802838539c224782722e2df20db7b3eb6f1d0986f5ef9849d72ed3b8775cb59"} Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.349064 4972 generic.go:334] "Generic (PLEG): container finished" podID="3202a840-5409-4ab4-8905-994551c69dd8" containerID="585d35533198d77197bb6e79bd5aa5dc2fa72cc3cabd84fc2d52d2db35f47e83" exitCode=0 Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.349144 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1013-account-create-7s4nl" event={"ID":"3202a840-5409-4ab4-8905-994551c69dd8","Type":"ContainerDied","Data":"585d35533198d77197bb6e79bd5aa5dc2fa72cc3cabd84fc2d52d2db35f47e83"} Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.349164 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1013-account-create-7s4nl" event={"ID":"3202a840-5409-4ab4-8905-994551c69dd8","Type":"ContainerStarted","Data":"91b15c86fa16628fa995bf6b00a250178aa267e8eb53d4e1d92cedf9a54bbb2a"} Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.351333 4972 generic.go:334] "Generic (PLEG): container finished" podID="dc141f37-940b-4971-a219-f63fc76d7489" containerID="eabc9c7cb994c59b59ed7c9793f3bd33583370edbbc9fccfd3711edca267d83d" exitCode=0 Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.351385 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tn2cn" event={"ID":"dc141f37-940b-4971-a219-f63fc76d7489","Type":"ContainerDied","Data":"eabc9c7cb994c59b59ed7c9793f3bd33583370edbbc9fccfd3711edca267d83d"} Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.351403 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-db-create-tn2cn" event={"ID":"dc141f37-940b-4971-a219-f63fc76d7489","Type":"ContainerStarted","Data":"3292de926f56b6b5a4e8c36f4c2ce6eb42c3fa35f2a17c04bf475a60e7b6a65e"} Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.354258 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" exitCode=0 Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.354360 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1"} Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.354399 4972 scope.go:117] "RemoveContainer" containerID="24e4e8c91bec69fac6579b1048275d2e2e1a69f272656a33d0af882dd887ca1f" Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.355157 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:15:56 crc kubenswrapper[4972]: E1121 11:15:56.355435 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.364086 4972 generic.go:334] "Generic (PLEG): container finished" podID="5a0d08f4-50a4-417d-aa99-c18f80db60d8" containerID="2ac79c90e11a93881e44b12a64e4b7fe99824c2949d7bd4e8f7380ffe95d6500" exitCode=0 Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.364165 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-49d4-account-create-gj62l" event={"ID":"5a0d08f4-50a4-417d-aa99-c18f80db60d8","Type":"ContainerDied","Data":"2ac79c90e11a93881e44b12a64e4b7fe99824c2949d7bd4e8f7380ffe95d6500"} Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.364599 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-49d4-account-create-gj62l" event={"ID":"5a0d08f4-50a4-417d-aa99-c18f80db60d8","Type":"ContainerStarted","Data":"54ba7c73a81297903fe20a7d3cf5b76105557673a5c031cf2b9f90ac674b8d77"} Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.365998 4972 generic.go:334] "Generic (PLEG): container finished" podID="a1756680-029d-4523-935d-48c659172cc2" containerID="87377a21a0caa5160ffbfe980690cdb412d303c1fb285c4fb153d814f435eb96" exitCode=0 Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.366028 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-k5w5d" event={"ID":"a1756680-029d-4523-935d-48c659172cc2","Type":"ContainerDied","Data":"87377a21a0caa5160ffbfe980690cdb412d303c1fb285c4fb153d814f435eb96"} Nov 21 11:15:56 crc kubenswrapper[4972]: I1121 11:15:56.366046 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-k5w5d" event={"ID":"a1756680-029d-4523-935d-48c659172cc2","Type":"ContainerStarted","Data":"74818b99464af1c7a12bdd13c787d46c4d8f0b6de2691d7aaeb2e38ce495fe45"} Nov 21 11:15:57 crc kubenswrapper[4972]: I1121 11:15:57.376655 4972 generic.go:334] "Generic (PLEG): container finished" 
podID="a32d0c1a-b8ac-4785-aea6-b9a76765b559" containerID="eb35947f3c5d99fa806dab5337412caa70c41f5ac9689e1a2ff8d39dd1d89601" exitCode=0 Nov 21 11:15:57 crc kubenswrapper[4972]: I1121 11:15:57.376726 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-40e0-account-create-bqmgl" event={"ID":"a32d0c1a-b8ac-4785-aea6-b9a76765b559","Type":"ContainerDied","Data":"eb35947f3c5d99fa806dab5337412caa70c41f5ac9689e1a2ff8d39dd1d89601"} Nov 21 11:15:57 crc kubenswrapper[4972]: I1121 11:15:57.727475 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rbsnw" Nov 21 11:15:57 crc kubenswrapper[4972]: I1121 11:15:57.923395 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a67f3f2-1df1-4150-b686-397ce5c67721-operator-scripts\") pod \"1a67f3f2-1df1-4150-b686-397ce5c67721\" (UID: \"1a67f3f2-1df1-4150-b686-397ce5c67721\") " Nov 21 11:15:57 crc kubenswrapper[4972]: I1121 11:15:57.923697 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spxp7\" (UniqueName: \"kubernetes.io/projected/1a67f3f2-1df1-4150-b686-397ce5c67721-kube-api-access-spxp7\") pod \"1a67f3f2-1df1-4150-b686-397ce5c67721\" (UID: \"1a67f3f2-1df1-4150-b686-397ce5c67721\") " Nov 21 11:15:57 crc kubenswrapper[4972]: I1121 11:15:57.924236 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a67f3f2-1df1-4150-b686-397ce5c67721-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1a67f3f2-1df1-4150-b686-397ce5c67721" (UID: "1a67f3f2-1df1-4150-b686-397ce5c67721"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:15:57 crc kubenswrapper[4972]: I1121 11:15:57.931985 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a67f3f2-1df1-4150-b686-397ce5c67721-kube-api-access-spxp7" (OuterVolumeSpecName: "kube-api-access-spxp7") pod "1a67f3f2-1df1-4150-b686-397ce5c67721" (UID: "1a67f3f2-1df1-4150-b686-397ce5c67721"). InnerVolumeSpecName "kube-api-access-spxp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:15:57 crc kubenswrapper[4972]: I1121 11:15:57.994315 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-49d4-account-create-gj62l" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.001142 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-k5w5d" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.013754 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tn2cn" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.018794 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1013-account-create-7s4nl" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.034094 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spxp7\" (UniqueName: \"kubernetes.io/projected/1a67f3f2-1df1-4150-b686-397ce5c67721-kube-api-access-spxp7\") on node \"crc\" DevicePath \"\"" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.034144 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a67f3f2-1df1-4150-b686-397ce5c67721-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.135086 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc141f37-940b-4971-a219-f63fc76d7489-operator-scripts\") pod \"dc141f37-940b-4971-a219-f63fc76d7489\" (UID: \"dc141f37-940b-4971-a219-f63fc76d7489\") " Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.135435 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a0d08f4-50a4-417d-aa99-c18f80db60d8-operator-scripts\") pod \"5a0d08f4-50a4-417d-aa99-c18f80db60d8\" (UID: \"5a0d08f4-50a4-417d-aa99-c18f80db60d8\") " Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.135531 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1756680-029d-4523-935d-48c659172cc2-operator-scripts\") pod \"a1756680-029d-4523-935d-48c659172cc2\" (UID: \"a1756680-029d-4523-935d-48c659172cc2\") " Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.135570 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6t25\" (UniqueName: \"kubernetes.io/projected/3202a840-5409-4ab4-8905-994551c69dd8-kube-api-access-r6t25\") pod \"3202a840-5409-4ab4-8905-994551c69dd8\" (UID: \"3202a840-5409-4ab4-8905-994551c69dd8\") " Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.135593 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx69w\" (UniqueName: \"kubernetes.io/projected/5a0d08f4-50a4-417d-aa99-c18f80db60d8-kube-api-access-fx69w\") pod \"5a0d08f4-50a4-417d-aa99-c18f80db60d8\" (UID: \"5a0d08f4-50a4-417d-aa99-c18f80db60d8\") " Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.135664 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hl2z\" (UniqueName: \"kubernetes.io/projected/a1756680-029d-4523-935d-48c659172cc2-kube-api-access-2hl2z\") pod \"a1756680-029d-4523-935d-48c659172cc2\" (UID: \"a1756680-029d-4523-935d-48c659172cc2\") " Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.135699 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch7h9\" (UniqueName: \"kubernetes.io/projected/dc141f37-940b-4971-a219-f63fc76d7489-kube-api-access-ch7h9\") pod \"dc141f37-940b-4971-a219-f63fc76d7489\" (UID: \"dc141f37-940b-4971-a219-f63fc76d7489\") " Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.135771 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3202a840-5409-4ab4-8905-994551c69dd8-operator-scripts\") pod \"3202a840-5409-4ab4-8905-994551c69dd8\" (UID: \"3202a840-5409-4ab4-8905-994551c69dd8\") " Nov 21 11:15:58 
crc kubenswrapper[4972]: I1121 11:15:58.135973 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a0d08f4-50a4-417d-aa99-c18f80db60d8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a0d08f4-50a4-417d-aa99-c18f80db60d8" (UID: "5a0d08f4-50a4-417d-aa99-c18f80db60d8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.136540 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3202a840-5409-4ab4-8905-994551c69dd8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3202a840-5409-4ab4-8905-994551c69dd8" (UID: "3202a840-5409-4ab4-8905-994551c69dd8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.136648 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc141f37-940b-4971-a219-f63fc76d7489-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc141f37-940b-4971-a219-f63fc76d7489" (UID: "dc141f37-940b-4971-a219-f63fc76d7489"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.136954 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3202a840-5409-4ab4-8905-994551c69dd8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.136980 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc141f37-940b-4971-a219-f63fc76d7489-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.136993 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a0d08f4-50a4-417d-aa99-c18f80db60d8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.137345 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1756680-029d-4523-935d-48c659172cc2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a1756680-029d-4523-935d-48c659172cc2" (UID: "a1756680-029d-4523-935d-48c659172cc2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.139630 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a0d08f4-50a4-417d-aa99-c18f80db60d8-kube-api-access-fx69w" (OuterVolumeSpecName: "kube-api-access-fx69w") pod "5a0d08f4-50a4-417d-aa99-c18f80db60d8" (UID: "5a0d08f4-50a4-417d-aa99-c18f80db60d8"). InnerVolumeSpecName "kube-api-access-fx69w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.139858 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1756680-029d-4523-935d-48c659172cc2-kube-api-access-2hl2z" (OuterVolumeSpecName: "kube-api-access-2hl2z") pod "a1756680-029d-4523-935d-48c659172cc2" (UID: "a1756680-029d-4523-935d-48c659172cc2"). InnerVolumeSpecName "kube-api-access-2hl2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.140306 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc141f37-940b-4971-a219-f63fc76d7489-kube-api-access-ch7h9" (OuterVolumeSpecName: "kube-api-access-ch7h9") pod "dc141f37-940b-4971-a219-f63fc76d7489" (UID: "dc141f37-940b-4971-a219-f63fc76d7489"). InnerVolumeSpecName "kube-api-access-ch7h9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.140488 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3202a840-5409-4ab4-8905-994551c69dd8-kube-api-access-r6t25" (OuterVolumeSpecName: "kube-api-access-r6t25") pod "3202a840-5409-4ab4-8905-994551c69dd8" (UID: "3202a840-5409-4ab4-8905-994551c69dd8"). InnerVolumeSpecName "kube-api-access-r6t25". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.239263 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1756680-029d-4523-935d-48c659172cc2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.239321 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6t25\" (UniqueName: \"kubernetes.io/projected/3202a840-5409-4ab4-8905-994551c69dd8-kube-api-access-r6t25\") on node \"crc\" DevicePath \"\"" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.239339 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx69w\" (UniqueName: \"kubernetes.io/projected/5a0d08f4-50a4-417d-aa99-c18f80db60d8-kube-api-access-fx69w\") on node \"crc\" DevicePath \"\"" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.239371 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hl2z\" (UniqueName: \"kubernetes.io/projected/a1756680-029d-4523-935d-48c659172cc2-kube-api-access-2hl2z\") on node \"crc\" DevicePath \"\"" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.239386 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch7h9\" (UniqueName: \"kubernetes.io/projected/dc141f37-940b-4971-a219-f63fc76d7489-kube-api-access-ch7h9\") on node \"crc\" DevicePath \"\"" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.394271 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tn2cn" event={"ID":"dc141f37-940b-4971-a219-f63fc76d7489","Type":"ContainerDied","Data":"3292de926f56b6b5a4e8c36f4c2ce6eb42c3fa35f2a17c04bf475a60e7b6a65e"} Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.394351 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3292de926f56b6b5a4e8c36f4c2ce6eb42c3fa35f2a17c04bf475a60e7b6a65e" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.394294 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tn2cn" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.397791 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-49d4-account-create-gj62l" event={"ID":"5a0d08f4-50a4-417d-aa99-c18f80db60d8","Type":"ContainerDied","Data":"54ba7c73a81297903fe20a7d3cf5b76105557673a5c031cf2b9f90ac674b8d77"} Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.397839 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-49d4-account-create-gj62l" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.397858 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54ba7c73a81297903fe20a7d3cf5b76105557673a5c031cf2b9f90ac674b8d77" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.400456 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-k5w5d" event={"ID":"a1756680-029d-4523-935d-48c659172cc2","Type":"ContainerDied","Data":"74818b99464af1c7a12bdd13c787d46c4d8f0b6de2691d7aaeb2e38ce495fe45"} Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.400533 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74818b99464af1c7a12bdd13c787d46c4d8f0b6de2691d7aaeb2e38ce495fe45" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.400489 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-k5w5d" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.402758 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rbsnw" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.402885 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rbsnw" event={"ID":"1a67f3f2-1df1-4150-b686-397ce5c67721","Type":"ContainerDied","Data":"747a01e1315a2dd3a2f255f9d2ebff0e161e12e9b3706166ea7d345bd98596bb"} Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.402917 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="747a01e1315a2dd3a2f255f9d2ebff0e161e12e9b3706166ea7d345bd98596bb" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.405236 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1013-account-create-7s4nl" event={"ID":"3202a840-5409-4ab4-8905-994551c69dd8","Type":"ContainerDied","Data":"91b15c86fa16628fa995bf6b00a250178aa267e8eb53d4e1d92cedf9a54bbb2a"} Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.405290 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91b15c86fa16628fa995bf6b00a250178aa267e8eb53d4e1d92cedf9a54bbb2a" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.405354 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1013-account-create-7s4nl" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.825491 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-40e0-account-create-bqmgl" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.864526 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a32d0c1a-b8ac-4785-aea6-b9a76765b559-operator-scripts\") pod \"a32d0c1a-b8ac-4785-aea6-b9a76765b559\" (UID: \"a32d0c1a-b8ac-4785-aea6-b9a76765b559\") " Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.865025 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8xx5\" (UniqueName: \"kubernetes.io/projected/a32d0c1a-b8ac-4785-aea6-b9a76765b559-kube-api-access-c8xx5\") pod \"a32d0c1a-b8ac-4785-aea6-b9a76765b559\" (UID: \"a32d0c1a-b8ac-4785-aea6-b9a76765b559\") " Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.865557 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a32d0c1a-b8ac-4785-aea6-b9a76765b559-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a32d0c1a-b8ac-4785-aea6-b9a76765b559" (UID: "a32d0c1a-b8ac-4785-aea6-b9a76765b559"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.865874 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a32d0c1a-b8ac-4785-aea6-b9a76765b559-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.872139 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a32d0c1a-b8ac-4785-aea6-b9a76765b559-kube-api-access-c8xx5" (OuterVolumeSpecName: "kube-api-access-c8xx5") pod "a32d0c1a-b8ac-4785-aea6-b9a76765b559" (UID: "a32d0c1a-b8ac-4785-aea6-b9a76765b559"). InnerVolumeSpecName "kube-api-access-c8xx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:15:58 crc kubenswrapper[4972]: I1121 11:15:58.967142 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8xx5\" (UniqueName: \"kubernetes.io/projected/a32d0c1a-b8ac-4785-aea6-b9a76765b559-kube-api-access-c8xx5\") on node \"crc\" DevicePath \"\"" Nov 21 11:15:59 crc kubenswrapper[4972]: I1121 11:15:59.420781 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-40e0-account-create-bqmgl" event={"ID":"a32d0c1a-b8ac-4785-aea6-b9a76765b559","Type":"ContainerDied","Data":"c802838539c224782722e2df20db7b3eb6f1d0986f5ef9849d72ed3b8775cb59"} Nov 21 11:15:59 crc kubenswrapper[4972]: I1121 11:15:59.420872 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c802838539c224782722e2df20db7b3eb6f1d0986f5ef9849d72ed3b8775cb59" Nov 21 11:15:59 crc kubenswrapper[4972]: I1121 11:15:59.421004 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-40e0-account-create-bqmgl" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.189417 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ptxgz"] Nov 21 11:16:00 crc kubenswrapper[4972]: E1121 11:16:00.189962 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3202a840-5409-4ab4-8905-994551c69dd8" containerName="mariadb-account-create" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.189991 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="3202a840-5409-4ab4-8905-994551c69dd8" containerName="mariadb-account-create" Nov 21 11:16:00 crc kubenswrapper[4972]: E1121 11:16:00.190015 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1756680-029d-4523-935d-48c659172cc2" containerName="mariadb-database-create" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.190028 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1756680-029d-4523-935d-48c659172cc2" containerName="mariadb-database-create" Nov 21 11:16:00 crc kubenswrapper[4972]: E1121 11:16:00.190070 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a0d08f4-50a4-417d-aa99-c18f80db60d8" containerName="mariadb-account-create" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.190081 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a0d08f4-50a4-417d-aa99-c18f80db60d8" containerName="mariadb-account-create" Nov 21 11:16:00 crc kubenswrapper[4972]: E1121 11:16:00.190093 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a32d0c1a-b8ac-4785-aea6-b9a76765b559" containerName="mariadb-account-create" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.190103 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="a32d0c1a-b8ac-4785-aea6-b9a76765b559" containerName="mariadb-account-create" Nov 21 11:16:00 crc kubenswrapper[4972]: E1121 11:16:00.190125 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a67f3f2-1df1-4150-b686-397ce5c67721" containerName="mariadb-database-create" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.190135 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a67f3f2-1df1-4150-b686-397ce5c67721" containerName="mariadb-database-create" Nov 21 11:16:00 crc kubenswrapper[4972]: E1121 11:16:00.190165 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc141f37-940b-4971-a219-f63fc76d7489" containerName="mariadb-database-create" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.190176 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc141f37-940b-4971-a219-f63fc76d7489" containerName="mariadb-database-create" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.190463 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="a32d0c1a-b8ac-4785-aea6-b9a76765b559" containerName="mariadb-account-create" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.190487 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="3202a840-5409-4ab4-8905-994551c69dd8" containerName="mariadb-account-create" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.190509 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc141f37-940b-4971-a219-f63fc76d7489" containerName="mariadb-database-create" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.190532 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a0d08f4-50a4-417d-aa99-c18f80db60d8" 
containerName="mariadb-account-create" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.190559 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1756680-029d-4523-935d-48c659172cc2" containerName="mariadb-database-create" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.190578 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a67f3f2-1df1-4150-b686-397ce5c67721" containerName="mariadb-database-create" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.191592 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ptxgz" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.195483 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.195809 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fdcsr" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.196231 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.208993 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ptxgz"] Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.295817 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7kmh\" (UniqueName: \"kubernetes.io/projected/189cef8d-65b6-4c2d-a7db-313fa6399a08-kube-api-access-s7kmh\") pod \"nova-cell0-conductor-db-sync-ptxgz\" (UID: \"189cef8d-65b6-4c2d-a7db-313fa6399a08\") " pod="openstack/nova-cell0-conductor-db-sync-ptxgz" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.295880 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189cef8d-65b6-4c2d-a7db-313fa6399a08-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ptxgz\" (UID: \"189cef8d-65b6-4c2d-a7db-313fa6399a08\") " pod="openstack/nova-cell0-conductor-db-sync-ptxgz" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.295914 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/189cef8d-65b6-4c2d-a7db-313fa6399a08-config-data\") pod \"nova-cell0-conductor-db-sync-ptxgz\" (UID: \"189cef8d-65b6-4c2d-a7db-313fa6399a08\") " pod="openstack/nova-cell0-conductor-db-sync-ptxgz" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.295929 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/189cef8d-65b6-4c2d-a7db-313fa6399a08-scripts\") pod \"nova-cell0-conductor-db-sync-ptxgz\" (UID: \"189cef8d-65b6-4c2d-a7db-313fa6399a08\") " pod="openstack/nova-cell0-conductor-db-sync-ptxgz" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.398026 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7kmh\" (UniqueName: \"kubernetes.io/projected/189cef8d-65b6-4c2d-a7db-313fa6399a08-kube-api-access-s7kmh\") pod \"nova-cell0-conductor-db-sync-ptxgz\" (UID: \"189cef8d-65b6-4c2d-a7db-313fa6399a08\") " pod="openstack/nova-cell0-conductor-db-sync-ptxgz" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.398085 4972 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189cef8d-65b6-4c2d-a7db-313fa6399a08-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ptxgz\" (UID: \"189cef8d-65b6-4c2d-a7db-313fa6399a08\") " pod="openstack/nova-cell0-conductor-db-sync-ptxgz" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.398124 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/189cef8d-65b6-4c2d-a7db-313fa6399a08-config-data\") pod \"nova-cell0-conductor-db-sync-ptxgz\" (UID: \"189cef8d-65b6-4c2d-a7db-313fa6399a08\") " pod="openstack/nova-cell0-conductor-db-sync-ptxgz" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.398146 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/189cef8d-65b6-4c2d-a7db-313fa6399a08-scripts\") pod \"nova-cell0-conductor-db-sync-ptxgz\" (UID: \"189cef8d-65b6-4c2d-a7db-313fa6399a08\") " pod="openstack/nova-cell0-conductor-db-sync-ptxgz" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.402754 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189cef8d-65b6-4c2d-a7db-313fa6399a08-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ptxgz\" (UID: \"189cef8d-65b6-4c2d-a7db-313fa6399a08\") " pod="openstack/nova-cell0-conductor-db-sync-ptxgz" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.403926 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/189cef8d-65b6-4c2d-a7db-313fa6399a08-scripts\") pod \"nova-cell0-conductor-db-sync-ptxgz\" (UID: \"189cef8d-65b6-4c2d-a7db-313fa6399a08\") " pod="openstack/nova-cell0-conductor-db-sync-ptxgz" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.407347 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/189cef8d-65b6-4c2d-a7db-313fa6399a08-config-data\") pod \"nova-cell0-conductor-db-sync-ptxgz\" (UID: \"189cef8d-65b6-4c2d-a7db-313fa6399a08\") " pod="openstack/nova-cell0-conductor-db-sync-ptxgz" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.427766 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7kmh\" (UniqueName: \"kubernetes.io/projected/189cef8d-65b6-4c2d-a7db-313fa6399a08-kube-api-access-s7kmh\") pod \"nova-cell0-conductor-db-sync-ptxgz\" (UID: \"189cef8d-65b6-4c2d-a7db-313fa6399a08\") " pod="openstack/nova-cell0-conductor-db-sync-ptxgz" Nov 21 11:16:00 crc kubenswrapper[4972]: I1121 11:16:00.528374 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ptxgz" Nov 21 11:16:01 crc kubenswrapper[4972]: I1121 11:16:01.003364 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ptxgz"] Nov 21 11:16:01 crc kubenswrapper[4972]: W1121 11:16:01.006434 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod189cef8d_65b6_4c2d_a7db_313fa6399a08.slice/crio-ae700b198c2063faa3817707b002bda6cede978816883d70592690b2eb43533c WatchSource:0}: Error finding container ae700b198c2063faa3817707b002bda6cede978816883d70592690b2eb43533c: Status 404 returned error can't find the container with id ae700b198c2063faa3817707b002bda6cede978816883d70592690b2eb43533c Nov 21 11:16:01 crc kubenswrapper[4972]: I1121 11:16:01.444741 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ptxgz" event={"ID":"189cef8d-65b6-4c2d-a7db-313fa6399a08","Type":"ContainerStarted","Data":"9ab750ed9ef66c0fc52d6551c1c182aeb67ed253a65038d638b4ab811a858128"} Nov 21 11:16:01 crc kubenswrapper[4972]: I1121 11:16:01.444787 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ptxgz" event={"ID":"189cef8d-65b6-4c2d-a7db-313fa6399a08","Type":"ContainerStarted","Data":"ae700b198c2063faa3817707b002bda6cede978816883d70592690b2eb43533c"} Nov 21 11:16:01 crc kubenswrapper[4972]: I1121 11:16:01.467321 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-ptxgz" podStartSLOduration=1.467303025 podStartE2EDuration="1.467303025s" podCreationTimestamp="2025-11-21 11:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:16:01.464676265 +0000 UTC m=+5706.573818763" watchObservedRunningTime="2025-11-21 11:16:01.467303025 +0000 UTC m=+5706.576445523" Nov 21 11:16:06 crc kubenswrapper[4972]: I1121 11:16:06.543809 4972 generic.go:334] "Generic (PLEG): container finished" podID="189cef8d-65b6-4c2d-a7db-313fa6399a08" containerID="9ab750ed9ef66c0fc52d6551c1c182aeb67ed253a65038d638b4ab811a858128" exitCode=0 Nov 21 11:16:06 crc kubenswrapper[4972]: I1121 11:16:06.543906 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ptxgz" event={"ID":"189cef8d-65b6-4c2d-a7db-313fa6399a08","Type":"ContainerDied","Data":"9ab750ed9ef66c0fc52d6551c1c182aeb67ed253a65038d638b4ab811a858128"} Nov 21 11:16:07 crc kubenswrapper[4972]: I1121 11:16:07.920034 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ptxgz" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.049706 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7kmh\" (UniqueName: \"kubernetes.io/projected/189cef8d-65b6-4c2d-a7db-313fa6399a08-kube-api-access-s7kmh\") pod \"189cef8d-65b6-4c2d-a7db-313fa6399a08\" (UID: \"189cef8d-65b6-4c2d-a7db-313fa6399a08\") " Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.049872 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189cef8d-65b6-4c2d-a7db-313fa6399a08-combined-ca-bundle\") pod \"189cef8d-65b6-4c2d-a7db-313fa6399a08\" (UID: \"189cef8d-65b6-4c2d-a7db-313fa6399a08\") " Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.049930 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/189cef8d-65b6-4c2d-a7db-313fa6399a08-config-data\") pod \"189cef8d-65b6-4c2d-a7db-313fa6399a08\" (UID: \"189cef8d-65b6-4c2d-a7db-313fa6399a08\") " Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.049988 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/189cef8d-65b6-4c2d-a7db-313fa6399a08-scripts\") pod \"189cef8d-65b6-4c2d-a7db-313fa6399a08\" (UID: \"189cef8d-65b6-4c2d-a7db-313fa6399a08\") " Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.056016 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/189cef8d-65b6-4c2d-a7db-313fa6399a08-scripts" (OuterVolumeSpecName: "scripts") pod "189cef8d-65b6-4c2d-a7db-313fa6399a08" (UID: "189cef8d-65b6-4c2d-a7db-313fa6399a08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.056036 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/189cef8d-65b6-4c2d-a7db-313fa6399a08-kube-api-access-s7kmh" (OuterVolumeSpecName: "kube-api-access-s7kmh") pod "189cef8d-65b6-4c2d-a7db-313fa6399a08" (UID: "189cef8d-65b6-4c2d-a7db-313fa6399a08"). InnerVolumeSpecName "kube-api-access-s7kmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.079390 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/189cef8d-65b6-4c2d-a7db-313fa6399a08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "189cef8d-65b6-4c2d-a7db-313fa6399a08" (UID: "189cef8d-65b6-4c2d-a7db-313fa6399a08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.100656 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/189cef8d-65b6-4c2d-a7db-313fa6399a08-config-data" (OuterVolumeSpecName: "config-data") pod "189cef8d-65b6-4c2d-a7db-313fa6399a08" (UID: "189cef8d-65b6-4c2d-a7db-313fa6399a08"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.152638 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189cef8d-65b6-4c2d-a7db-313fa6399a08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.152674 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/189cef8d-65b6-4c2d-a7db-313fa6399a08-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.152686 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/189cef8d-65b6-4c2d-a7db-313fa6399a08-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.152702 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7kmh\" (UniqueName: \"kubernetes.io/projected/189cef8d-65b6-4c2d-a7db-313fa6399a08-kube-api-access-s7kmh\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.593241 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ptxgz" event={"ID":"189cef8d-65b6-4c2d-a7db-313fa6399a08","Type":"ContainerDied","Data":"ae700b198c2063faa3817707b002bda6cede978816883d70592690b2eb43533c"} Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.593301 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae700b198c2063faa3817707b002bda6cede978816883d70592690b2eb43533c" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.593397 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ptxgz" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.689034 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 11:16:08 crc kubenswrapper[4972]: E1121 11:16:08.689818 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="189cef8d-65b6-4c2d-a7db-313fa6399a08" containerName="nova-cell0-conductor-db-sync" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.689850 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="189cef8d-65b6-4c2d-a7db-313fa6399a08" containerName="nova-cell0-conductor-db-sync" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.690762 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="189cef8d-65b6-4c2d-a7db-313fa6399a08" containerName="nova-cell0-conductor-db-sync" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.693659 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.696499 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fdcsr" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.698687 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.699875 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.882022 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e058b60f-7e51-450d-8330-1b96ad510032-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e058b60f-7e51-450d-8330-1b96ad510032\") " pod="openstack/nova-cell0-conductor-0" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.882401 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6jbc\" (UniqueName: \"kubernetes.io/projected/e058b60f-7e51-450d-8330-1b96ad510032-kube-api-access-z6jbc\") pod \"nova-cell0-conductor-0\" (UID: \"e058b60f-7e51-450d-8330-1b96ad510032\") " pod="openstack/nova-cell0-conductor-0" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.882702 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e058b60f-7e51-450d-8330-1b96ad510032-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e058b60f-7e51-450d-8330-1b96ad510032\") " pod="openstack/nova-cell0-conductor-0" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.984751 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e058b60f-7e51-450d-8330-1b96ad510032-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e058b60f-7e51-450d-8330-1b96ad510032\") " pod="openstack/nova-cell0-conductor-0" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.984898 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e058b60f-7e51-450d-8330-1b96ad510032-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e058b60f-7e51-450d-8330-1b96ad510032\") " pod="openstack/nova-cell0-conductor-0" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.984933 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6jbc\" (UniqueName: \"kubernetes.io/projected/e058b60f-7e51-450d-8330-1b96ad510032-kube-api-access-z6jbc\") pod \"nova-cell0-conductor-0\" (UID: \"e058b60f-7e51-450d-8330-1b96ad510032\") " pod="openstack/nova-cell0-conductor-0" Nov 21 11:16:08 crc kubenswrapper[4972]: I1121 11:16:08.991058 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e058b60f-7e51-450d-8330-1b96ad510032-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e058b60f-7e51-450d-8330-1b96ad510032\") " pod="openstack/nova-cell0-conductor-0" Nov 21 11:16:09 crc kubenswrapper[4972]: I1121 11:16:09.003099 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e058b60f-7e51-450d-8330-1b96ad510032-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"e058b60f-7e51-450d-8330-1b96ad510032\") " pod="openstack/nova-cell0-conductor-0" Nov 21 11:16:09 crc kubenswrapper[4972]: I1121 11:16:09.011747 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6jbc\" (UniqueName: \"kubernetes.io/projected/e058b60f-7e51-450d-8330-1b96ad510032-kube-api-access-z6jbc\") pod \"nova-cell0-conductor-0\" (UID: \"e058b60f-7e51-450d-8330-1b96ad510032\") " pod="openstack/nova-cell0-conductor-0" Nov 21 11:16:09 crc kubenswrapper[4972]: I1121 11:16:09.015735 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 11:16:09 crc kubenswrapper[4972]: I1121 11:16:09.485344 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 11:16:09 crc kubenswrapper[4972]: I1121 11:16:09.604951 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e058b60f-7e51-450d-8330-1b96ad510032","Type":"ContainerStarted","Data":"362ed96badcee102adc5f769133407067b3c6b1422128c642b2d60565dca40ab"} Nov 21 11:16:10 crc kubenswrapper[4972]: I1121 11:16:10.618804 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e058b60f-7e51-450d-8330-1b96ad510032","Type":"ContainerStarted","Data":"195c482ddda51e036d2e9a7020c2dce8e00305fac3a9367eaf67b86bb7769db2"} Nov 21 11:16:10 crc kubenswrapper[4972]: I1121 11:16:10.620651 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 21 11:16:10 crc kubenswrapper[4972]: I1121 11:16:10.759948 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:16:10 crc kubenswrapper[4972]: E1121 11:16:10.760352 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.061655 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.088231 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=6.088207374 podStartE2EDuration="6.088207374s" podCreationTimestamp="2025-11-21 11:16:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:16:10.649265009 +0000 UTC m=+5715.758407527" watchObservedRunningTime="2025-11-21 11:16:14.088207374 +0000 UTC m=+5719.197349872" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.625645 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-fmtr6"] Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.627620 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fmtr6" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.630582 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.630615 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.634633 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-fmtr6"] Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.708629 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adefeeef-e030-49b7-ade0-f4b728b3de7a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-fmtr6\" (UID: \"adefeeef-e030-49b7-ade0-f4b728b3de7a\") " pod="openstack/nova-cell0-cell-mapping-fmtr6" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.708936 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adefeeef-e030-49b7-ade0-f4b728b3de7a-scripts\") pod \"nova-cell0-cell-mapping-fmtr6\" (UID: \"adefeeef-e030-49b7-ade0-f4b728b3de7a\") " pod="openstack/nova-cell0-cell-mapping-fmtr6" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.710784 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2jd6\" (UniqueName: \"kubernetes.io/projected/adefeeef-e030-49b7-ade0-f4b728b3de7a-kube-api-access-z2jd6\") pod \"nova-cell0-cell-mapping-fmtr6\" (UID: \"adefeeef-e030-49b7-ade0-f4b728b3de7a\") " pod="openstack/nova-cell0-cell-mapping-fmtr6" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.710829 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adefeeef-e030-49b7-ade0-f4b728b3de7a-config-data\") pod \"nova-cell0-cell-mapping-fmtr6\" (UID: \"adefeeef-e030-49b7-ade0-f4b728b3de7a\") " pod="openstack/nova-cell0-cell-mapping-fmtr6" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.743591 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.745429 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.757534 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.759630 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.799050 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.800103 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.802922 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.811222 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.812012 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vrlk\" (UniqueName: \"kubernetes.io/projected/8510db5e-9884-41b1-a7d1-575592454efd-kube-api-access-8vrlk\") pod \"nova-cell1-novncproxy-0\" (UID: \"8510db5e-9884-41b1-a7d1-575592454efd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.812055 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54497d1e-ba95-42fc-9886-f8ab39a146dd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"54497d1e-ba95-42fc-9886-f8ab39a146dd\") " pod="openstack/nova-api-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.812117 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54497d1e-ba95-42fc-9886-f8ab39a146dd-config-data\") pod \"nova-api-0\" (UID: \"54497d1e-ba95-42fc-9886-f8ab39a146dd\") " pod="openstack/nova-api-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.812140 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njl5t\" (UniqueName: \"kubernetes.io/projected/54497d1e-ba95-42fc-9886-f8ab39a146dd-kube-api-access-njl5t\") pod \"nova-api-0\" (UID: \"54497d1e-ba95-42fc-9886-f8ab39a146dd\") " pod="openstack/nova-api-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.812181 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8510db5e-9884-41b1-a7d1-575592454efd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8510db5e-9884-41b1-a7d1-575592454efd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.812221 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8510db5e-9884-41b1-a7d1-575592454efd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8510db5e-9884-41b1-a7d1-575592454efd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.812269 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54497d1e-ba95-42fc-9886-f8ab39a146dd-logs\") pod \"nova-api-0\" (UID: \"54497d1e-ba95-42fc-9886-f8ab39a146dd\") " pod="openstack/nova-api-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.812304 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adefeeef-e030-49b7-ade0-f4b728b3de7a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-fmtr6\" (UID: \"adefeeef-e030-49b7-ade0-f4b728b3de7a\") " pod="openstack/nova-cell0-cell-mapping-fmtr6" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.812340 4972 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adefeeef-e030-49b7-ade0-f4b728b3de7a-scripts\") pod \"nova-cell0-cell-mapping-fmtr6\" (UID: \"adefeeef-e030-49b7-ade0-f4b728b3de7a\") " pod="openstack/nova-cell0-cell-mapping-fmtr6" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.812371 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2jd6\" (UniqueName: \"kubernetes.io/projected/adefeeef-e030-49b7-ade0-f4b728b3de7a-kube-api-access-z2jd6\") pod \"nova-cell0-cell-mapping-fmtr6\" (UID: \"adefeeef-e030-49b7-ade0-f4b728b3de7a\") " pod="openstack/nova-cell0-cell-mapping-fmtr6" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.812392 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adefeeef-e030-49b7-ade0-f4b728b3de7a-config-data\") pod \"nova-cell0-cell-mapping-fmtr6\" (UID: \"adefeeef-e030-49b7-ade0-f4b728b3de7a\") " pod="openstack/nova-cell0-cell-mapping-fmtr6" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.819740 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adefeeef-e030-49b7-ade0-f4b728b3de7a-config-data\") pod \"nova-cell0-cell-mapping-fmtr6\" (UID: \"adefeeef-e030-49b7-ade0-f4b728b3de7a\") " pod="openstack/nova-cell0-cell-mapping-fmtr6" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.826217 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adefeeef-e030-49b7-ade0-f4b728b3de7a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-fmtr6\" (UID: \"adefeeef-e030-49b7-ade0-f4b728b3de7a\") " pod="openstack/nova-cell0-cell-mapping-fmtr6" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.826526 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adefeeef-e030-49b7-ade0-f4b728b3de7a-scripts\") pod \"nova-cell0-cell-mapping-fmtr6\" (UID: \"adefeeef-e030-49b7-ade0-f4b728b3de7a\") " pod="openstack/nova-cell0-cell-mapping-fmtr6" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.848955 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.850868 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.854344 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2jd6\" (UniqueName: \"kubernetes.io/projected/adefeeef-e030-49b7-ade0-f4b728b3de7a-kube-api-access-z2jd6\") pod \"nova-cell0-cell-mapping-fmtr6\" (UID: \"adefeeef-e030-49b7-ade0-f4b728b3de7a\") " pod="openstack/nova-cell0-cell-mapping-fmtr6" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.854532 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.857211 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.913381 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vrlk\" (UniqueName: \"kubernetes.io/projected/8510db5e-9884-41b1-a7d1-575592454efd-kube-api-access-8vrlk\") pod \"nova-cell1-novncproxy-0\" (UID: \"8510db5e-9884-41b1-a7d1-575592454efd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.913414 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54497d1e-ba95-42fc-9886-f8ab39a146dd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"54497d1e-ba95-42fc-9886-f8ab39a146dd\") " pod="openstack/nova-api-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.913452 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54497d1e-ba95-42fc-9886-f8ab39a146dd-config-data\") pod \"nova-api-0\" (UID: \"54497d1e-ba95-42fc-9886-f8ab39a146dd\") " pod="openstack/nova-api-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.913473 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08f00219-11c0-4104-9050-91013673a2fe-logs\") pod \"nova-metadata-0\" (UID: \"08f00219-11c0-4104-9050-91013673a2fe\") " pod="openstack/nova-metadata-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.913496 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njl5t\" (UniqueName: \"kubernetes.io/projected/54497d1e-ba95-42fc-9886-f8ab39a146dd-kube-api-access-njl5t\") pod \"nova-api-0\" (UID: \"54497d1e-ba95-42fc-9886-f8ab39a146dd\") " pod="openstack/nova-api-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.913518 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8510db5e-9884-41b1-a7d1-575592454efd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8510db5e-9884-41b1-a7d1-575592454efd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.913536 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f00219-11c0-4104-9050-91013673a2fe-config-data\") pod \"nova-metadata-0\" (UID: \"08f00219-11c0-4104-9050-91013673a2fe\") " pod="openstack/nova-metadata-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.913569 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8510db5e-9884-41b1-a7d1-575592454efd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8510db5e-9884-41b1-a7d1-575592454efd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.913587 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8nw6\" (UniqueName: \"kubernetes.io/projected/08f00219-11c0-4104-9050-91013673a2fe-kube-api-access-j8nw6\") pod \"nova-metadata-0\" (UID: \"08f00219-11c0-4104-9050-91013673a2fe\") " pod="openstack/nova-metadata-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.913621 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54497d1e-ba95-42fc-9886-f8ab39a146dd-logs\") pod \"nova-api-0\" (UID: \"54497d1e-ba95-42fc-9886-f8ab39a146dd\") " pod="openstack/nova-api-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.913828 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f00219-11c0-4104-9050-91013673a2fe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"08f00219-11c0-4104-9050-91013673a2fe\") " pod="openstack/nova-metadata-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.914057 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.915156 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54497d1e-ba95-42fc-9886-f8ab39a146dd-logs\") pod \"nova-api-0\" (UID: \"54497d1e-ba95-42fc-9886-f8ab39a146dd\") " pod="openstack/nova-api-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.915249 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.918611 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8510db5e-9884-41b1-a7d1-575592454efd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8510db5e-9884-41b1-a7d1-575592454efd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.919304 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.922210 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54497d1e-ba95-42fc-9886-f8ab39a146dd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"54497d1e-ba95-42fc-9886-f8ab39a146dd\") " pod="openstack/nova-api-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.923357 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54497d1e-ba95-42fc-9886-f8ab39a146dd-config-data\") pod \"nova-api-0\" (UID: \"54497d1e-ba95-42fc-9886-f8ab39a146dd\") " pod="openstack/nova-api-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.937808 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.940623 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8510db5e-9884-41b1-a7d1-575592454efd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8510db5e-9884-41b1-a7d1-575592454efd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.946388 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njl5t\" (UniqueName: \"kubernetes.io/projected/54497d1e-ba95-42fc-9886-f8ab39a146dd-kube-api-access-njl5t\") pod \"nova-api-0\" (UID: \"54497d1e-ba95-42fc-9886-f8ab39a146dd\") " pod="openstack/nova-api-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.957369 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vrlk\" (UniqueName: \"kubernetes.io/projected/8510db5e-9884-41b1-a7d1-575592454efd-kube-api-access-8vrlk\") pod \"nova-cell1-novncproxy-0\" (UID: \"8510db5e-9884-41b1-a7d1-575592454efd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.968867 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fmtr6" Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.978197 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6df6f8fcbc-52ls7"] Nov 21 11:16:14 crc kubenswrapper[4972]: I1121 11:16:14.980105 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.026179 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08f00219-11c0-4104-9050-91013673a2fe-logs\") pod \"nova-metadata-0\" (UID: \"08f00219-11c0-4104-9050-91013673a2fe\") " pod="openstack/nova-metadata-0" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.026310 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f00219-11c0-4104-9050-91013673a2fe-config-data\") pod \"nova-metadata-0\" (UID: \"08f00219-11c0-4104-9050-91013673a2fe\") " pod="openstack/nova-metadata-0" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.026415 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8nw6\" (UniqueName: \"kubernetes.io/projected/08f00219-11c0-4104-9050-91013673a2fe-kube-api-access-j8nw6\") pod \"nova-metadata-0\" (UID: \"08f00219-11c0-4104-9050-91013673a2fe\") " pod="openstack/nova-metadata-0" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.026617 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f00219-11c0-4104-9050-91013673a2fe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"08f00219-11c0-4104-9050-91013673a2fe\") " pod="openstack/nova-metadata-0" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.029311 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08f00219-11c0-4104-9050-91013673a2fe-logs\") pod \"nova-metadata-0\" (UID: \"08f00219-11c0-4104-9050-91013673a2fe\") " pod="openstack/nova-metadata-0" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.031881 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f00219-11c0-4104-9050-91013673a2fe-config-data\") pod \"nova-metadata-0\" (UID: \"08f00219-11c0-4104-9050-91013673a2fe\") " pod="openstack/nova-metadata-0" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.034824 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6df6f8fcbc-52ls7"] Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.049811 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f00219-11c0-4104-9050-91013673a2fe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"08f00219-11c0-4104-9050-91013673a2fe\") " pod="openstack/nova-metadata-0" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.050152 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8nw6\" (UniqueName: \"kubernetes.io/projected/08f00219-11c0-4104-9050-91013673a2fe-kube-api-access-j8nw6\") pod \"nova-metadata-0\" (UID: \"08f00219-11c0-4104-9050-91013673a2fe\") " pod="openstack/nova-metadata-0" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.088143 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.134144 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6030d365-75c4-442a-93fa-4539c43df118-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6030d365-75c4-442a-93fa-4539c43df118\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.134198 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-ovsdbserver-nb\") pod \"dnsmasq-dns-6df6f8fcbc-52ls7\" (UID: \"41ee93c6-845e-4aac-8cb3-16a222d124b1\") " pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.134230 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6030d365-75c4-442a-93fa-4539c43df118-config-data\") pod \"nova-scheduler-0\" (UID: \"6030d365-75c4-442a-93fa-4539c43df118\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.134851 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-ovsdbserver-sb\") pod \"dnsmasq-dns-6df6f8fcbc-52ls7\" (UID: \"41ee93c6-845e-4aac-8cb3-16a222d124b1\") " pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.135043 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnfpp\" (UniqueName: \"kubernetes.io/projected/41ee93c6-845e-4aac-8cb3-16a222d124b1-kube-api-access-fnfpp\") pod \"dnsmasq-dns-6df6f8fcbc-52ls7\" (UID: \"41ee93c6-845e-4aac-8cb3-16a222d124b1\") " pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.135102 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-config\") pod \"dnsmasq-dns-6df6f8fcbc-52ls7\" (UID: \"41ee93c6-845e-4aac-8cb3-16a222d124b1\") " pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.135139 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lj8f\" (UniqueName: \"kubernetes.io/projected/6030d365-75c4-442a-93fa-4539c43df118-kube-api-access-6lj8f\") pod \"nova-scheduler-0\" (UID: \"6030d365-75c4-442a-93fa-4539c43df118\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.135240 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-dns-svc\") pod \"dnsmasq-dns-6df6f8fcbc-52ls7\" (UID: \"41ee93c6-845e-4aac-8cb3-16a222d124b1\") " pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.204851 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.216068 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.237665 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-dns-svc\") pod \"dnsmasq-dns-6df6f8fcbc-52ls7\" (UID: \"41ee93c6-845e-4aac-8cb3-16a222d124b1\") " pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.237744 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6030d365-75c4-442a-93fa-4539c43df118-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6030d365-75c4-442a-93fa-4539c43df118\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.237761 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-ovsdbserver-nb\") pod \"dnsmasq-dns-6df6f8fcbc-52ls7\" (UID: \"41ee93c6-845e-4aac-8cb3-16a222d124b1\") " pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.237779 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6030d365-75c4-442a-93fa-4539c43df118-config-data\") pod \"nova-scheduler-0\" (UID: \"6030d365-75c4-442a-93fa-4539c43df118\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.237816 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-ovsdbserver-sb\") pod \"dnsmasq-dns-6df6f8fcbc-52ls7\" (UID: \"41ee93c6-845e-4aac-8cb3-16a222d124b1\") " pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.237882 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnfpp\" (UniqueName: \"kubernetes.io/projected/41ee93c6-845e-4aac-8cb3-16a222d124b1-kube-api-access-fnfpp\") pod \"dnsmasq-dns-6df6f8fcbc-52ls7\" (UID: \"41ee93c6-845e-4aac-8cb3-16a222d124b1\") " pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.237907 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-config\") pod \"dnsmasq-dns-6df6f8fcbc-52ls7\" (UID: \"41ee93c6-845e-4aac-8cb3-16a222d124b1\") " pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.237938 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lj8f\" (UniqueName: \"kubernetes.io/projected/6030d365-75c4-442a-93fa-4539c43df118-kube-api-access-6lj8f\") pod \"nova-scheduler-0\" (UID: \"6030d365-75c4-442a-93fa-4539c43df118\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.238686 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-dns-svc\") pod \"dnsmasq-dns-6df6f8fcbc-52ls7\" (UID: \"41ee93c6-845e-4aac-8cb3-16a222d124b1\") " pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.239108 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-ovsdbserver-sb\") pod \"dnsmasq-dns-6df6f8fcbc-52ls7\" (UID: \"41ee93c6-845e-4aac-8cb3-16a222d124b1\") " pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.239671 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-config\") pod \"dnsmasq-dns-6df6f8fcbc-52ls7\" (UID: \"41ee93c6-845e-4aac-8cb3-16a222d124b1\") " pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.240374 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-ovsdbserver-nb\") pod \"dnsmasq-dns-6df6f8fcbc-52ls7\" (UID: \"41ee93c6-845e-4aac-8cb3-16a222d124b1\") " pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.245236 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6030d365-75c4-442a-93fa-4539c43df118-config-data\") pod \"nova-scheduler-0\" (UID: \"6030d365-75c4-442a-93fa-4539c43df118\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.245364 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6030d365-75c4-442a-93fa-4539c43df118-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6030d365-75c4-442a-93fa-4539c43df118\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.261320 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lj8f\" (UniqueName: \"kubernetes.io/projected/6030d365-75c4-442a-93fa-4539c43df118-kube-api-access-6lj8f\") pod \"nova-scheduler-0\" (UID: \"6030d365-75c4-442a-93fa-4539c43df118\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.267350 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnfpp\" (UniqueName: \"kubernetes.io/projected/41ee93c6-845e-4aac-8cb3-16a222d124b1-kube-api-access-fnfpp\") pod \"dnsmasq-dns-6df6f8fcbc-52ls7\" (UID: \"41ee93c6-845e-4aac-8cb3-16a222d124b1\") " pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.377076 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.388488 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.505508 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-fmtr6"] Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.547262 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t8fss"] Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.548382 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t8fss" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.550940 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.559410 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.581192 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t8fss"] Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.648180 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca3ab65-ceb2-4d2f-8310-31573a28f17b-config-data\") pod \"nova-cell1-conductor-db-sync-t8fss\" (UID: \"aca3ab65-ceb2-4d2f-8310-31573a28f17b\") " pod="openstack/nova-cell1-conductor-db-sync-t8fss" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.648587 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2kwk\" (UniqueName: \"kubernetes.io/projected/aca3ab65-ceb2-4d2f-8310-31573a28f17b-kube-api-access-s2kwk\") pod \"nova-cell1-conductor-db-sync-t8fss\" (UID: \"aca3ab65-ceb2-4d2f-8310-31573a28f17b\") " pod="openstack/nova-cell1-conductor-db-sync-t8fss" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.648775 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca3ab65-ceb2-4d2f-8310-31573a28f17b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t8fss\" (UID: \"aca3ab65-ceb2-4d2f-8310-31573a28f17b\") " pod="openstack/nova-cell1-conductor-db-sync-t8fss" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.648901 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca3ab65-ceb2-4d2f-8310-31573a28f17b-scripts\") pod \"nova-cell1-conductor-db-sync-t8fss\" (UID: \"aca3ab65-ceb2-4d2f-8310-31573a28f17b\") " pod="openstack/nova-cell1-conductor-db-sync-t8fss" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.695727 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fmtr6" event={"ID":"adefeeef-e030-49b7-ade0-f4b728b3de7a","Type":"ContainerStarted","Data":"b4a13dc03ce21060946d532cb85ec6844b6f35c71c444d0c8515c344392e7d05"} Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.707599 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 11:16:15 crc kubenswrapper[4972]: W1121 11:16:15.737872 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54497d1e_ba95_42fc_9886_f8ab39a146dd.slice/crio-908489e41feab51de13fc8b2bf1f0769e7de5922b854f78a3f6f72da191e9487 WatchSource:0}: Error finding container 908489e41feab51de13fc8b2bf1f0769e7de5922b854f78a3f6f72da191e9487: Status 404 returned error can't find the container with id 908489e41feab51de13fc8b2bf1f0769e7de5922b854f78a3f6f72da191e9487 Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.749911 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca3ab65-ceb2-4d2f-8310-31573a28f17b-scripts\") pod \"nova-cell1-conductor-db-sync-t8fss\" 
(UID: \"aca3ab65-ceb2-4d2f-8310-31573a28f17b\") " pod="openstack/nova-cell1-conductor-db-sync-t8fss" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.749988 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca3ab65-ceb2-4d2f-8310-31573a28f17b-config-data\") pod \"nova-cell1-conductor-db-sync-t8fss\" (UID: \"aca3ab65-ceb2-4d2f-8310-31573a28f17b\") " pod="openstack/nova-cell1-conductor-db-sync-t8fss" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.750026 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kwk\" (UniqueName: \"kubernetes.io/projected/aca3ab65-ceb2-4d2f-8310-31573a28f17b-kube-api-access-s2kwk\") pod \"nova-cell1-conductor-db-sync-t8fss\" (UID: \"aca3ab65-ceb2-4d2f-8310-31573a28f17b\") " pod="openstack/nova-cell1-conductor-db-sync-t8fss" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.750103 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca3ab65-ceb2-4d2f-8310-31573a28f17b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t8fss\" (UID: \"aca3ab65-ceb2-4d2f-8310-31573a28f17b\") " pod="openstack/nova-cell1-conductor-db-sync-t8fss" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.754885 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca3ab65-ceb2-4d2f-8310-31573a28f17b-scripts\") pod \"nova-cell1-conductor-db-sync-t8fss\" (UID: \"aca3ab65-ceb2-4d2f-8310-31573a28f17b\") " pod="openstack/nova-cell1-conductor-db-sync-t8fss" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.755458 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca3ab65-ceb2-4d2f-8310-31573a28f17b-config-data\") pod \"nova-cell1-conductor-db-sync-t8fss\" (UID: \"aca3ab65-ceb2-4d2f-8310-31573a28f17b\") " pod="openstack/nova-cell1-conductor-db-sync-t8fss" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.767481 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca3ab65-ceb2-4d2f-8310-31573a28f17b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t8fss\" (UID: \"aca3ab65-ceb2-4d2f-8310-31573a28f17b\") " pod="openstack/nova-cell1-conductor-db-sync-t8fss" Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.771563 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kwk\" (UniqueName: \"kubernetes.io/projected/aca3ab65-ceb2-4d2f-8310-31573a28f17b-kube-api-access-s2kwk\") pod \"nova-cell1-conductor-db-sync-t8fss\" (UID: \"aca3ab65-ceb2-4d2f-8310-31573a28f17b\") " pod="openstack/nova-cell1-conductor-db-sync-t8fss" Nov 21 11:16:15 crc kubenswrapper[4972]: W1121 11:16:15.786555 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8510db5e_9884_41b1_a7d1_575592454efd.slice/crio-b988035671b0e6e1b574b36c3f215ff34fbfe85f3da6574f5471e7ab447e9e86 WatchSource:0}: Error finding container b988035671b0e6e1b574b36c3f215ff34fbfe85f3da6574f5471e7ab447e9e86: Status 404 returned error can't find the container with id b988035671b0e6e1b574b36c3f215ff34fbfe85f3da6574f5471e7ab447e9e86 Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.790291 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.877442 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 11:16:15 crc kubenswrapper[4972]: I1121 11:16:15.902822 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t8fss" Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.042858 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6df6f8fcbc-52ls7"] Nov 21 11:16:16 crc kubenswrapper[4972]: W1121 11:16:16.060079 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41ee93c6_845e_4aac_8cb3_16a222d124b1.slice/crio-b3d3ab68caeea3f2902a89d367165df9fd039399ddec98197230200806b7cc5c WatchSource:0}: Error finding container b3d3ab68caeea3f2902a89d367165df9fd039399ddec98197230200806b7cc5c: Status 404 returned error can't find the container with id b3d3ab68caeea3f2902a89d367165df9fd039399ddec98197230200806b7cc5c Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.066542 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.322936 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t8fss"] Nov 21 11:16:16 crc kubenswrapper[4972]: W1121 11:16:16.349516 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaca3ab65_ceb2_4d2f_8310_31573a28f17b.slice/crio-a07c1553825198cca7bd83cf1c91a5a8bd3c64d4e0248529c55f8a306c860bf1 WatchSource:0}: Error finding container a07c1553825198cca7bd83cf1c91a5a8bd3c64d4e0248529c55f8a306c860bf1: Status 404 returned error can't find the container with id a07c1553825198cca7bd83cf1c91a5a8bd3c64d4e0248529c55f8a306c860bf1 Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.709787 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6030d365-75c4-442a-93fa-4539c43df118","Type":"ContainerStarted","Data":"660644f399acaafd0ec6a4b9da8b234d477b7f66505d1e20b639ff2feefe850b"} Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.710190 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6030d365-75c4-442a-93fa-4539c43df118","Type":"ContainerStarted","Data":"01f247f6b6c1ca6fe890ae35b8042fc406961a658e5027a9c457235c5cbae322"} Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.714640 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t8fss" event={"ID":"aca3ab65-ceb2-4d2f-8310-31573a28f17b","Type":"ContainerStarted","Data":"23656fea8eda7e69489448748bd2fc8239b5c9530fa9ed22c4dd977fefd0b3d9"} Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.715256 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t8fss" event={"ID":"aca3ab65-ceb2-4d2f-8310-31573a28f17b","Type":"ContainerStarted","Data":"a07c1553825198cca7bd83cf1c91a5a8bd3c64d4e0248529c55f8a306c860bf1"} Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.716790 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8510db5e-9884-41b1-a7d1-575592454efd","Type":"ContainerStarted","Data":"9c6422188b460b3383a8cd4c611c756438b9739cf97d921f5bc30ecec64029ec"} Nov 21 11:16:16 crc kubenswrapper[4972]: 
I1121 11:16:16.716851 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8510db5e-9884-41b1-a7d1-575592454efd","Type":"ContainerStarted","Data":"b988035671b0e6e1b574b36c3f215ff34fbfe85f3da6574f5471e7ab447e9e86"} Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.718609 4972 generic.go:334] "Generic (PLEG): container finished" podID="41ee93c6-845e-4aac-8cb3-16a222d124b1" containerID="75c42bd939b767b3ed2d045f4188d3ceba3a4884a2eccba57dd3cc91b440e8db" exitCode=0 Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.718905 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" event={"ID":"41ee93c6-845e-4aac-8cb3-16a222d124b1","Type":"ContainerDied","Data":"75c42bd939b767b3ed2d045f4188d3ceba3a4884a2eccba57dd3cc91b440e8db"} Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.718946 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" event={"ID":"41ee93c6-845e-4aac-8cb3-16a222d124b1","Type":"ContainerStarted","Data":"b3d3ab68caeea3f2902a89d367165df9fd039399ddec98197230200806b7cc5c"} Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.720991 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54497d1e-ba95-42fc-9886-f8ab39a146dd","Type":"ContainerStarted","Data":"1c115292aac7efce85ebae1a524ddc439c27b836a927e20791af661542d79d6d"} Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.721028 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54497d1e-ba95-42fc-9886-f8ab39a146dd","Type":"ContainerStarted","Data":"1fcdedeec7da5af76c766dcd1df813fea3761d44398455c576dd5220a38d4139"} Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.721045 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54497d1e-ba95-42fc-9886-f8ab39a146dd","Type":"ContainerStarted","Data":"908489e41feab51de13fc8b2bf1f0769e7de5922b854f78a3f6f72da191e9487"} Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.732341 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"08f00219-11c0-4104-9050-91013673a2fe","Type":"ContainerStarted","Data":"7edd489eac45d11f84d6b9ebd4e9eacf0440e3fbf9410791a1ab9eec66fef6f8"} Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.732386 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"08f00219-11c0-4104-9050-91013673a2fe","Type":"ContainerStarted","Data":"959bd044b144c3afbaedde8cd47cbfe09b9ee16b215bcdb6bc6d4623e36e29a9"} Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.732396 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"08f00219-11c0-4104-9050-91013673a2fe","Type":"ContainerStarted","Data":"97fce1efbef835b6c31464a5d9595b984aeb361fd6b58d60aa99c768f818f4d7"} Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.743906 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fmtr6" event={"ID":"adefeeef-e030-49b7-ade0-f4b728b3de7a","Type":"ContainerStarted","Data":"fa5388b849b0af756eb934797c853c4869053c4f9985bccbe633da234e108278"} Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.752777 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.752726162 podStartE2EDuration="2.752726162s" podCreationTimestamp="2025-11-21 11:16:14 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:16:16.741198257 +0000 UTC m=+5721.850340805" watchObservedRunningTime="2025-11-21 11:16:16.752726162 +0000 UTC m=+5721.861868680" Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.767081 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-t8fss" podStartSLOduration=1.767061232 podStartE2EDuration="1.767061232s" podCreationTimestamp="2025-11-21 11:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:16:16.756874532 +0000 UTC m=+5721.866017070" watchObservedRunningTime="2025-11-21 11:16:16.767061232 +0000 UTC m=+5721.876203740" Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.778873 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.7788503540000002 podStartE2EDuration="2.778850354s" podCreationTimestamp="2025-11-21 11:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:16:16.774348915 +0000 UTC m=+5721.883491423" watchObservedRunningTime="2025-11-21 11:16:16.778850354 +0000 UTC m=+5721.887992872" Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.808611 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.808587021 podStartE2EDuration="2.808587021s" podCreationTimestamp="2025-11-21 11:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:16:16.794998791 +0000 UTC m=+5721.904141299" watchObservedRunningTime="2025-11-21 11:16:16.808587021 +0000 UTC m=+5721.917729519" Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.824543 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.824515743 podStartE2EDuration="2.824515743s" podCreationTimestamp="2025-11-21 11:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:16:16.809769453 +0000 UTC m=+5721.918911961" watchObservedRunningTime="2025-11-21 11:16:16.824515743 +0000 UTC m=+5721.933658241" Nov 21 11:16:16 crc kubenswrapper[4972]: I1121 11:16:16.884313 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-fmtr6" podStartSLOduration=2.884222994 podStartE2EDuration="2.884222994s" podCreationTimestamp="2025-11-21 11:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:16:16.860515196 +0000 UTC m=+5721.969657774" watchObservedRunningTime="2025-11-21 11:16:16.884222994 +0000 UTC m=+5721.993365502" Nov 21 11:16:17 crc kubenswrapper[4972]: I1121 11:16:17.774411 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" event={"ID":"41ee93c6-845e-4aac-8cb3-16a222d124b1","Type":"ContainerStarted","Data":"e5915c025f1bc3101b6d42dc5a85251dde1e84c04b8d663800ac3819f4d3f57a"} Nov 21 11:16:17 crc kubenswrapper[4972]: I1121 11:16:17.801420 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" podStartSLOduration=3.801401582 podStartE2EDuration="3.801401582s" podCreationTimestamp="2025-11-21 11:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:16:17.799095401 +0000 UTC m=+5722.908237979" watchObservedRunningTime="2025-11-21 11:16:17.801401582 +0000 UTC m=+5722.910544080" Nov 21 11:16:18 crc kubenswrapper[4972]: I1121 11:16:18.778268 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" Nov 21 11:16:19 crc kubenswrapper[4972]: I1121 11:16:19.790965 4972 generic.go:334] "Generic (PLEG): container finished" podID="aca3ab65-ceb2-4d2f-8310-31573a28f17b" containerID="23656fea8eda7e69489448748bd2fc8239b5c9530fa9ed22c4dd977fefd0b3d9" exitCode=0 Nov 21 11:16:19 crc kubenswrapper[4972]: I1121 11:16:19.791052 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t8fss" event={"ID":"aca3ab65-ceb2-4d2f-8310-31573a28f17b","Type":"ContainerDied","Data":"23656fea8eda7e69489448748bd2fc8239b5c9530fa9ed22c4dd977fefd0b3d9"} Nov 21 11:16:20 crc kubenswrapper[4972]: I1121 11:16:20.205520 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:16:20 crc kubenswrapper[4972]: I1121 11:16:20.216767 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 11:16:20 crc kubenswrapper[4972]: I1121 11:16:20.216942 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 11:16:20 crc kubenswrapper[4972]: I1121 11:16:20.378531 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 21 11:16:20 crc kubenswrapper[4972]: I1121 11:16:20.803515 4972 generic.go:334] "Generic (PLEG): container finished" podID="adefeeef-e030-49b7-ade0-f4b728b3de7a" containerID="fa5388b849b0af756eb934797c853c4869053c4f9985bccbe633da234e108278" exitCode=0 Nov 21 11:16:20 crc kubenswrapper[4972]: I1121 11:16:20.803621 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fmtr6" event={"ID":"adefeeef-e030-49b7-ade0-f4b728b3de7a","Type":"ContainerDied","Data":"fa5388b849b0af756eb934797c853c4869053c4f9985bccbe633da234e108278"} Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.263762 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t8fss" Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.382360 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca3ab65-ceb2-4d2f-8310-31573a28f17b-config-data\") pod \"aca3ab65-ceb2-4d2f-8310-31573a28f17b\" (UID: \"aca3ab65-ceb2-4d2f-8310-31573a28f17b\") " Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.382464 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2kwk\" (UniqueName: \"kubernetes.io/projected/aca3ab65-ceb2-4d2f-8310-31573a28f17b-kube-api-access-s2kwk\") pod \"aca3ab65-ceb2-4d2f-8310-31573a28f17b\" (UID: \"aca3ab65-ceb2-4d2f-8310-31573a28f17b\") " Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.382538 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca3ab65-ceb2-4d2f-8310-31573a28f17b-combined-ca-bundle\") pod \"aca3ab65-ceb2-4d2f-8310-31573a28f17b\" (UID: \"aca3ab65-ceb2-4d2f-8310-31573a28f17b\") " Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.382624 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca3ab65-ceb2-4d2f-8310-31573a28f17b-scripts\") pod \"aca3ab65-ceb2-4d2f-8310-31573a28f17b\" (UID: \"aca3ab65-ceb2-4d2f-8310-31573a28f17b\") " Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.390819 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca3ab65-ceb2-4d2f-8310-31573a28f17b-kube-api-access-s2kwk" (OuterVolumeSpecName: "kube-api-access-s2kwk") pod "aca3ab65-ceb2-4d2f-8310-31573a28f17b" (UID: "aca3ab65-ceb2-4d2f-8310-31573a28f17b"). InnerVolumeSpecName "kube-api-access-s2kwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.392574 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca3ab65-ceb2-4d2f-8310-31573a28f17b-scripts" (OuterVolumeSpecName: "scripts") pod "aca3ab65-ceb2-4d2f-8310-31573a28f17b" (UID: "aca3ab65-ceb2-4d2f-8310-31573a28f17b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.423206 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca3ab65-ceb2-4d2f-8310-31573a28f17b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aca3ab65-ceb2-4d2f-8310-31573a28f17b" (UID: "aca3ab65-ceb2-4d2f-8310-31573a28f17b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.428032 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca3ab65-ceb2-4d2f-8310-31573a28f17b-config-data" (OuterVolumeSpecName: "config-data") pod "aca3ab65-ceb2-4d2f-8310-31573a28f17b" (UID: "aca3ab65-ceb2-4d2f-8310-31573a28f17b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.485905 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca3ab65-ceb2-4d2f-8310-31573a28f17b-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.485937 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca3ab65-ceb2-4d2f-8310-31573a28f17b-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.485954 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2kwk\" (UniqueName: \"kubernetes.io/projected/aca3ab65-ceb2-4d2f-8310-31573a28f17b-kube-api-access-s2kwk\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.485968 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca3ab65-ceb2-4d2f-8310-31573a28f17b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.818178 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t8fss" event={"ID":"aca3ab65-ceb2-4d2f-8310-31573a28f17b","Type":"ContainerDied","Data":"a07c1553825198cca7bd83cf1c91a5a8bd3c64d4e0248529c55f8a306c860bf1"} Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.818577 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a07c1553825198cca7bd83cf1c91a5a8bd3c64d4e0248529c55f8a306c860bf1" Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.818247 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t8fss" Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.935787 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 11:16:21 crc kubenswrapper[4972]: E1121 11:16:21.936498 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca3ab65-ceb2-4d2f-8310-31573a28f17b" containerName="nova-cell1-conductor-db-sync" Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.936537 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca3ab65-ceb2-4d2f-8310-31573a28f17b" containerName="nova-cell1-conductor-db-sync" Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.936980 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca3ab65-ceb2-4d2f-8310-31573a28f17b" containerName="nova-cell1-conductor-db-sync" Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.938166 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.944991 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 21 11:16:21 crc kubenswrapper[4972]: I1121 11:16:21.954571 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.108877 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xk89\" (UniqueName: \"kubernetes.io/projected/186387b6-541f-44f7-811a-2814418ff1cd-kube-api-access-8xk89\") pod \"nova-cell1-conductor-0\" (UID: \"186387b6-541f-44f7-811a-2814418ff1cd\") " pod="openstack/nova-cell1-conductor-0" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.108976 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/186387b6-541f-44f7-811a-2814418ff1cd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"186387b6-541f-44f7-811a-2814418ff1cd\") " pod="openstack/nova-cell1-conductor-0" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.109157 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/186387b6-541f-44f7-811a-2814418ff1cd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"186387b6-541f-44f7-811a-2814418ff1cd\") " pod="openstack/nova-cell1-conductor-0" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.211029 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/186387b6-541f-44f7-811a-2814418ff1cd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"186387b6-541f-44f7-811a-2814418ff1cd\") " pod="openstack/nova-cell1-conductor-0" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.211211 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/186387b6-541f-44f7-811a-2814418ff1cd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"186387b6-541f-44f7-811a-2814418ff1cd\") " pod="openstack/nova-cell1-conductor-0" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.211258 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xk89\" (UniqueName: \"kubernetes.io/projected/186387b6-541f-44f7-811a-2814418ff1cd-kube-api-access-8xk89\") pod \"nova-cell1-conductor-0\" (UID: \"186387b6-541f-44f7-811a-2814418ff1cd\") " pod="openstack/nova-cell1-conductor-0" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.217922 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/186387b6-541f-44f7-811a-2814418ff1cd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"186387b6-541f-44f7-811a-2814418ff1cd\") " pod="openstack/nova-cell1-conductor-0" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.218396 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/186387b6-541f-44f7-811a-2814418ff1cd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"186387b6-541f-44f7-811a-2814418ff1cd\") " pod="openstack/nova-cell1-conductor-0" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.233411 4972 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xk89\" (UniqueName: \"kubernetes.io/projected/186387b6-541f-44f7-811a-2814418ff1cd-kube-api-access-8xk89\") pod \"nova-cell1-conductor-0\" (UID: \"186387b6-541f-44f7-811a-2814418ff1cd\") " pod="openstack/nova-cell1-conductor-0" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.279048 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.343165 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fmtr6" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.413727 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adefeeef-e030-49b7-ade0-f4b728b3de7a-scripts\") pod \"adefeeef-e030-49b7-ade0-f4b728b3de7a\" (UID: \"adefeeef-e030-49b7-ade0-f4b728b3de7a\") " Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.413967 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2jd6\" (UniqueName: \"kubernetes.io/projected/adefeeef-e030-49b7-ade0-f4b728b3de7a-kube-api-access-z2jd6\") pod \"adefeeef-e030-49b7-ade0-f4b728b3de7a\" (UID: \"adefeeef-e030-49b7-ade0-f4b728b3de7a\") " Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.414119 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adefeeef-e030-49b7-ade0-f4b728b3de7a-config-data\") pod \"adefeeef-e030-49b7-ade0-f4b728b3de7a\" (UID: \"adefeeef-e030-49b7-ade0-f4b728b3de7a\") " Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.414274 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adefeeef-e030-49b7-ade0-f4b728b3de7a-combined-ca-bundle\") pod \"adefeeef-e030-49b7-ade0-f4b728b3de7a\" (UID: \"adefeeef-e030-49b7-ade0-f4b728b3de7a\") " Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.417509 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adefeeef-e030-49b7-ade0-f4b728b3de7a-kube-api-access-z2jd6" (OuterVolumeSpecName: "kube-api-access-z2jd6") pod "adefeeef-e030-49b7-ade0-f4b728b3de7a" (UID: "adefeeef-e030-49b7-ade0-f4b728b3de7a"). InnerVolumeSpecName "kube-api-access-z2jd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.419033 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adefeeef-e030-49b7-ade0-f4b728b3de7a-scripts" (OuterVolumeSpecName: "scripts") pod "adefeeef-e030-49b7-ade0-f4b728b3de7a" (UID: "adefeeef-e030-49b7-ade0-f4b728b3de7a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.447772 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adefeeef-e030-49b7-ade0-f4b728b3de7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "adefeeef-e030-49b7-ade0-f4b728b3de7a" (UID: "adefeeef-e030-49b7-ade0-f4b728b3de7a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.462100 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adefeeef-e030-49b7-ade0-f4b728b3de7a-config-data" (OuterVolumeSpecName: "config-data") pod "adefeeef-e030-49b7-ade0-f4b728b3de7a" (UID: "adefeeef-e030-49b7-ade0-f4b728b3de7a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.516149 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adefeeef-e030-49b7-ade0-f4b728b3de7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.516181 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adefeeef-e030-49b7-ade0-f4b728b3de7a-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.516190 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2jd6\" (UniqueName: \"kubernetes.io/projected/adefeeef-e030-49b7-ade0-f4b728b3de7a-kube-api-access-z2jd6\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.516200 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adefeeef-e030-49b7-ade0-f4b728b3de7a-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.715175 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 11:16:22 crc kubenswrapper[4972]: W1121 11:16:22.719385 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod186387b6_541f_44f7_811a_2814418ff1cd.slice/crio-0ef7a5394aae16670c272519e8e32aa1c0eb01aae5fd3d5c84a41a3afaed6674 WatchSource:0}: Error finding container 0ef7a5394aae16670c272519e8e32aa1c0eb01aae5fd3d5c84a41a3afaed6674: Status 404 returned error can't find the container with id 0ef7a5394aae16670c272519e8e32aa1c0eb01aae5fd3d5c84a41a3afaed6674 Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.760162 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:16:22 crc kubenswrapper[4972]: E1121 11:16:22.760523 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.837803 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fmtr6" event={"ID":"adefeeef-e030-49b7-ade0-f4b728b3de7a","Type":"ContainerDied","Data":"b4a13dc03ce21060946d532cb85ec6844b6f35c71c444d0c8515c344392e7d05"} Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.838349 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4a13dc03ce21060946d532cb85ec6844b6f35c71c444d0c8515c344392e7d05" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.837968 4972 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fmtr6" Nov 21 11:16:22 crc kubenswrapper[4972]: I1121 11:16:22.850955 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"186387b6-541f-44f7-811a-2814418ff1cd","Type":"ContainerStarted","Data":"0ef7a5394aae16670c272519e8e32aa1c0eb01aae5fd3d5c84a41a3afaed6674"} Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.118981 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.119268 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="54497d1e-ba95-42fc-9886-f8ab39a146dd" containerName="nova-api-log" containerID="cri-o://1fcdedeec7da5af76c766dcd1df813fea3761d44398455c576dd5220a38d4139" gracePeriod=30 Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.120265 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="54497d1e-ba95-42fc-9886-f8ab39a146dd" containerName="nova-api-api" containerID="cri-o://1c115292aac7efce85ebae1a524ddc439c27b836a927e20791af661542d79d6d" gracePeriod=30 Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.143706 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.143946 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6030d365-75c4-442a-93fa-4539c43df118" containerName="nova-scheduler-scheduler" containerID="cri-o://660644f399acaafd0ec6a4b9da8b234d477b7f66505d1e20b639ff2feefe850b" gracePeriod=30 Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.283702 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.284007 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="08f00219-11c0-4104-9050-91013673a2fe" containerName="nova-metadata-log" containerID="cri-o://959bd044b144c3afbaedde8cd47cbfe09b9ee16b215bcdb6bc6d4623e36e29a9" gracePeriod=30 Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.284465 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="08f00219-11c0-4104-9050-91013673a2fe" containerName="nova-metadata-metadata" containerID="cri-o://7edd489eac45d11f84d6b9ebd4e9eacf0440e3fbf9410791a1ab9eec66fef6f8" gracePeriod=30 Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.695606 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.815995 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.852796 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54497d1e-ba95-42fc-9886-f8ab39a146dd-combined-ca-bundle\") pod \"54497d1e-ba95-42fc-9886-f8ab39a146dd\" (UID: \"54497d1e-ba95-42fc-9886-f8ab39a146dd\") " Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.852879 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54497d1e-ba95-42fc-9886-f8ab39a146dd-logs\") pod \"54497d1e-ba95-42fc-9886-f8ab39a146dd\" (UID: \"54497d1e-ba95-42fc-9886-f8ab39a146dd\") " Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.852994 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54497d1e-ba95-42fc-9886-f8ab39a146dd-config-data\") pod \"54497d1e-ba95-42fc-9886-f8ab39a146dd\" (UID: \"54497d1e-ba95-42fc-9886-f8ab39a146dd\") " Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.853079 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njl5t\" (UniqueName: \"kubernetes.io/projected/54497d1e-ba95-42fc-9886-f8ab39a146dd-kube-api-access-njl5t\") pod \"54497d1e-ba95-42fc-9886-f8ab39a146dd\" (UID: \"54497d1e-ba95-42fc-9886-f8ab39a146dd\") " Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.854136 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54497d1e-ba95-42fc-9886-f8ab39a146dd-logs" (OuterVolumeSpecName: "logs") pod "54497d1e-ba95-42fc-9886-f8ab39a146dd" (UID: "54497d1e-ba95-42fc-9886-f8ab39a146dd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.866065 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54497d1e-ba95-42fc-9886-f8ab39a146dd-kube-api-access-njl5t" (OuterVolumeSpecName: "kube-api-access-njl5t") pod "54497d1e-ba95-42fc-9886-f8ab39a146dd" (UID: "54497d1e-ba95-42fc-9886-f8ab39a146dd"). InnerVolumeSpecName "kube-api-access-njl5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.868083 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"186387b6-541f-44f7-811a-2814418ff1cd","Type":"ContainerStarted","Data":"21792a508cb1b491f29b5ab312e2eae14070af009cc77027986e6a761ab82e33"} Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.868549 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.871177 4972 generic.go:334] "Generic (PLEG): container finished" podID="54497d1e-ba95-42fc-9886-f8ab39a146dd" containerID="1c115292aac7efce85ebae1a524ddc439c27b836a927e20791af661542d79d6d" exitCode=0 Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.871238 4972 generic.go:334] "Generic (PLEG): container finished" podID="54497d1e-ba95-42fc-9886-f8ab39a146dd" containerID="1fcdedeec7da5af76c766dcd1df813fea3761d44398455c576dd5220a38d4139" exitCode=143 Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.871315 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54497d1e-ba95-42fc-9886-f8ab39a146dd","Type":"ContainerDied","Data":"1c115292aac7efce85ebae1a524ddc439c27b836a927e20791af661542d79d6d"} Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.871351 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54497d1e-ba95-42fc-9886-f8ab39a146dd","Type":"ContainerDied","Data":"1fcdedeec7da5af76c766dcd1df813fea3761d44398455c576dd5220a38d4139"} Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.871380 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54497d1e-ba95-42fc-9886-f8ab39a146dd","Type":"ContainerDied","Data":"908489e41feab51de13fc8b2bf1f0769e7de5922b854f78a3f6f72da191e9487"} Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.871401 4972 scope.go:117] "RemoveContainer" containerID="1c115292aac7efce85ebae1a524ddc439c27b836a927e20791af661542d79d6d" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.871625 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.888482 4972 generic.go:334] "Generic (PLEG): container finished" podID="08f00219-11c0-4104-9050-91013673a2fe" containerID="7edd489eac45d11f84d6b9ebd4e9eacf0440e3fbf9410791a1ab9eec66fef6f8" exitCode=0 Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.888514 4972 generic.go:334] "Generic (PLEG): container finished" podID="08f00219-11c0-4104-9050-91013673a2fe" containerID="959bd044b144c3afbaedde8cd47cbfe09b9ee16b215bcdb6bc6d4623e36e29a9" exitCode=143 Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.888544 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"08f00219-11c0-4104-9050-91013673a2fe","Type":"ContainerDied","Data":"7edd489eac45d11f84d6b9ebd4e9eacf0440e3fbf9410791a1ab9eec66fef6f8"} Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.888574 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"08f00219-11c0-4104-9050-91013673a2fe","Type":"ContainerDied","Data":"959bd044b144c3afbaedde8cd47cbfe09b9ee16b215bcdb6bc6d4623e36e29a9"} Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.888586 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"08f00219-11c0-4104-9050-91013673a2fe","Type":"ContainerDied","Data":"97fce1efbef835b6c31464a5d9595b984aeb361fd6b58d60aa99c768f818f4d7"} Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.888637 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.890243 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54497d1e-ba95-42fc-9886-f8ab39a146dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54497d1e-ba95-42fc-9886-f8ab39a146dd" (UID: "54497d1e-ba95-42fc-9886-f8ab39a146dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.896424 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54497d1e-ba95-42fc-9886-f8ab39a146dd-config-data" (OuterVolumeSpecName: "config-data") pod "54497d1e-ba95-42fc-9886-f8ab39a146dd" (UID: "54497d1e-ba95-42fc-9886-f8ab39a146dd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.910430 4972 scope.go:117] "RemoveContainer" containerID="1fcdedeec7da5af76c766dcd1df813fea3761d44398455c576dd5220a38d4139" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.929820 4972 scope.go:117] "RemoveContainer" containerID="1c115292aac7efce85ebae1a524ddc439c27b836a927e20791af661542d79d6d" Nov 21 11:16:23 crc kubenswrapper[4972]: E1121 11:16:23.930423 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c115292aac7efce85ebae1a524ddc439c27b836a927e20791af661542d79d6d\": container with ID starting with 1c115292aac7efce85ebae1a524ddc439c27b836a927e20791af661542d79d6d not found: ID does not exist" containerID="1c115292aac7efce85ebae1a524ddc439c27b836a927e20791af661542d79d6d" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.930453 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c115292aac7efce85ebae1a524ddc439c27b836a927e20791af661542d79d6d"} err="failed to get container status \"1c115292aac7efce85ebae1a524ddc439c27b836a927e20791af661542d79d6d\": rpc error: code = NotFound desc = could not find container \"1c115292aac7efce85ebae1a524ddc439c27b836a927e20791af661542d79d6d\": container with ID starting with 1c115292aac7efce85ebae1a524ddc439c27b836a927e20791af661542d79d6d not found: ID does not exist" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.930475 4972 scope.go:117] "RemoveContainer" containerID="1fcdedeec7da5af76c766dcd1df813fea3761d44398455c576dd5220a38d4139" Nov 21 11:16:23 crc kubenswrapper[4972]: E1121 11:16:23.930865 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fcdedeec7da5af76c766dcd1df813fea3761d44398455c576dd5220a38d4139\": container with ID starting with 1fcdedeec7da5af76c766dcd1df813fea3761d44398455c576dd5220a38d4139 not found: ID does not exist" containerID="1fcdedeec7da5af76c766dcd1df813fea3761d44398455c576dd5220a38d4139" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.930932 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fcdedeec7da5af76c766dcd1df813fea3761d44398455c576dd5220a38d4139"} err="failed to get container status \"1fcdedeec7da5af76c766dcd1df813fea3761d44398455c576dd5220a38d4139\": rpc error: code = NotFound desc = could not find container \"1fcdedeec7da5af76c766dcd1df813fea3761d44398455c576dd5220a38d4139\": container with ID starting with 1fcdedeec7da5af76c766dcd1df813fea3761d44398455c576dd5220a38d4139 not found: ID does not exist" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.930949 4972 scope.go:117] "RemoveContainer" containerID="1c115292aac7efce85ebae1a524ddc439c27b836a927e20791af661542d79d6d" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.931213 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c115292aac7efce85ebae1a524ddc439c27b836a927e20791af661542d79d6d"} err="failed to get container status \"1c115292aac7efce85ebae1a524ddc439c27b836a927e20791af661542d79d6d\": rpc error: code = NotFound desc = could not find container \"1c115292aac7efce85ebae1a524ddc439c27b836a927e20791af661542d79d6d\": container with ID starting with 1c115292aac7efce85ebae1a524ddc439c27b836a927e20791af661542d79d6d not found: ID does not exist" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.931234 4972 
scope.go:117] "RemoveContainer" containerID="1fcdedeec7da5af76c766dcd1df813fea3761d44398455c576dd5220a38d4139" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.931580 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fcdedeec7da5af76c766dcd1df813fea3761d44398455c576dd5220a38d4139"} err="failed to get container status \"1fcdedeec7da5af76c766dcd1df813fea3761d44398455c576dd5220a38d4139\": rpc error: code = NotFound desc = could not find container \"1fcdedeec7da5af76c766dcd1df813fea3761d44398455c576dd5220a38d4139\": container with ID starting with 1fcdedeec7da5af76c766dcd1df813fea3761d44398455c576dd5220a38d4139 not found: ID does not exist" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.931596 4972 scope.go:117] "RemoveContainer" containerID="7edd489eac45d11f84d6b9ebd4e9eacf0440e3fbf9410791a1ab9eec66fef6f8" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.948562 4972 scope.go:117] "RemoveContainer" containerID="959bd044b144c3afbaedde8cd47cbfe09b9ee16b215bcdb6bc6d4623e36e29a9" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.954245 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f00219-11c0-4104-9050-91013673a2fe-combined-ca-bundle\") pod \"08f00219-11c0-4104-9050-91013673a2fe\" (UID: \"08f00219-11c0-4104-9050-91013673a2fe\") " Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.954474 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f00219-11c0-4104-9050-91013673a2fe-config-data\") pod \"08f00219-11c0-4104-9050-91013673a2fe\" (UID: \"08f00219-11c0-4104-9050-91013673a2fe\") " Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.954570 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08f00219-11c0-4104-9050-91013673a2fe-logs\") pod \"08f00219-11c0-4104-9050-91013673a2fe\" (UID: \"08f00219-11c0-4104-9050-91013673a2fe\") " Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.954645 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8nw6\" (UniqueName: \"kubernetes.io/projected/08f00219-11c0-4104-9050-91013673a2fe-kube-api-access-j8nw6\") pod \"08f00219-11c0-4104-9050-91013673a2fe\" (UID: \"08f00219-11c0-4104-9050-91013673a2fe\") " Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.955474 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08f00219-11c0-4104-9050-91013673a2fe-logs" (OuterVolumeSpecName: "logs") pod "08f00219-11c0-4104-9050-91013673a2fe" (UID: "08f00219-11c0-4104-9050-91013673a2fe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.956938 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54497d1e-ba95-42fc-9886-f8ab39a146dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.958252 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08f00219-11c0-4104-9050-91013673a2fe-kube-api-access-j8nw6" (OuterVolumeSpecName: "kube-api-access-j8nw6") pod "08f00219-11c0-4104-9050-91013673a2fe" (UID: "08f00219-11c0-4104-9050-91013673a2fe"). 
InnerVolumeSpecName "kube-api-access-j8nw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.959338 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54497d1e-ba95-42fc-9886-f8ab39a146dd-logs\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.959368 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08f00219-11c0-4104-9050-91013673a2fe-logs\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.959384 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54497d1e-ba95-42fc-9886-f8ab39a146dd-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.959402 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njl5t\" (UniqueName: \"kubernetes.io/projected/54497d1e-ba95-42fc-9886-f8ab39a146dd-kube-api-access-njl5t\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.969184 4972 scope.go:117] "RemoveContainer" containerID="7edd489eac45d11f84d6b9ebd4e9eacf0440e3fbf9410791a1ab9eec66fef6f8" Nov 21 11:16:23 crc kubenswrapper[4972]: E1121 11:16:23.970004 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7edd489eac45d11f84d6b9ebd4e9eacf0440e3fbf9410791a1ab9eec66fef6f8\": container with ID starting with 7edd489eac45d11f84d6b9ebd4e9eacf0440e3fbf9410791a1ab9eec66fef6f8 not found: ID does not exist" containerID="7edd489eac45d11f84d6b9ebd4e9eacf0440e3fbf9410791a1ab9eec66fef6f8" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.970056 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7edd489eac45d11f84d6b9ebd4e9eacf0440e3fbf9410791a1ab9eec66fef6f8"} err="failed to get container status \"7edd489eac45d11f84d6b9ebd4e9eacf0440e3fbf9410791a1ab9eec66fef6f8\": rpc error: code = NotFound desc = could not find container \"7edd489eac45d11f84d6b9ebd4e9eacf0440e3fbf9410791a1ab9eec66fef6f8\": container with ID starting with 7edd489eac45d11f84d6b9ebd4e9eacf0440e3fbf9410791a1ab9eec66fef6f8 not found: ID does not exist" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.970083 4972 scope.go:117] "RemoveContainer" containerID="959bd044b144c3afbaedde8cd47cbfe09b9ee16b215bcdb6bc6d4623e36e29a9" Nov 21 11:16:23 crc kubenswrapper[4972]: E1121 11:16:23.970480 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"959bd044b144c3afbaedde8cd47cbfe09b9ee16b215bcdb6bc6d4623e36e29a9\": container with ID starting with 959bd044b144c3afbaedde8cd47cbfe09b9ee16b215bcdb6bc6d4623e36e29a9 not found: ID does not exist" containerID="959bd044b144c3afbaedde8cd47cbfe09b9ee16b215bcdb6bc6d4623e36e29a9" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.970539 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"959bd044b144c3afbaedde8cd47cbfe09b9ee16b215bcdb6bc6d4623e36e29a9"} err="failed to get container status \"959bd044b144c3afbaedde8cd47cbfe09b9ee16b215bcdb6bc6d4623e36e29a9\": rpc error: code = NotFound desc = could not find container \"959bd044b144c3afbaedde8cd47cbfe09b9ee16b215bcdb6bc6d4623e36e29a9\": container with ID starting with 
959bd044b144c3afbaedde8cd47cbfe09b9ee16b215bcdb6bc6d4623e36e29a9 not found: ID does not exist" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.970569 4972 scope.go:117] "RemoveContainer" containerID="7edd489eac45d11f84d6b9ebd4e9eacf0440e3fbf9410791a1ab9eec66fef6f8" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.971549 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7edd489eac45d11f84d6b9ebd4e9eacf0440e3fbf9410791a1ab9eec66fef6f8"} err="failed to get container status \"7edd489eac45d11f84d6b9ebd4e9eacf0440e3fbf9410791a1ab9eec66fef6f8\": rpc error: code = NotFound desc = could not find container \"7edd489eac45d11f84d6b9ebd4e9eacf0440e3fbf9410791a1ab9eec66fef6f8\": container with ID starting with 7edd489eac45d11f84d6b9ebd4e9eacf0440e3fbf9410791a1ab9eec66fef6f8 not found: ID does not exist" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.971666 4972 scope.go:117] "RemoveContainer" containerID="959bd044b144c3afbaedde8cd47cbfe09b9ee16b215bcdb6bc6d4623e36e29a9" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.972039 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"959bd044b144c3afbaedde8cd47cbfe09b9ee16b215bcdb6bc6d4623e36e29a9"} err="failed to get container status \"959bd044b144c3afbaedde8cd47cbfe09b9ee16b215bcdb6bc6d4623e36e29a9\": rpc error: code = NotFound desc = could not find container \"959bd044b144c3afbaedde8cd47cbfe09b9ee16b215bcdb6bc6d4623e36e29a9\": container with ID starting with 959bd044b144c3afbaedde8cd47cbfe09b9ee16b215bcdb6bc6d4623e36e29a9 not found: ID does not exist" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.985290 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08f00219-11c0-4104-9050-91013673a2fe-config-data" (OuterVolumeSpecName: "config-data") pod "08f00219-11c0-4104-9050-91013673a2fe" (UID: "08f00219-11c0-4104-9050-91013673a2fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:23 crc kubenswrapper[4972]: I1121 11:16:23.986594 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08f00219-11c0-4104-9050-91013673a2fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08f00219-11c0-4104-9050-91013673a2fe" (UID: "08f00219-11c0-4104-9050-91013673a2fe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.064787 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f00219-11c0-4104-9050-91013673a2fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.065071 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f00219-11c0-4104-9050-91013673a2fe-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.065091 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8nw6\" (UniqueName: \"kubernetes.io/projected/08f00219-11c0-4104-9050-91013673a2fe-kube-api-access-j8nw6\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.253542 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.253513947 podStartE2EDuration="3.253513947s" podCreationTimestamp="2025-11-21 11:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:16:23.886509639 +0000 UTC m=+5728.995652127" watchObservedRunningTime="2025-11-21 11:16:24.253513947 +0000 UTC m=+5729.362656485" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.265068 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.277350 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.287073 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.299906 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.310012 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 21 11:16:24 crc kubenswrapper[4972]: E1121 11:16:24.310665 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08f00219-11c0-4104-9050-91013673a2fe" containerName="nova-metadata-log" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.310699 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="08f00219-11c0-4104-9050-91013673a2fe" containerName="nova-metadata-log" Nov 21 11:16:24 crc kubenswrapper[4972]: E1121 11:16:24.310755 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54497d1e-ba95-42fc-9886-f8ab39a146dd" containerName="nova-api-log" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.310769 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="54497d1e-ba95-42fc-9886-f8ab39a146dd" containerName="nova-api-log" Nov 21 11:16:24 crc kubenswrapper[4972]: E1121 11:16:24.310789 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54497d1e-ba95-42fc-9886-f8ab39a146dd" containerName="nova-api-api" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.310804 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="54497d1e-ba95-42fc-9886-f8ab39a146dd" containerName="nova-api-api" Nov 21 11:16:24 crc kubenswrapper[4972]: E1121 11:16:24.310857 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adefeeef-e030-49b7-ade0-f4b728b3de7a" containerName="nova-manage" 
Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.310871 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="adefeeef-e030-49b7-ade0-f4b728b3de7a" containerName="nova-manage" Nov 21 11:16:24 crc kubenswrapper[4972]: E1121 11:16:24.310913 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08f00219-11c0-4104-9050-91013673a2fe" containerName="nova-metadata-metadata" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.310927 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="08f00219-11c0-4104-9050-91013673a2fe" containerName="nova-metadata-metadata" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.311250 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="54497d1e-ba95-42fc-9886-f8ab39a146dd" containerName="nova-api-log" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.311280 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="08f00219-11c0-4104-9050-91013673a2fe" containerName="nova-metadata-metadata" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.311313 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="54497d1e-ba95-42fc-9886-f8ab39a146dd" containerName="nova-api-api" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.311335 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="08f00219-11c0-4104-9050-91013673a2fe" containerName="nova-metadata-log" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.311364 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="adefeeef-e030-49b7-ade0-f4b728b3de7a" containerName="nova-manage" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.313157 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.317388 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.322356 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.335004 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.337096 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.341705 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.346350 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.472935 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48grl\" (UniqueName: \"kubernetes.io/projected/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-kube-api-access-48grl\") pod \"nova-api-0\" (UID: \"ff8a8760-e4e8-4d6d-bde8-8e391be9002d\") " pod="openstack/nova-api-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.473020 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b18f2524-b1bd-40af-b06f-0475cc6a47e4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b18f2524-b1bd-40af-b06f-0475cc6a47e4\") " pod="openstack/nova-metadata-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.473069 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b18f2524-b1bd-40af-b06f-0475cc6a47e4-config-data\") pod \"nova-metadata-0\" (UID: \"b18f2524-b1bd-40af-b06f-0475cc6a47e4\") " pod="openstack/nova-metadata-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.473159 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-logs\") pod \"nova-api-0\" (UID: \"ff8a8760-e4e8-4d6d-bde8-8e391be9002d\") " pod="openstack/nova-api-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.473273 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff8a8760-e4e8-4d6d-bde8-8e391be9002d\") " pod="openstack/nova-api-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.473332 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-config-data\") pod \"nova-api-0\" (UID: \"ff8a8760-e4e8-4d6d-bde8-8e391be9002d\") " pod="openstack/nova-api-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.473366 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b18f2524-b1bd-40af-b06f-0475cc6a47e4-logs\") pod \"nova-metadata-0\" (UID: \"b18f2524-b1bd-40af-b06f-0475cc6a47e4\") " pod="openstack/nova-metadata-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.473407 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4g4x\" (UniqueName: \"kubernetes.io/projected/b18f2524-b1bd-40af-b06f-0475cc6a47e4-kube-api-access-f4g4x\") pod \"nova-metadata-0\" (UID: \"b18f2524-b1bd-40af-b06f-0475cc6a47e4\") " pod="openstack/nova-metadata-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.574876 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff8a8760-e4e8-4d6d-bde8-8e391be9002d\") " pod="openstack/nova-api-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.575001 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-config-data\") pod \"nova-api-0\" (UID: \"ff8a8760-e4e8-4d6d-bde8-8e391be9002d\") " pod="openstack/nova-api-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.575040 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b18f2524-b1bd-40af-b06f-0475cc6a47e4-logs\") pod \"nova-metadata-0\" (UID: \"b18f2524-b1bd-40af-b06f-0475cc6a47e4\") " pod="openstack/nova-metadata-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.575091 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4g4x\" (UniqueName: \"kubernetes.io/projected/b18f2524-b1bd-40af-b06f-0475cc6a47e4-kube-api-access-f4g4x\") pod \"nova-metadata-0\" (UID: \"b18f2524-b1bd-40af-b06f-0475cc6a47e4\") " pod="openstack/nova-metadata-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.575208 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48grl\" (UniqueName: \"kubernetes.io/projected/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-kube-api-access-48grl\") pod \"nova-api-0\" (UID: \"ff8a8760-e4e8-4d6d-bde8-8e391be9002d\") " pod="openstack/nova-api-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.575259 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b18f2524-b1bd-40af-b06f-0475cc6a47e4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b18f2524-b1bd-40af-b06f-0475cc6a47e4\") " pod="openstack/nova-metadata-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.575305 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b18f2524-b1bd-40af-b06f-0475cc6a47e4-config-data\") pod \"nova-metadata-0\" (UID: \"b18f2524-b1bd-40af-b06f-0475cc6a47e4\") " pod="openstack/nova-metadata-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.575364 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-logs\") pod \"nova-api-0\" (UID: \"ff8a8760-e4e8-4d6d-bde8-8e391be9002d\") " pod="openstack/nova-api-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.575795 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b18f2524-b1bd-40af-b06f-0475cc6a47e4-logs\") pod \"nova-metadata-0\" (UID: \"b18f2524-b1bd-40af-b06f-0475cc6a47e4\") " pod="openstack/nova-metadata-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.576070 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-logs\") pod \"nova-api-0\" (UID: \"ff8a8760-e4e8-4d6d-bde8-8e391be9002d\") " pod="openstack/nova-api-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.580657 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff8a8760-e4e8-4d6d-bde8-8e391be9002d\") " pod="openstack/nova-api-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.581466 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b18f2524-b1bd-40af-b06f-0475cc6a47e4-config-data\") pod \"nova-metadata-0\" (UID: \"b18f2524-b1bd-40af-b06f-0475cc6a47e4\") " pod="openstack/nova-metadata-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.581526 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-config-data\") pod \"nova-api-0\" (UID: \"ff8a8760-e4e8-4d6d-bde8-8e391be9002d\") " pod="openstack/nova-api-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.591405 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4g4x\" (UniqueName: \"kubernetes.io/projected/b18f2524-b1bd-40af-b06f-0475cc6a47e4-kube-api-access-f4g4x\") pod \"nova-metadata-0\" (UID: \"b18f2524-b1bd-40af-b06f-0475cc6a47e4\") " pod="openstack/nova-metadata-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.593475 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48grl\" (UniqueName: \"kubernetes.io/projected/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-kube-api-access-48grl\") pod \"nova-api-0\" (UID: \"ff8a8760-e4e8-4d6d-bde8-8e391be9002d\") " pod="openstack/nova-api-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.600032 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b18f2524-b1bd-40af-b06f-0475cc6a47e4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b18f2524-b1bd-40af-b06f-0475cc6a47e4\") " pod="openstack/nova-metadata-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.648008 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.659326 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 11:16:24 crc kubenswrapper[4972]: I1121 11:16:24.975515 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.206021 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.222518 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.280096 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 11:16:25 crc kubenswrapper[4972]: W1121 11:16:25.283422 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb18f2524_b1bd_40af_b06f_0475cc6a47e4.slice/crio-8d4555eb3401c5f2eb00261c7f2a39ed7195ac6a30215a6dbc64e84738e5197a WatchSource:0}: Error finding container 8d4555eb3401c5f2eb00261c7f2a39ed7195ac6a30215a6dbc64e84738e5197a: Status 404 returned error can't find the container with id 8d4555eb3401c5f2eb00261c7f2a39ed7195ac6a30215a6dbc64e84738e5197a Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.390981 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.452498 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f8c4cb9bc-p66g8"] Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.452763 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" podUID="893f2fb2-c476-44ae-a954-6d7463ccf560" containerName="dnsmasq-dns" containerID="cri-o://97c172179fe31c9f6482faeb8d8f38b032ef13c14f894bda6505d29c482dcf14" gracePeriod=10 Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.786317 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08f00219-11c0-4104-9050-91013673a2fe" path="/var/lib/kubelet/pods/08f00219-11c0-4104-9050-91013673a2fe/volumes" Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.787187 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54497d1e-ba95-42fc-9886-f8ab39a146dd" path="/var/lib/kubelet/pods/54497d1e-ba95-42fc-9886-f8ab39a146dd/volumes" Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.891919 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.918967 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b18f2524-b1bd-40af-b06f-0475cc6a47e4","Type":"ContainerStarted","Data":"5aa43b4dc3a74122047bb19b8b11a8041972557b62159b7396ec35e55f292d47"} Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.919017 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b18f2524-b1bd-40af-b06f-0475cc6a47e4","Type":"ContainerStarted","Data":"e8c2cce835be8fdc15fc66306ec37e75d7ed7f75ea056aa7d83489cc98edeb9a"} Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.919026 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b18f2524-b1bd-40af-b06f-0475cc6a47e4","Type":"ContainerStarted","Data":"8d4555eb3401c5f2eb00261c7f2a39ed7195ac6a30215a6dbc64e84738e5197a"} Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.920824 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff8a8760-e4e8-4d6d-bde8-8e391be9002d","Type":"ContainerStarted","Data":"717a8fe9f80539afef7de85492134d7cfaaa62c6692d49a3732a5ac1d6f6b05b"} Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.921257 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff8a8760-e4e8-4d6d-bde8-8e391be9002d","Type":"ContainerStarted","Data":"426606877e6667cc0ba1d5f374802b6f941fbb8659a599fc436ee0889e3fddb2"} Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.921278 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff8a8760-e4e8-4d6d-bde8-8e391be9002d","Type":"ContainerStarted","Data":"1bd0706a94ad434fee74c6aa1d35db368034de2410ad21b054f47d9524ce000f"} Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.922959 4972 generic.go:334] "Generic (PLEG): container finished" podID="893f2fb2-c476-44ae-a954-6d7463ccf560" containerID="97c172179fe31c9f6482faeb8d8f38b032ef13c14f894bda6505d29c482dcf14" exitCode=0 Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.923018 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.923048 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" event={"ID":"893f2fb2-c476-44ae-a954-6d7463ccf560","Type":"ContainerDied","Data":"97c172179fe31c9f6482faeb8d8f38b032ef13c14f894bda6505d29c482dcf14"} Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.923127 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f8c4cb9bc-p66g8" event={"ID":"893f2fb2-c476-44ae-a954-6d7463ccf560","Type":"ContainerDied","Data":"e194c48c1aac2d9b588df3e5b7799dd5982bf75d65cae2c8d4e44bd0803d88c3"} Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.923146 4972 scope.go:117] "RemoveContainer" containerID="97c172179fe31c9f6482faeb8d8f38b032ef13c14f894bda6505d29c482dcf14" Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.931336 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.958628 4972 scope.go:117] "RemoveContainer" containerID="364c58d7b49f66d45c9ecfa017d911d3ab693a0eb666fc8fc6e0e3e964032a77" Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.977742 4972 scope.go:117] "RemoveContainer" containerID="97c172179fe31c9f6482faeb8d8f38b032ef13c14f894bda6505d29c482dcf14" Nov 21 11:16:25 crc kubenswrapper[4972]: E1121 11:16:25.978135 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97c172179fe31c9f6482faeb8d8f38b032ef13c14f894bda6505d29c482dcf14\": container with ID starting with 97c172179fe31c9f6482faeb8d8f38b032ef13c14f894bda6505d29c482dcf14 not found: ID does not exist" containerID="97c172179fe31c9f6482faeb8d8f38b032ef13c14f894bda6505d29c482dcf14" Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.978175 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c172179fe31c9f6482faeb8d8f38b032ef13c14f894bda6505d29c482dcf14"} err="failed to get container status \"97c172179fe31c9f6482faeb8d8f38b032ef13c14f894bda6505d29c482dcf14\": rpc error: code = NotFound desc = could not find container \"97c172179fe31c9f6482faeb8d8f38b032ef13c14f894bda6505d29c482dcf14\": container with ID starting with 97c172179fe31c9f6482faeb8d8f38b032ef13c14f894bda6505d29c482dcf14 not found: ID does not exist" Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.978199 4972 scope.go:117] "RemoveContainer" containerID="364c58d7b49f66d45c9ecfa017d911d3ab693a0eb666fc8fc6e0e3e964032a77" Nov 21 11:16:25 crc kubenswrapper[4972]: E1121 11:16:25.978511 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"364c58d7b49f66d45c9ecfa017d911d3ab693a0eb666fc8fc6e0e3e964032a77\": container with ID starting with 364c58d7b49f66d45c9ecfa017d911d3ab693a0eb666fc8fc6e0e3e964032a77 not found: ID does not exist" containerID="364c58d7b49f66d45c9ecfa017d911d3ab693a0eb666fc8fc6e0e3e964032a77" Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.978525 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"364c58d7b49f66d45c9ecfa017d911d3ab693a0eb666fc8fc6e0e3e964032a77"} err="failed to get container status \"364c58d7b49f66d45c9ecfa017d911d3ab693a0eb666fc8fc6e0e3e964032a77\": rpc error: code = NotFound desc = could not find container 
\"364c58d7b49f66d45c9ecfa017d911d3ab693a0eb666fc8fc6e0e3e964032a77\": container with ID starting with 364c58d7b49f66d45c9ecfa017d911d3ab693a0eb666fc8fc6e0e3e964032a77 not found: ID does not exist" Nov 21 11:16:25 crc kubenswrapper[4972]: I1121 11:16:25.991549 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.991533481 podStartE2EDuration="1.991533481s" podCreationTimestamp="2025-11-21 11:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:16:25.960356205 +0000 UTC m=+5731.069498713" watchObservedRunningTime="2025-11-21 11:16:25.991533481 +0000 UTC m=+5731.100675979" Nov 21 11:16:26 crc kubenswrapper[4972]: I1121 11:16:26.017165 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwmw4\" (UniqueName: \"kubernetes.io/projected/893f2fb2-c476-44ae-a954-6d7463ccf560-kube-api-access-xwmw4\") pod \"893f2fb2-c476-44ae-a954-6d7463ccf560\" (UID: \"893f2fb2-c476-44ae-a954-6d7463ccf560\") " Nov 21 11:16:26 crc kubenswrapper[4972]: I1121 11:16:26.017247 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-config\") pod \"893f2fb2-c476-44ae-a954-6d7463ccf560\" (UID: \"893f2fb2-c476-44ae-a954-6d7463ccf560\") " Nov 21 11:16:26 crc kubenswrapper[4972]: I1121 11:16:26.017290 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-ovsdbserver-nb\") pod \"893f2fb2-c476-44ae-a954-6d7463ccf560\" (UID: \"893f2fb2-c476-44ae-a954-6d7463ccf560\") " Nov 21 11:16:26 crc kubenswrapper[4972]: I1121 11:16:26.017323 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-dns-svc\") pod \"893f2fb2-c476-44ae-a954-6d7463ccf560\" (UID: \"893f2fb2-c476-44ae-a954-6d7463ccf560\") " Nov 21 11:16:26 crc kubenswrapper[4972]: I1121 11:16:26.017365 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-ovsdbserver-sb\") pod \"893f2fb2-c476-44ae-a954-6d7463ccf560\" (UID: \"893f2fb2-c476-44ae-a954-6d7463ccf560\") " Nov 21 11:16:26 crc kubenswrapper[4972]: I1121 11:16:26.022733 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/893f2fb2-c476-44ae-a954-6d7463ccf560-kube-api-access-xwmw4" (OuterVolumeSpecName: "kube-api-access-xwmw4") pod "893f2fb2-c476-44ae-a954-6d7463ccf560" (UID: "893f2fb2-c476-44ae-a954-6d7463ccf560"). InnerVolumeSpecName "kube-api-access-xwmw4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:16:26 crc kubenswrapper[4972]: I1121 11:16:26.027392 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.02737218 podStartE2EDuration="2.02737218s" podCreationTimestamp="2025-11-21 11:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:16:26.022956673 +0000 UTC m=+5731.132099201" watchObservedRunningTime="2025-11-21 11:16:26.02737218 +0000 UTC m=+5731.136514678" Nov 21 11:16:26 crc kubenswrapper[4972]: I1121 11:16:26.070749 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-config" (OuterVolumeSpecName: "config") pod "893f2fb2-c476-44ae-a954-6d7463ccf560" (UID: "893f2fb2-c476-44ae-a954-6d7463ccf560"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:16:26 crc kubenswrapper[4972]: I1121 11:16:26.074393 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "893f2fb2-c476-44ae-a954-6d7463ccf560" (UID: "893f2fb2-c476-44ae-a954-6d7463ccf560"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:16:26 crc kubenswrapper[4972]: I1121 11:16:26.074452 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "893f2fb2-c476-44ae-a954-6d7463ccf560" (UID: "893f2fb2-c476-44ae-a954-6d7463ccf560"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:16:26 crc kubenswrapper[4972]: I1121 11:16:26.078719 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "893f2fb2-c476-44ae-a954-6d7463ccf560" (UID: "893f2fb2-c476-44ae-a954-6d7463ccf560"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:16:26 crc kubenswrapper[4972]: I1121 11:16:26.119163 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwmw4\" (UniqueName: \"kubernetes.io/projected/893f2fb2-c476-44ae-a954-6d7463ccf560-kube-api-access-xwmw4\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:26 crc kubenswrapper[4972]: I1121 11:16:26.119377 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-config\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:26 crc kubenswrapper[4972]: I1121 11:16:26.119433 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:26 crc kubenswrapper[4972]: I1121 11:16:26.119506 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:26 crc kubenswrapper[4972]: I1121 11:16:26.119561 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/893f2fb2-c476-44ae-a954-6d7463ccf560-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:26 crc kubenswrapper[4972]: I1121 11:16:26.264287 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f8c4cb9bc-p66g8"] Nov 21 11:16:26 crc kubenswrapper[4972]: I1121 11:16:26.281502 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f8c4cb9bc-p66g8"] Nov 21 11:16:27 crc kubenswrapper[4972]: I1121 11:16:27.384561 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 21 11:16:27 crc kubenswrapper[4972]: I1121 11:16:27.794708 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="893f2fb2-c476-44ae-a954-6d7463ccf560" path="/var/lib/kubelet/pods/893f2fb2-c476-44ae-a954-6d7463ccf560/volumes" Nov 21 11:16:27 crc kubenswrapper[4972]: I1121 11:16:27.813996 4972 scope.go:117] "RemoveContainer" containerID="91f8d8bf62a1c54e267466696c3fd9a6378ff36ee1e4a0e8e53c5fe3bcc3b4ad" Nov 21 11:16:27 crc kubenswrapper[4972]: I1121 11:16:27.962889 4972 generic.go:334] "Generic (PLEG): container finished" podID="6030d365-75c4-442a-93fa-4539c43df118" containerID="660644f399acaafd0ec6a4b9da8b234d477b7f66505d1e20b639ff2feefe850b" exitCode=0 Nov 21 11:16:27 crc kubenswrapper[4972]: I1121 11:16:27.962988 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6030d365-75c4-442a-93fa-4539c43df118","Type":"ContainerDied","Data":"660644f399acaafd0ec6a4b9da8b234d477b7f66505d1e20b639ff2feefe850b"} Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.000732 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-qrs7s"] Nov 21 11:16:28 crc kubenswrapper[4972]: E1121 11:16:28.001204 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893f2fb2-c476-44ae-a954-6d7463ccf560" containerName="dnsmasq-dns" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.001230 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="893f2fb2-c476-44ae-a954-6d7463ccf560" containerName="dnsmasq-dns" Nov 21 11:16:28 crc kubenswrapper[4972]: E1121 11:16:28.001242 4972 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="893f2fb2-c476-44ae-a954-6d7463ccf560" containerName="init" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.001251 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="893f2fb2-c476-44ae-a954-6d7463ccf560" containerName="init" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.001535 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="893f2fb2-c476-44ae-a954-6d7463ccf560" containerName="dnsmasq-dns" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.002319 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qrs7s" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.004691 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.005070 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.011738 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qrs7s"] Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.157395 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgltb\" (UniqueName: \"kubernetes.io/projected/377e513a-7c18-4d50-a092-5942a3bd679a-kube-api-access-vgltb\") pod \"nova-cell1-cell-mapping-qrs7s\" (UID: \"377e513a-7c18-4d50-a092-5942a3bd679a\") " pod="openstack/nova-cell1-cell-mapping-qrs7s" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.157473 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/377e513a-7c18-4d50-a092-5942a3bd679a-config-data\") pod \"nova-cell1-cell-mapping-qrs7s\" (UID: \"377e513a-7c18-4d50-a092-5942a3bd679a\") " pod="openstack/nova-cell1-cell-mapping-qrs7s" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.157492 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/377e513a-7c18-4d50-a092-5942a3bd679a-scripts\") pod \"nova-cell1-cell-mapping-qrs7s\" (UID: \"377e513a-7c18-4d50-a092-5942a3bd679a\") " pod="openstack/nova-cell1-cell-mapping-qrs7s" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.157514 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/377e513a-7c18-4d50-a092-5942a3bd679a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qrs7s\" (UID: \"377e513a-7c18-4d50-a092-5942a3bd679a\") " pod="openstack/nova-cell1-cell-mapping-qrs7s" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.223435 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.259218 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgltb\" (UniqueName: \"kubernetes.io/projected/377e513a-7c18-4d50-a092-5942a3bd679a-kube-api-access-vgltb\") pod \"nova-cell1-cell-mapping-qrs7s\" (UID: \"377e513a-7c18-4d50-a092-5942a3bd679a\") " pod="openstack/nova-cell1-cell-mapping-qrs7s" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.259332 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/377e513a-7c18-4d50-a092-5942a3bd679a-config-data\") pod \"nova-cell1-cell-mapping-qrs7s\" (UID: \"377e513a-7c18-4d50-a092-5942a3bd679a\") " pod="openstack/nova-cell1-cell-mapping-qrs7s" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.259377 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/377e513a-7c18-4d50-a092-5942a3bd679a-scripts\") pod \"nova-cell1-cell-mapping-qrs7s\" (UID: \"377e513a-7c18-4d50-a092-5942a3bd679a\") " pod="openstack/nova-cell1-cell-mapping-qrs7s" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.259445 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/377e513a-7c18-4d50-a092-5942a3bd679a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qrs7s\" (UID: \"377e513a-7c18-4d50-a092-5942a3bd679a\") " pod="openstack/nova-cell1-cell-mapping-qrs7s" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.265510 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/377e513a-7c18-4d50-a092-5942a3bd679a-scripts\") pod \"nova-cell1-cell-mapping-qrs7s\" (UID: \"377e513a-7c18-4d50-a092-5942a3bd679a\") " pod="openstack/nova-cell1-cell-mapping-qrs7s" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.277938 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/377e513a-7c18-4d50-a092-5942a3bd679a-config-data\") pod \"nova-cell1-cell-mapping-qrs7s\" (UID: \"377e513a-7c18-4d50-a092-5942a3bd679a\") " pod="openstack/nova-cell1-cell-mapping-qrs7s" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.278552 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/377e513a-7c18-4d50-a092-5942a3bd679a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qrs7s\" (UID: \"377e513a-7c18-4d50-a092-5942a3bd679a\") " pod="openstack/nova-cell1-cell-mapping-qrs7s" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.282087 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgltb\" (UniqueName: \"kubernetes.io/projected/377e513a-7c18-4d50-a092-5942a3bd679a-kube-api-access-vgltb\") pod \"nova-cell1-cell-mapping-qrs7s\" (UID: \"377e513a-7c18-4d50-a092-5942a3bd679a\") " pod="openstack/nova-cell1-cell-mapping-qrs7s" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.324798 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qrs7s" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.361464 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lj8f\" (UniqueName: \"kubernetes.io/projected/6030d365-75c4-442a-93fa-4539c43df118-kube-api-access-6lj8f\") pod \"6030d365-75c4-442a-93fa-4539c43df118\" (UID: \"6030d365-75c4-442a-93fa-4539c43df118\") " Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.361639 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6030d365-75c4-442a-93fa-4539c43df118-combined-ca-bundle\") pod \"6030d365-75c4-442a-93fa-4539c43df118\" (UID: \"6030d365-75c4-442a-93fa-4539c43df118\") " Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.361714 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6030d365-75c4-442a-93fa-4539c43df118-config-data\") pod \"6030d365-75c4-442a-93fa-4539c43df118\" (UID: \"6030d365-75c4-442a-93fa-4539c43df118\") " Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.365914 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6030d365-75c4-442a-93fa-4539c43df118-kube-api-access-6lj8f" (OuterVolumeSpecName: "kube-api-access-6lj8f") pod "6030d365-75c4-442a-93fa-4539c43df118" (UID: "6030d365-75c4-442a-93fa-4539c43df118"). InnerVolumeSpecName "kube-api-access-6lj8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.385506 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6030d365-75c4-442a-93fa-4539c43df118-config-data" (OuterVolumeSpecName: "config-data") pod "6030d365-75c4-442a-93fa-4539c43df118" (UID: "6030d365-75c4-442a-93fa-4539c43df118"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.400762 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6030d365-75c4-442a-93fa-4539c43df118-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6030d365-75c4-442a-93fa-4539c43df118" (UID: "6030d365-75c4-442a-93fa-4539c43df118"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.465485 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6030d365-75c4-442a-93fa-4539c43df118-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.465513 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6030d365-75c4-442a-93fa-4539c43df118-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.465522 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lj8f\" (UniqueName: \"kubernetes.io/projected/6030d365-75c4-442a-93fa-4539c43df118-kube-api-access-6lj8f\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.742239 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qrs7s"] Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.974561 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qrs7s" event={"ID":"377e513a-7c18-4d50-a092-5942a3bd679a","Type":"ContainerStarted","Data":"5f3d9bb5035a5caf03b52cee22696afeffd9f7d1dce7aec44a8b03ab26de8f62"} Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.983927 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6030d365-75c4-442a-93fa-4539c43df118","Type":"ContainerDied","Data":"01f247f6b6c1ca6fe890ae35b8042fc406961a658e5027a9c457235c5cbae322"} Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.984003 4972 scope.go:117] "RemoveContainer" containerID="660644f399acaafd0ec6a4b9da8b234d477b7f66505d1e20b639ff2feefe850b" Nov 21 11:16:28 crc kubenswrapper[4972]: I1121 11:16:28.984360 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.027774 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.040549 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.062057 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 11:16:29 crc kubenswrapper[4972]: E1121 11:16:29.062593 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6030d365-75c4-442a-93fa-4539c43df118" containerName="nova-scheduler-scheduler" Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.062619 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="6030d365-75c4-442a-93fa-4539c43df118" containerName="nova-scheduler-scheduler" Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.062903 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="6030d365-75c4-442a-93fa-4539c43df118" containerName="nova-scheduler-scheduler" Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.063920 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.066100 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.077802 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.180148 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa44802-c092-44a6-9980-b08916b01c92-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"baa44802-c092-44a6-9980-b08916b01c92\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.180267 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa44802-c092-44a6-9980-b08916b01c92-config-data\") pod \"nova-scheduler-0\" (UID: \"baa44802-c092-44a6-9980-b08916b01c92\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.180345 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkqvh\" (UniqueName: \"kubernetes.io/projected/baa44802-c092-44a6-9980-b08916b01c92-kube-api-access-zkqvh\") pod \"nova-scheduler-0\" (UID: \"baa44802-c092-44a6-9980-b08916b01c92\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.283594 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkqvh\" (UniqueName: \"kubernetes.io/projected/baa44802-c092-44a6-9980-b08916b01c92-kube-api-access-zkqvh\") pod \"nova-scheduler-0\" (UID: \"baa44802-c092-44a6-9980-b08916b01c92\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.283816 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa44802-c092-44a6-9980-b08916b01c92-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"baa44802-c092-44a6-9980-b08916b01c92\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.284086 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa44802-c092-44a6-9980-b08916b01c92-config-data\") pod \"nova-scheduler-0\" (UID: \"baa44802-c092-44a6-9980-b08916b01c92\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.292958 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa44802-c092-44a6-9980-b08916b01c92-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"baa44802-c092-44a6-9980-b08916b01c92\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.296684 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa44802-c092-44a6-9980-b08916b01c92-config-data\") pod \"nova-scheduler-0\" (UID: \"baa44802-c092-44a6-9980-b08916b01c92\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.324963 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkqvh\" (UniqueName: 
\"kubernetes.io/projected/baa44802-c092-44a6-9980-b08916b01c92-kube-api-access-zkqvh\") pod \"nova-scheduler-0\" (UID: \"baa44802-c092-44a6-9980-b08916b01c92\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.384116 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.659720 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.660186 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.780477 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6030d365-75c4-442a-93fa-4539c43df118" path="/var/lib/kubelet/pods/6030d365-75c4-442a-93fa-4539c43df118/volumes" Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.894511 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.995253 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"baa44802-c092-44a6-9980-b08916b01c92","Type":"ContainerStarted","Data":"7e7040ed5f4f9557a9cf9e56818974ce03a19ccbf43b86f1d37a7f8c8e0881ff"} Nov 21 11:16:29 crc kubenswrapper[4972]: I1121 11:16:29.999878 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qrs7s" event={"ID":"377e513a-7c18-4d50-a092-5942a3bd679a","Type":"ContainerStarted","Data":"c84e744ed0f0e2062a254ad7cab31ddd0300dfdcd95d28a3f49f9af6042a06a1"} Nov 21 11:16:30 crc kubenswrapper[4972]: I1121 11:16:30.023923 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-qrs7s" podStartSLOduration=3.023901331 podStartE2EDuration="3.023901331s" podCreationTimestamp="2025-11-21 11:16:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:16:30.016623118 +0000 UTC m=+5735.125765686" watchObservedRunningTime="2025-11-21 11:16:30.023901331 +0000 UTC m=+5735.133043829" Nov 21 11:16:31 crc kubenswrapper[4972]: I1121 11:16:31.023107 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"baa44802-c092-44a6-9980-b08916b01c92","Type":"ContainerStarted","Data":"496fdf4d3632417bb43080032728936dd97584921c571b2cbe386ef72dc87e09"} Nov 21 11:16:31 crc kubenswrapper[4972]: I1121 11:16:31.061913 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.061893538 podStartE2EDuration="2.061893538s" podCreationTimestamp="2025-11-21 11:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:16:31.054759099 +0000 UTC m=+5736.163901607" watchObservedRunningTime="2025-11-21 11:16:31.061893538 +0000 UTC m=+5736.171036046" Nov 21 11:16:34 crc kubenswrapper[4972]: I1121 11:16:34.067596 4972 generic.go:334] "Generic (PLEG): container finished" podID="377e513a-7c18-4d50-a092-5942a3bd679a" containerID="c84e744ed0f0e2062a254ad7cab31ddd0300dfdcd95d28a3f49f9af6042a06a1" exitCode=0 Nov 21 11:16:34 crc kubenswrapper[4972]: I1121 11:16:34.067874 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-cell-mapping-qrs7s" event={"ID":"377e513a-7c18-4d50-a092-5942a3bd679a","Type":"ContainerDied","Data":"c84e744ed0f0e2062a254ad7cab31ddd0300dfdcd95d28a3f49f9af6042a06a1"} Nov 21 11:16:34 crc kubenswrapper[4972]: I1121 11:16:34.384682 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 21 11:16:34 crc kubenswrapper[4972]: I1121 11:16:34.649341 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 11:16:34 crc kubenswrapper[4972]: I1121 11:16:34.649391 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 11:16:34 crc kubenswrapper[4972]: I1121 11:16:34.660204 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 11:16:34 crc kubenswrapper[4972]: I1121 11:16:34.660269 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 11:16:34 crc kubenswrapper[4972]: I1121 11:16:34.758806 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:16:34 crc kubenswrapper[4972]: E1121 11:16:34.759174 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:16:35 crc kubenswrapper[4972]: I1121 11:16:35.415766 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qrs7s" Nov 21 11:16:35 crc kubenswrapper[4972]: I1121 11:16:35.544102 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgltb\" (UniqueName: \"kubernetes.io/projected/377e513a-7c18-4d50-a092-5942a3bd679a-kube-api-access-vgltb\") pod \"377e513a-7c18-4d50-a092-5942a3bd679a\" (UID: \"377e513a-7c18-4d50-a092-5942a3bd679a\") " Nov 21 11:16:35 crc kubenswrapper[4972]: I1121 11:16:35.544176 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/377e513a-7c18-4d50-a092-5942a3bd679a-scripts\") pod \"377e513a-7c18-4d50-a092-5942a3bd679a\" (UID: \"377e513a-7c18-4d50-a092-5942a3bd679a\") " Nov 21 11:16:35 crc kubenswrapper[4972]: I1121 11:16:35.544254 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/377e513a-7c18-4d50-a092-5942a3bd679a-combined-ca-bundle\") pod \"377e513a-7c18-4d50-a092-5942a3bd679a\" (UID: \"377e513a-7c18-4d50-a092-5942a3bd679a\") " Nov 21 11:16:35 crc kubenswrapper[4972]: I1121 11:16:35.544414 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/377e513a-7c18-4d50-a092-5942a3bd679a-config-data\") pod \"377e513a-7c18-4d50-a092-5942a3bd679a\" (UID: \"377e513a-7c18-4d50-a092-5942a3bd679a\") " Nov 21 11:16:35 crc kubenswrapper[4972]: I1121 11:16:35.550040 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/377e513a-7c18-4d50-a092-5942a3bd679a-scripts" (OuterVolumeSpecName: "scripts") pod "377e513a-7c18-4d50-a092-5942a3bd679a" (UID: "377e513a-7c18-4d50-a092-5942a3bd679a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:35 crc kubenswrapper[4972]: I1121 11:16:35.551152 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/377e513a-7c18-4d50-a092-5942a3bd679a-kube-api-access-vgltb" (OuterVolumeSpecName: "kube-api-access-vgltb") pod "377e513a-7c18-4d50-a092-5942a3bd679a" (UID: "377e513a-7c18-4d50-a092-5942a3bd679a"). InnerVolumeSpecName "kube-api-access-vgltb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:16:35 crc kubenswrapper[4972]: I1121 11:16:35.576481 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/377e513a-7c18-4d50-a092-5942a3bd679a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "377e513a-7c18-4d50-a092-5942a3bd679a" (UID: "377e513a-7c18-4d50-a092-5942a3bd679a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:35 crc kubenswrapper[4972]: I1121 11:16:35.578321 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/377e513a-7c18-4d50-a092-5942a3bd679a-config-data" (OuterVolumeSpecName: "config-data") pod "377e513a-7c18-4d50-a092-5942a3bd679a" (UID: "377e513a-7c18-4d50-a092-5942a3bd679a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:35 crc kubenswrapper[4972]: I1121 11:16:35.646156 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/377e513a-7c18-4d50-a092-5942a3bd679a-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:35 crc kubenswrapper[4972]: I1121 11:16:35.646197 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgltb\" (UniqueName: \"kubernetes.io/projected/377e513a-7c18-4d50-a092-5942a3bd679a-kube-api-access-vgltb\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:35 crc kubenswrapper[4972]: I1121 11:16:35.646215 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/377e513a-7c18-4d50-a092-5942a3bd679a-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:35 crc kubenswrapper[4972]: I1121 11:16:35.646227 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/377e513a-7c18-4d50-a092-5942a3bd679a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:35 crc kubenswrapper[4972]: I1121 11:16:35.691089 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ff8a8760-e4e8-4d6d-bde8-8e391be9002d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.64:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 11:16:35 crc kubenswrapper[4972]: I1121 11:16:35.816064 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ff8a8760-e4e8-4d6d-bde8-8e391be9002d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.64:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 11:16:35 crc kubenswrapper[4972]: I1121 11:16:35.816120 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b18f2524-b1bd-40af-b06f-0475cc6a47e4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.65:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 11:16:35 crc kubenswrapper[4972]: I1121 11:16:35.816079 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b18f2524-b1bd-40af-b06f-0475cc6a47e4" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.65:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 11:16:36 crc kubenswrapper[4972]: I1121 11:16:36.086043 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qrs7s" event={"ID":"377e513a-7c18-4d50-a092-5942a3bd679a","Type":"ContainerDied","Data":"5f3d9bb5035a5caf03b52cee22696afeffd9f7d1dce7aec44a8b03ab26de8f62"} Nov 21 11:16:36 crc kubenswrapper[4972]: I1121 11:16:36.086084 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f3d9bb5035a5caf03b52cee22696afeffd9f7d1dce7aec44a8b03ab26de8f62" Nov 21 11:16:36 crc kubenswrapper[4972]: I1121 11:16:36.086102 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qrs7s" Nov 21 11:16:36 crc kubenswrapper[4972]: I1121 11:16:36.273262 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 11:16:36 crc kubenswrapper[4972]: I1121 11:16:36.273580 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff8a8760-e4e8-4d6d-bde8-8e391be9002d" containerName="nova-api-log" containerID="cri-o://426606877e6667cc0ba1d5f374802b6f941fbb8659a599fc436ee0889e3fddb2" gracePeriod=30 Nov 21 11:16:36 crc kubenswrapper[4972]: I1121 11:16:36.273677 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff8a8760-e4e8-4d6d-bde8-8e391be9002d" containerName="nova-api-api" containerID="cri-o://717a8fe9f80539afef7de85492134d7cfaaa62c6692d49a3732a5ac1d6f6b05b" gracePeriod=30 Nov 21 11:16:36 crc kubenswrapper[4972]: I1121 11:16:36.283586 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 11:16:36 crc kubenswrapper[4972]: I1121 11:16:36.283777 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="baa44802-c092-44a6-9980-b08916b01c92" containerName="nova-scheduler-scheduler" containerID="cri-o://496fdf4d3632417bb43080032728936dd97584921c571b2cbe386ef72dc87e09" gracePeriod=30 Nov 21 11:16:36 crc kubenswrapper[4972]: I1121 11:16:36.318128 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 11:16:36 crc kubenswrapper[4972]: I1121 11:16:36.318371 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b18f2524-b1bd-40af-b06f-0475cc6a47e4" containerName="nova-metadata-log" containerID="cri-o://e8c2cce835be8fdc15fc66306ec37e75d7ed7f75ea056aa7d83489cc98edeb9a" gracePeriod=30 Nov 21 11:16:36 crc kubenswrapper[4972]: I1121 11:16:36.318461 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b18f2524-b1bd-40af-b06f-0475cc6a47e4" containerName="nova-metadata-metadata" containerID="cri-o://5aa43b4dc3a74122047bb19b8b11a8041972557b62159b7396ec35e55f292d47" gracePeriod=30 Nov 21 11:16:37 crc kubenswrapper[4972]: I1121 11:16:37.114939 4972 generic.go:334] "Generic (PLEG): container finished" podID="b18f2524-b1bd-40af-b06f-0475cc6a47e4" containerID="e8c2cce835be8fdc15fc66306ec37e75d7ed7f75ea056aa7d83489cc98edeb9a" exitCode=143 Nov 21 11:16:37 crc kubenswrapper[4972]: I1121 11:16:37.116410 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b18f2524-b1bd-40af-b06f-0475cc6a47e4","Type":"ContainerDied","Data":"e8c2cce835be8fdc15fc66306ec37e75d7ed7f75ea056aa7d83489cc98edeb9a"} Nov 21 11:16:37 crc kubenswrapper[4972]: I1121 11:16:37.118429 4972 generic.go:334] "Generic (PLEG): container finished" podID="ff8a8760-e4e8-4d6d-bde8-8e391be9002d" containerID="426606877e6667cc0ba1d5f374802b6f941fbb8659a599fc436ee0889e3fddb2" exitCode=143 Nov 21 11:16:37 crc kubenswrapper[4972]: I1121 11:16:37.118615 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff8a8760-e4e8-4d6d-bde8-8e391be9002d","Type":"ContainerDied","Data":"426606877e6667cc0ba1d5f374802b6f941fbb8659a599fc436ee0889e3fddb2"} Nov 21 11:16:37 crc kubenswrapper[4972]: I1121 11:16:37.471154 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 11:16:37 crc kubenswrapper[4972]: I1121 11:16:37.584495 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa44802-c092-44a6-9980-b08916b01c92-combined-ca-bundle\") pod \"baa44802-c092-44a6-9980-b08916b01c92\" (UID: \"baa44802-c092-44a6-9980-b08916b01c92\") " Nov 21 11:16:37 crc kubenswrapper[4972]: I1121 11:16:37.584928 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkqvh\" (UniqueName: \"kubernetes.io/projected/baa44802-c092-44a6-9980-b08916b01c92-kube-api-access-zkqvh\") pod \"baa44802-c092-44a6-9980-b08916b01c92\" (UID: \"baa44802-c092-44a6-9980-b08916b01c92\") " Nov 21 11:16:37 crc kubenswrapper[4972]: I1121 11:16:37.584990 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa44802-c092-44a6-9980-b08916b01c92-config-data\") pod \"baa44802-c092-44a6-9980-b08916b01c92\" (UID: \"baa44802-c092-44a6-9980-b08916b01c92\") " Nov 21 11:16:37 crc kubenswrapper[4972]: I1121 11:16:37.601569 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa44802-c092-44a6-9980-b08916b01c92-kube-api-access-zkqvh" (OuterVolumeSpecName: "kube-api-access-zkqvh") pod "baa44802-c092-44a6-9980-b08916b01c92" (UID: "baa44802-c092-44a6-9980-b08916b01c92"). InnerVolumeSpecName "kube-api-access-zkqvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:16:37 crc kubenswrapper[4972]: I1121 11:16:37.613754 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa44802-c092-44a6-9980-b08916b01c92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "baa44802-c092-44a6-9980-b08916b01c92" (UID: "baa44802-c092-44a6-9980-b08916b01c92"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:37 crc kubenswrapper[4972]: I1121 11:16:37.626061 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa44802-c092-44a6-9980-b08916b01c92-config-data" (OuterVolumeSpecName: "config-data") pod "baa44802-c092-44a6-9980-b08916b01c92" (UID: "baa44802-c092-44a6-9980-b08916b01c92"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:37 crc kubenswrapper[4972]: I1121 11:16:37.687367 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa44802-c092-44a6-9980-b08916b01c92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:37 crc kubenswrapper[4972]: I1121 11:16:37.687402 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkqvh\" (UniqueName: \"kubernetes.io/projected/baa44802-c092-44a6-9980-b08916b01c92-kube-api-access-zkqvh\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:37 crc kubenswrapper[4972]: I1121 11:16:37.687414 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa44802-c092-44a6-9980-b08916b01c92-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.129912 4972 generic.go:334] "Generic (PLEG): container finished" podID="baa44802-c092-44a6-9980-b08916b01c92" containerID="496fdf4d3632417bb43080032728936dd97584921c571b2cbe386ef72dc87e09" exitCode=0 Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.129965 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"baa44802-c092-44a6-9980-b08916b01c92","Type":"ContainerDied","Data":"496fdf4d3632417bb43080032728936dd97584921c571b2cbe386ef72dc87e09"} Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.130007 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"baa44802-c092-44a6-9980-b08916b01c92","Type":"ContainerDied","Data":"7e7040ed5f4f9557a9cf9e56818974ce03a19ccbf43b86f1d37a7f8c8e0881ff"} Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.130007 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.130027 4972 scope.go:117] "RemoveContainer" containerID="496fdf4d3632417bb43080032728936dd97584921c571b2cbe386ef72dc87e09" Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.154523 4972 scope.go:117] "RemoveContainer" containerID="496fdf4d3632417bb43080032728936dd97584921c571b2cbe386ef72dc87e09" Nov 21 11:16:38 crc kubenswrapper[4972]: E1121 11:16:38.155005 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"496fdf4d3632417bb43080032728936dd97584921c571b2cbe386ef72dc87e09\": container with ID starting with 496fdf4d3632417bb43080032728936dd97584921c571b2cbe386ef72dc87e09 not found: ID does not exist" containerID="496fdf4d3632417bb43080032728936dd97584921c571b2cbe386ef72dc87e09" Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.155049 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"496fdf4d3632417bb43080032728936dd97584921c571b2cbe386ef72dc87e09"} err="failed to get container status \"496fdf4d3632417bb43080032728936dd97584921c571b2cbe386ef72dc87e09\": rpc error: code = NotFound desc = could not find container \"496fdf4d3632417bb43080032728936dd97584921c571b2cbe386ef72dc87e09\": container with ID starting with 496fdf4d3632417bb43080032728936dd97584921c571b2cbe386ef72dc87e09 not found: ID does not exist" Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.174847 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.192117 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.204611 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 11:16:38 crc kubenswrapper[4972]: E1121 11:16:38.205292 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa44802-c092-44a6-9980-b08916b01c92" containerName="nova-scheduler-scheduler" Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.205323 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa44802-c092-44a6-9980-b08916b01c92" containerName="nova-scheduler-scheduler" Nov 21 11:16:38 crc kubenswrapper[4972]: E1121 11:16:38.205392 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="377e513a-7c18-4d50-a092-5942a3bd679a" containerName="nova-manage" Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.205405 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="377e513a-7c18-4d50-a092-5942a3bd679a" containerName="nova-manage" Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.205722 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="baa44802-c092-44a6-9980-b08916b01c92" containerName="nova-scheduler-scheduler" Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.205775 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="377e513a-7c18-4d50-a092-5942a3bd679a" containerName="nova-manage" Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.206822 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.211408 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.215028 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.297343 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr286\" (UniqueName: \"kubernetes.io/projected/cbe61d30-0cab-4450-aea9-7f3bfa806221-kube-api-access-vr286\") pod \"nova-scheduler-0\" (UID: \"cbe61d30-0cab-4450-aea9-7f3bfa806221\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.297490 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe61d30-0cab-4450-aea9-7f3bfa806221-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cbe61d30-0cab-4450-aea9-7f3bfa806221\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.297508 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe61d30-0cab-4450-aea9-7f3bfa806221-config-data\") pod \"nova-scheduler-0\" (UID: \"cbe61d30-0cab-4450-aea9-7f3bfa806221\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.399329 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe61d30-0cab-4450-aea9-7f3bfa806221-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cbe61d30-0cab-4450-aea9-7f3bfa806221\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.399383 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe61d30-0cab-4450-aea9-7f3bfa806221-config-data\") pod \"nova-scheduler-0\" (UID: \"cbe61d30-0cab-4450-aea9-7f3bfa806221\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.399445 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr286\" (UniqueName: \"kubernetes.io/projected/cbe61d30-0cab-4450-aea9-7f3bfa806221-kube-api-access-vr286\") pod \"nova-scheduler-0\" (UID: \"cbe61d30-0cab-4450-aea9-7f3bfa806221\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.405428 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe61d30-0cab-4450-aea9-7f3bfa806221-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cbe61d30-0cab-4450-aea9-7f3bfa806221\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.407615 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe61d30-0cab-4450-aea9-7f3bfa806221-config-data\") pod \"nova-scheduler-0\" (UID: \"cbe61d30-0cab-4450-aea9-7f3bfa806221\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.427008 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr286\" (UniqueName: 
\"kubernetes.io/projected/cbe61d30-0cab-4450-aea9-7f3bfa806221-kube-api-access-vr286\") pod \"nova-scheduler-0\" (UID: \"cbe61d30-0cab-4450-aea9-7f3bfa806221\") " pod="openstack/nova-scheduler-0" Nov 21 11:16:38 crc kubenswrapper[4972]: I1121 11:16:38.546525 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 11:16:39 crc kubenswrapper[4972]: I1121 11:16:39.046675 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 11:16:39 crc kubenswrapper[4972]: W1121 11:16:39.058194 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbe61d30_0cab_4450_aea9_7f3bfa806221.slice/crio-91bff2a9340154aa6dc2224feac843498d2f58e15ec9c31908d92063b5de4ab3 WatchSource:0}: Error finding container 91bff2a9340154aa6dc2224feac843498d2f58e15ec9c31908d92063b5de4ab3: Status 404 returned error can't find the container with id 91bff2a9340154aa6dc2224feac843498d2f58e15ec9c31908d92063b5de4ab3 Nov 21 11:16:39 crc kubenswrapper[4972]: I1121 11:16:39.145615 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cbe61d30-0cab-4450-aea9-7f3bfa806221","Type":"ContainerStarted","Data":"91bff2a9340154aa6dc2224feac843498d2f58e15ec9c31908d92063b5de4ab3"} Nov 21 11:16:39 crc kubenswrapper[4972]: I1121 11:16:39.778381 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa44802-c092-44a6-9980-b08916b01c92" path="/var/lib/kubelet/pods/baa44802-c092-44a6-9980-b08916b01c92/volumes" Nov 21 11:16:40 crc kubenswrapper[4972]: I1121 11:16:40.158480 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cbe61d30-0cab-4450-aea9-7f3bfa806221","Type":"ContainerStarted","Data":"88bbc5a650a0a241ee3d2d492f61e4d30e104b48bd343b1599559d337d7afe49"} Nov 21 11:16:40 crc kubenswrapper[4972]: I1121 11:16:40.189524 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.189464921 podStartE2EDuration="2.189464921s" podCreationTimestamp="2025-11-21 11:16:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:16:40.182615759 +0000 UTC m=+5745.291758297" watchObservedRunningTime="2025-11-21 11:16:40.189464921 +0000 UTC m=+5745.298607459" Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.185295 4972 generic.go:334] "Generic (PLEG): container finished" podID="ff8a8760-e4e8-4d6d-bde8-8e391be9002d" containerID="717a8fe9f80539afef7de85492134d7cfaaa62c6692d49a3732a5ac1d6f6b05b" exitCode=0 Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.185372 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff8a8760-e4e8-4d6d-bde8-8e391be9002d","Type":"ContainerDied","Data":"717a8fe9f80539afef7de85492134d7cfaaa62c6692d49a3732a5ac1d6f6b05b"} Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.190997 4972 generic.go:334] "Generic (PLEG): container finished" podID="b18f2524-b1bd-40af-b06f-0475cc6a47e4" containerID="5aa43b4dc3a74122047bb19b8b11a8041972557b62159b7396ec35e55f292d47" exitCode=0 Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.191067 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"b18f2524-b1bd-40af-b06f-0475cc6a47e4","Type":"ContainerDied","Data":"5aa43b4dc3a74122047bb19b8b11a8041972557b62159b7396ec35e55f292d47"} Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.366270 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.372215 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.464594 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b18f2524-b1bd-40af-b06f-0475cc6a47e4-config-data\") pod \"b18f2524-b1bd-40af-b06f-0475cc6a47e4\" (UID: \"b18f2524-b1bd-40af-b06f-0475cc6a47e4\") " Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.464769 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-config-data\") pod \"ff8a8760-e4e8-4d6d-bde8-8e391be9002d\" (UID: \"ff8a8760-e4e8-4d6d-bde8-8e391be9002d\") " Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.464911 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b18f2524-b1bd-40af-b06f-0475cc6a47e4-combined-ca-bundle\") pod \"b18f2524-b1bd-40af-b06f-0475cc6a47e4\" (UID: \"b18f2524-b1bd-40af-b06f-0475cc6a47e4\") " Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.464996 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4g4x\" (UniqueName: \"kubernetes.io/projected/b18f2524-b1bd-40af-b06f-0475cc6a47e4-kube-api-access-f4g4x\") pod \"b18f2524-b1bd-40af-b06f-0475cc6a47e4\" (UID: \"b18f2524-b1bd-40af-b06f-0475cc6a47e4\") " Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.465023 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48grl\" (UniqueName: \"kubernetes.io/projected/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-kube-api-access-48grl\") pod \"ff8a8760-e4e8-4d6d-bde8-8e391be9002d\" (UID: \"ff8a8760-e4e8-4d6d-bde8-8e391be9002d\") " Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.465153 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b18f2524-b1bd-40af-b06f-0475cc6a47e4-logs\") pod \"b18f2524-b1bd-40af-b06f-0475cc6a47e4\" (UID: \"b18f2524-b1bd-40af-b06f-0475cc6a47e4\") " Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.465196 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-combined-ca-bundle\") pod \"ff8a8760-e4e8-4d6d-bde8-8e391be9002d\" (UID: \"ff8a8760-e4e8-4d6d-bde8-8e391be9002d\") " Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.465223 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-logs\") pod \"ff8a8760-e4e8-4d6d-bde8-8e391be9002d\" (UID: \"ff8a8760-e4e8-4d6d-bde8-8e391be9002d\") " Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.465945 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-logs" (OuterVolumeSpecName: "logs") pod 
"ff8a8760-e4e8-4d6d-bde8-8e391be9002d" (UID: "ff8a8760-e4e8-4d6d-bde8-8e391be9002d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.466034 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b18f2524-b1bd-40af-b06f-0475cc6a47e4-logs" (OuterVolumeSpecName: "logs") pod "b18f2524-b1bd-40af-b06f-0475cc6a47e4" (UID: "b18f2524-b1bd-40af-b06f-0475cc6a47e4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.466881 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b18f2524-b1bd-40af-b06f-0475cc6a47e4-logs\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.466909 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-logs\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.471218 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b18f2524-b1bd-40af-b06f-0475cc6a47e4-kube-api-access-f4g4x" (OuterVolumeSpecName: "kube-api-access-f4g4x") pod "b18f2524-b1bd-40af-b06f-0475cc6a47e4" (UID: "b18f2524-b1bd-40af-b06f-0475cc6a47e4"). InnerVolumeSpecName "kube-api-access-f4g4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.471755 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-kube-api-access-48grl" (OuterVolumeSpecName: "kube-api-access-48grl") pod "ff8a8760-e4e8-4d6d-bde8-8e391be9002d" (UID: "ff8a8760-e4e8-4d6d-bde8-8e391be9002d"). InnerVolumeSpecName "kube-api-access-48grl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.489760 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b18f2524-b1bd-40af-b06f-0475cc6a47e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b18f2524-b1bd-40af-b06f-0475cc6a47e4" (UID: "b18f2524-b1bd-40af-b06f-0475cc6a47e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.489899 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b18f2524-b1bd-40af-b06f-0475cc6a47e4-config-data" (OuterVolumeSpecName: "config-data") pod "b18f2524-b1bd-40af-b06f-0475cc6a47e4" (UID: "b18f2524-b1bd-40af-b06f-0475cc6a47e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.502993 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff8a8760-e4e8-4d6d-bde8-8e391be9002d" (UID: "ff8a8760-e4e8-4d6d-bde8-8e391be9002d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.525949 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-config-data" (OuterVolumeSpecName: "config-data") pod "ff8a8760-e4e8-4d6d-bde8-8e391be9002d" (UID: "ff8a8760-e4e8-4d6d-bde8-8e391be9002d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.568276 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.568581 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b18f2524-b1bd-40af-b06f-0475cc6a47e4-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.568642 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.568695 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b18f2524-b1bd-40af-b06f-0475cc6a47e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.568746 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4g4x\" (UniqueName: \"kubernetes.io/projected/b18f2524-b1bd-40af-b06f-0475cc6a47e4-kube-api-access-f4g4x\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:41 crc kubenswrapper[4972]: I1121 11:16:41.568801 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48grl\" (UniqueName: \"kubernetes.io/projected/ff8a8760-e4e8-4d6d-bde8-8e391be9002d-kube-api-access-48grl\") on node \"crc\" DevicePath \"\"" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.228353 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b18f2524-b1bd-40af-b06f-0475cc6a47e4","Type":"ContainerDied","Data":"8d4555eb3401c5f2eb00261c7f2a39ed7195ac6a30215a6dbc64e84738e5197a"} Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.228448 4972 scope.go:117] "RemoveContainer" containerID="5aa43b4dc3a74122047bb19b8b11a8041972557b62159b7396ec35e55f292d47" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.228373 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.234420 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff8a8760-e4e8-4d6d-bde8-8e391be9002d","Type":"ContainerDied","Data":"1bd0706a94ad434fee74c6aa1d35db368034de2410ad21b054f47d9524ce000f"} Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.234592 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.269280 4972 scope.go:117] "RemoveContainer" containerID="e8c2cce835be8fdc15fc66306ec37e75d7ed7f75ea056aa7d83489cc98edeb9a" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.282685 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.315952 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.323141 4972 scope.go:117] "RemoveContainer" containerID="717a8fe9f80539afef7de85492134d7cfaaa62c6692d49a3732a5ac1d6f6b05b" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.331515 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.334120 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.341517 4972 scope.go:117] "RemoveContainer" containerID="426606877e6667cc0ba1d5f374802b6f941fbb8659a599fc436ee0889e3fddb2" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.341661 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 21 11:16:42 crc kubenswrapper[4972]: E1121 11:16:42.342121 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b18f2524-b1bd-40af-b06f-0475cc6a47e4" containerName="nova-metadata-log" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.342138 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b18f2524-b1bd-40af-b06f-0475cc6a47e4" containerName="nova-metadata-log" Nov 21 11:16:42 crc kubenswrapper[4972]: E1121 11:16:42.342172 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff8a8760-e4e8-4d6d-bde8-8e391be9002d" containerName="nova-api-api" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.342179 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff8a8760-e4e8-4d6d-bde8-8e391be9002d" containerName="nova-api-api" Nov 21 11:16:42 crc kubenswrapper[4972]: E1121 11:16:42.342196 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b18f2524-b1bd-40af-b06f-0475cc6a47e4" containerName="nova-metadata-metadata" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.342202 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b18f2524-b1bd-40af-b06f-0475cc6a47e4" containerName="nova-metadata-metadata" Nov 21 11:16:42 crc kubenswrapper[4972]: E1121 11:16:42.342224 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff8a8760-e4e8-4d6d-bde8-8e391be9002d" containerName="nova-api-log" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.342229 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff8a8760-e4e8-4d6d-bde8-8e391be9002d" containerName="nova-api-log" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.342423 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff8a8760-e4e8-4d6d-bde8-8e391be9002d" containerName="nova-api-log" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.342442 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="b18f2524-b1bd-40af-b06f-0475cc6a47e4" containerName="nova-metadata-metadata" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.342456 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff8a8760-e4e8-4d6d-bde8-8e391be9002d" containerName="nova-api-api" Nov 21 11:16:42 crc 
kubenswrapper[4972]: I1121 11:16:42.342477 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="b18f2524-b1bd-40af-b06f-0475cc6a47e4" containerName="nova-metadata-log" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.343761 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.347026 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.348462 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.349417 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.353381 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.363919 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.391919 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.495122 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1730bac7-5d9e-4989-9995-8920234d3eef-config-data\") pod \"nova-api-0\" (UID: \"1730bac7-5d9e-4989-9995-8920234d3eef\") " pod="openstack/nova-api-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.495397 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6969f156-2c8d-44d0-85c0-8b2d08c4c138-config-data\") pod \"nova-metadata-0\" (UID: \"6969f156-2c8d-44d0-85c0-8b2d08c4c138\") " pod="openstack/nova-metadata-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.495466 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1730bac7-5d9e-4989-9995-8920234d3eef-logs\") pod \"nova-api-0\" (UID: \"1730bac7-5d9e-4989-9995-8920234d3eef\") " pod="openstack/nova-api-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.495684 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtbfn\" (UniqueName: \"kubernetes.io/projected/1730bac7-5d9e-4989-9995-8920234d3eef-kube-api-access-xtbfn\") pod \"nova-api-0\" (UID: \"1730bac7-5d9e-4989-9995-8920234d3eef\") " pod="openstack/nova-api-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.495807 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1730bac7-5d9e-4989-9995-8920234d3eef-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1730bac7-5d9e-4989-9995-8920234d3eef\") " pod="openstack/nova-api-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.495879 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvrmp\" (UniqueName: \"kubernetes.io/projected/6969f156-2c8d-44d0-85c0-8b2d08c4c138-kube-api-access-fvrmp\") pod \"nova-metadata-0\" (UID: \"6969f156-2c8d-44d0-85c0-8b2d08c4c138\") " pod="openstack/nova-metadata-0" Nov 21 
11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.495990 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6969f156-2c8d-44d0-85c0-8b2d08c4c138-logs\") pod \"nova-metadata-0\" (UID: \"6969f156-2c8d-44d0-85c0-8b2d08c4c138\") " pod="openstack/nova-metadata-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.496032 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6969f156-2c8d-44d0-85c0-8b2d08c4c138-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6969f156-2c8d-44d0-85c0-8b2d08c4c138\") " pod="openstack/nova-metadata-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.598190 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6969f156-2c8d-44d0-85c0-8b2d08c4c138-config-data\") pod \"nova-metadata-0\" (UID: \"6969f156-2c8d-44d0-85c0-8b2d08c4c138\") " pod="openstack/nova-metadata-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.598240 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1730bac7-5d9e-4989-9995-8920234d3eef-logs\") pod \"nova-api-0\" (UID: \"1730bac7-5d9e-4989-9995-8920234d3eef\") " pod="openstack/nova-api-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.598268 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtbfn\" (UniqueName: \"kubernetes.io/projected/1730bac7-5d9e-4989-9995-8920234d3eef-kube-api-access-xtbfn\") pod \"nova-api-0\" (UID: \"1730bac7-5d9e-4989-9995-8920234d3eef\") " pod="openstack/nova-api-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.598295 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1730bac7-5d9e-4989-9995-8920234d3eef-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1730bac7-5d9e-4989-9995-8920234d3eef\") " pod="openstack/nova-api-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.598311 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvrmp\" (UniqueName: \"kubernetes.io/projected/6969f156-2c8d-44d0-85c0-8b2d08c4c138-kube-api-access-fvrmp\") pod \"nova-metadata-0\" (UID: \"6969f156-2c8d-44d0-85c0-8b2d08c4c138\") " pod="openstack/nova-metadata-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.598343 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6969f156-2c8d-44d0-85c0-8b2d08c4c138-logs\") pod \"nova-metadata-0\" (UID: \"6969f156-2c8d-44d0-85c0-8b2d08c4c138\") " pod="openstack/nova-metadata-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.598359 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6969f156-2c8d-44d0-85c0-8b2d08c4c138-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6969f156-2c8d-44d0-85c0-8b2d08c4c138\") " pod="openstack/nova-metadata-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.598432 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1730bac7-5d9e-4989-9995-8920234d3eef-config-data\") pod \"nova-api-0\" (UID: 
\"1730bac7-5d9e-4989-9995-8920234d3eef\") " pod="openstack/nova-api-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.599289 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1730bac7-5d9e-4989-9995-8920234d3eef-logs\") pod \"nova-api-0\" (UID: \"1730bac7-5d9e-4989-9995-8920234d3eef\") " pod="openstack/nova-api-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.599606 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6969f156-2c8d-44d0-85c0-8b2d08c4c138-logs\") pod \"nova-metadata-0\" (UID: \"6969f156-2c8d-44d0-85c0-8b2d08c4c138\") " pod="openstack/nova-metadata-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.605629 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1730bac7-5d9e-4989-9995-8920234d3eef-config-data\") pod \"nova-api-0\" (UID: \"1730bac7-5d9e-4989-9995-8920234d3eef\") " pod="openstack/nova-api-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.606383 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1730bac7-5d9e-4989-9995-8920234d3eef-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1730bac7-5d9e-4989-9995-8920234d3eef\") " pod="openstack/nova-api-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.608347 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6969f156-2c8d-44d0-85c0-8b2d08c4c138-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6969f156-2c8d-44d0-85c0-8b2d08c4c138\") " pod="openstack/nova-metadata-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.612190 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6969f156-2c8d-44d0-85c0-8b2d08c4c138-config-data\") pod \"nova-metadata-0\" (UID: \"6969f156-2c8d-44d0-85c0-8b2d08c4c138\") " pod="openstack/nova-metadata-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.624882 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvrmp\" (UniqueName: \"kubernetes.io/projected/6969f156-2c8d-44d0-85c0-8b2d08c4c138-kube-api-access-fvrmp\") pod \"nova-metadata-0\" (UID: \"6969f156-2c8d-44d0-85c0-8b2d08c4c138\") " pod="openstack/nova-metadata-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.632378 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtbfn\" (UniqueName: \"kubernetes.io/projected/1730bac7-5d9e-4989-9995-8920234d3eef-kube-api-access-xtbfn\") pod \"nova-api-0\" (UID: \"1730bac7-5d9e-4989-9995-8920234d3eef\") " pod="openstack/nova-api-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.683502 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 11:16:42 crc kubenswrapper[4972]: I1121 11:16:42.700034 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 11:16:43 crc kubenswrapper[4972]: I1121 11:16:43.233246 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 11:16:43 crc kubenswrapper[4972]: W1121 11:16:43.242599 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6969f156_2c8d_44d0_85c0_8b2d08c4c138.slice/crio-410ca064cead4906db092bdfafc2f9eee61e586941b6933636456c1056185ea2 WatchSource:0}: Error finding container 410ca064cead4906db092bdfafc2f9eee61e586941b6933636456c1056185ea2: Status 404 returned error can't find the container with id 410ca064cead4906db092bdfafc2f9eee61e586941b6933636456c1056185ea2 Nov 21 11:16:43 crc kubenswrapper[4972]: I1121 11:16:43.305019 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 11:16:43 crc kubenswrapper[4972]: W1121 11:16:43.318447 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1730bac7_5d9e_4989_9995_8920234d3eef.slice/crio-7ba4f3130a38db239cb647e853579e0a0d32b4b5b9b532bdc14d3835abca8e36 WatchSource:0}: Error finding container 7ba4f3130a38db239cb647e853579e0a0d32b4b5b9b532bdc14d3835abca8e36: Status 404 returned error can't find the container with id 7ba4f3130a38db239cb647e853579e0a0d32b4b5b9b532bdc14d3835abca8e36 Nov 21 11:16:43 crc kubenswrapper[4972]: I1121 11:16:43.546896 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 21 11:16:43 crc kubenswrapper[4972]: I1121 11:16:43.771529 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b18f2524-b1bd-40af-b06f-0475cc6a47e4" path="/var/lib/kubelet/pods/b18f2524-b1bd-40af-b06f-0475cc6a47e4/volumes" Nov 21 11:16:43 crc kubenswrapper[4972]: I1121 11:16:43.772154 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff8a8760-e4e8-4d6d-bde8-8e391be9002d" path="/var/lib/kubelet/pods/ff8a8760-e4e8-4d6d-bde8-8e391be9002d/volumes" Nov 21 11:16:44 crc kubenswrapper[4972]: I1121 11:16:44.271308 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1730bac7-5d9e-4989-9995-8920234d3eef","Type":"ContainerStarted","Data":"d75abd60f2559ddeeeb25fd6982cd02bef407cc5a36be3fe6b14dee79ae325d4"} Nov 21 11:16:44 crc kubenswrapper[4972]: I1121 11:16:44.271378 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1730bac7-5d9e-4989-9995-8920234d3eef","Type":"ContainerStarted","Data":"72515e346b510bcb8130cfd85c2ed3efffd9921b3d1b6441d399ee9d6352c28a"} Nov 21 11:16:44 crc kubenswrapper[4972]: I1121 11:16:44.271397 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1730bac7-5d9e-4989-9995-8920234d3eef","Type":"ContainerStarted","Data":"7ba4f3130a38db239cb647e853579e0a0d32b4b5b9b532bdc14d3835abca8e36"} Nov 21 11:16:44 crc kubenswrapper[4972]: I1121 11:16:44.289690 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6969f156-2c8d-44d0-85c0-8b2d08c4c138","Type":"ContainerStarted","Data":"1442d766ed402858dc119cb8d0d0499013b8ae8e878e4366818033e5854a9df0"} Nov 21 11:16:44 crc kubenswrapper[4972]: I1121 11:16:44.289773 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"6969f156-2c8d-44d0-85c0-8b2d08c4c138","Type":"ContainerStarted","Data":"838af30eda8e65630edd722cb208ceff9d3bba0e3fe1507637aca192476a25b5"} Nov 21 11:16:44 crc kubenswrapper[4972]: I1121 11:16:44.289796 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6969f156-2c8d-44d0-85c0-8b2d08c4c138","Type":"ContainerStarted","Data":"410ca064cead4906db092bdfafc2f9eee61e586941b6933636456c1056185ea2"} Nov 21 11:16:44 crc kubenswrapper[4972]: I1121 11:16:44.313937 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.313897128 podStartE2EDuration="2.313897128s" podCreationTimestamp="2025-11-21 11:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:16:44.30415239 +0000 UTC m=+5749.413294898" watchObservedRunningTime="2025-11-21 11:16:44.313897128 +0000 UTC m=+5749.423039646" Nov 21 11:16:44 crc kubenswrapper[4972]: I1121 11:16:44.333434 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.333405165 podStartE2EDuration="2.333405165s" podCreationTimestamp="2025-11-21 11:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:16:44.319961319 +0000 UTC m=+5749.429103817" watchObservedRunningTime="2025-11-21 11:16:44.333405165 +0000 UTC m=+5749.442547683" Nov 21 11:16:46 crc kubenswrapper[4972]: I1121 11:16:46.760500 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:16:46 crc kubenswrapper[4972]: E1121 11:16:46.761565 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:16:47 crc kubenswrapper[4972]: I1121 11:16:47.684932 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 11:16:47 crc kubenswrapper[4972]: I1121 11:16:47.685004 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 11:16:48 crc kubenswrapper[4972]: I1121 11:16:48.547191 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 21 11:16:48 crc kubenswrapper[4972]: I1121 11:16:48.586170 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 21 11:16:49 crc kubenswrapper[4972]: I1121 11:16:49.428013 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 21 11:16:52 crc kubenswrapper[4972]: I1121 11:16:52.684365 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 11:16:52 crc kubenswrapper[4972]: I1121 11:16:52.684889 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 11:16:52 crc kubenswrapper[4972]: I1121 11:16:52.701509 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Nov 21 11:16:52 crc kubenswrapper[4972]: I1121 11:16:52.701665 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 11:16:53 crc kubenswrapper[4972]: I1121 11:16:53.850808 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6969f156-2c8d-44d0-85c0-8b2d08c4c138" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.69:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 11:16:53 crc kubenswrapper[4972]: I1121 11:16:53.851237 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1730bac7-5d9e-4989-9995-8920234d3eef" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.70:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 11:16:53 crc kubenswrapper[4972]: I1121 11:16:53.851154 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6969f156-2c8d-44d0-85c0-8b2d08c4c138" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.69:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 11:16:53 crc kubenswrapper[4972]: I1121 11:16:53.850915 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1730bac7-5d9e-4989-9995-8920234d3eef" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.70:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 11:17:01 crc kubenswrapper[4972]: I1121 11:17:01.759599 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:17:01 crc kubenswrapper[4972]: E1121 11:17:01.761035 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:17:02 crc kubenswrapper[4972]: I1121 11:17:02.688068 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 11:17:02 crc kubenswrapper[4972]: I1121 11:17:02.688785 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 11:17:02 crc kubenswrapper[4972]: I1121 11:17:02.692632 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 21 11:17:02 crc kubenswrapper[4972]: I1121 11:17:02.709373 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 21 11:17:02 crc kubenswrapper[4972]: I1121 11:17:02.710140 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 21 11:17:02 crc kubenswrapper[4972]: I1121 11:17:02.712808 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 21 11:17:02 crc kubenswrapper[4972]: I1121 11:17:02.714322 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 21 11:17:03 crc kubenswrapper[4972]: I1121 11:17:03.548284 
4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 21 11:17:03 crc kubenswrapper[4972]: I1121 11:17:03.551325 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 21 11:17:03 crc kubenswrapper[4972]: I1121 11:17:03.554401 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 21 11:17:03 crc kubenswrapper[4972]: I1121 11:17:03.782284 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bdd64c87f-mqbfk"] Nov 21 11:17:03 crc kubenswrapper[4972]: I1121 11:17:03.784429 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" Nov 21 11:17:03 crc kubenswrapper[4972]: I1121 11:17:03.799006 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bdd64c87f-mqbfk"] Nov 21 11:17:03 crc kubenswrapper[4972]: I1121 11:17:03.898345 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-ovsdbserver-nb\") pod \"dnsmasq-dns-5bdd64c87f-mqbfk\" (UID: \"446e20bf-e455-4b55-9c86-c5c6cb95070e\") " pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" Nov 21 11:17:03 crc kubenswrapper[4972]: I1121 11:17:03.898504 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-ovsdbserver-sb\") pod \"dnsmasq-dns-5bdd64c87f-mqbfk\" (UID: \"446e20bf-e455-4b55-9c86-c5c6cb95070e\") " pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" Nov 21 11:17:03 crc kubenswrapper[4972]: I1121 11:17:03.898695 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-dns-svc\") pod \"dnsmasq-dns-5bdd64c87f-mqbfk\" (UID: \"446e20bf-e455-4b55-9c86-c5c6cb95070e\") " pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" Nov 21 11:17:03 crc kubenswrapper[4972]: I1121 11:17:03.898777 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-config\") pod \"dnsmasq-dns-5bdd64c87f-mqbfk\" (UID: \"446e20bf-e455-4b55-9c86-c5c6cb95070e\") " pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" Nov 21 11:17:03 crc kubenswrapper[4972]: I1121 11:17:03.898963 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d4hz\" (UniqueName: \"kubernetes.io/projected/446e20bf-e455-4b55-9c86-c5c6cb95070e-kube-api-access-5d4hz\") pod \"dnsmasq-dns-5bdd64c87f-mqbfk\" (UID: \"446e20bf-e455-4b55-9c86-c5c6cb95070e\") " pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" Nov 21 11:17:04 crc kubenswrapper[4972]: I1121 11:17:04.001668 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-config\") pod \"dnsmasq-dns-5bdd64c87f-mqbfk\" (UID: \"446e20bf-e455-4b55-9c86-c5c6cb95070e\") " pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" Nov 21 11:17:04 crc kubenswrapper[4972]: I1121 11:17:04.001758 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d4hz\" (UniqueName: 
\"kubernetes.io/projected/446e20bf-e455-4b55-9c86-c5c6cb95070e-kube-api-access-5d4hz\") pod \"dnsmasq-dns-5bdd64c87f-mqbfk\" (UID: \"446e20bf-e455-4b55-9c86-c5c6cb95070e\") " pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" Nov 21 11:17:04 crc kubenswrapper[4972]: I1121 11:17:04.001848 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-ovsdbserver-nb\") pod \"dnsmasq-dns-5bdd64c87f-mqbfk\" (UID: \"446e20bf-e455-4b55-9c86-c5c6cb95070e\") " pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" Nov 21 11:17:04 crc kubenswrapper[4972]: I1121 11:17:04.001903 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-ovsdbserver-sb\") pod \"dnsmasq-dns-5bdd64c87f-mqbfk\" (UID: \"446e20bf-e455-4b55-9c86-c5c6cb95070e\") " pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" Nov 21 11:17:04 crc kubenswrapper[4972]: I1121 11:17:04.002153 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-dns-svc\") pod \"dnsmasq-dns-5bdd64c87f-mqbfk\" (UID: \"446e20bf-e455-4b55-9c86-c5c6cb95070e\") " pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" Nov 21 11:17:04 crc kubenswrapper[4972]: I1121 11:17:04.002965 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-config\") pod \"dnsmasq-dns-5bdd64c87f-mqbfk\" (UID: \"446e20bf-e455-4b55-9c86-c5c6cb95070e\") " pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" Nov 21 11:17:04 crc kubenswrapper[4972]: I1121 11:17:04.002982 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-ovsdbserver-nb\") pod \"dnsmasq-dns-5bdd64c87f-mqbfk\" (UID: \"446e20bf-e455-4b55-9c86-c5c6cb95070e\") " pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" Nov 21 11:17:04 crc kubenswrapper[4972]: I1121 11:17:04.003349 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-dns-svc\") pod \"dnsmasq-dns-5bdd64c87f-mqbfk\" (UID: \"446e20bf-e455-4b55-9c86-c5c6cb95070e\") " pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" Nov 21 11:17:04 crc kubenswrapper[4972]: I1121 11:17:04.004220 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-ovsdbserver-sb\") pod \"dnsmasq-dns-5bdd64c87f-mqbfk\" (UID: \"446e20bf-e455-4b55-9c86-c5c6cb95070e\") " pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" Nov 21 11:17:04 crc kubenswrapper[4972]: I1121 11:17:04.042420 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d4hz\" (UniqueName: \"kubernetes.io/projected/446e20bf-e455-4b55-9c86-c5c6cb95070e-kube-api-access-5d4hz\") pod \"dnsmasq-dns-5bdd64c87f-mqbfk\" (UID: \"446e20bf-e455-4b55-9c86-c5c6cb95070e\") " pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" Nov 21 11:17:04 crc kubenswrapper[4972]: I1121 11:17:04.117003 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" Nov 21 11:17:04 crc kubenswrapper[4972]: I1121 11:17:04.644738 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bdd64c87f-mqbfk"] Nov 21 11:17:05 crc kubenswrapper[4972]: I1121 11:17:05.568088 4972 generic.go:334] "Generic (PLEG): container finished" podID="446e20bf-e455-4b55-9c86-c5c6cb95070e" containerID="b70ebc0760dc48f43f4e7f8d6eb1808c19b1a500ec5f3fc7f58c845080fd381d" exitCode=0 Nov 21 11:17:05 crc kubenswrapper[4972]: I1121 11:17:05.568256 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" event={"ID":"446e20bf-e455-4b55-9c86-c5c6cb95070e","Type":"ContainerDied","Data":"b70ebc0760dc48f43f4e7f8d6eb1808c19b1a500ec5f3fc7f58c845080fd381d"} Nov 21 11:17:05 crc kubenswrapper[4972]: I1121 11:17:05.568798 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" event={"ID":"446e20bf-e455-4b55-9c86-c5c6cb95070e","Type":"ContainerStarted","Data":"9d1a021012d3e5fc05aa4e9a39abe7ab72b43647f352bcadbf5e71817174ffe2"} Nov 21 11:17:06 crc kubenswrapper[4972]: I1121 11:17:06.582694 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" event={"ID":"446e20bf-e455-4b55-9c86-c5c6cb95070e","Type":"ContainerStarted","Data":"c63a69cbacd7d2cae4dd6c2df6365f2948570a1b5ba66e644349bdc3fd39432e"} Nov 21 11:17:06 crc kubenswrapper[4972]: I1121 11:17:06.583118 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" Nov 21 11:17:06 crc kubenswrapper[4972]: I1121 11:17:06.629525 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" podStartSLOduration=3.62949754 podStartE2EDuration="3.62949754s" podCreationTimestamp="2025-11-21 11:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:17:06.620353698 +0000 UTC m=+5771.729496216" watchObservedRunningTime="2025-11-21 11:17:06.62949754 +0000 UTC m=+5771.738640078" Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.118322 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.196581 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6df6f8fcbc-52ls7"] Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.197224 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" podUID="41ee93c6-845e-4aac-8cb3-16a222d124b1" containerName="dnsmasq-dns" containerID="cri-o://e5915c025f1bc3101b6d42dc5a85251dde1e84c04b8d663800ac3819f4d3f57a" gracePeriod=10 Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.674261 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.676507 4972 generic.go:334] "Generic (PLEG): container finished" podID="41ee93c6-845e-4aac-8cb3-16a222d124b1" containerID="e5915c025f1bc3101b6d42dc5a85251dde1e84c04b8d663800ac3819f4d3f57a" exitCode=0 Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.676547 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" event={"ID":"41ee93c6-845e-4aac-8cb3-16a222d124b1","Type":"ContainerDied","Data":"e5915c025f1bc3101b6d42dc5a85251dde1e84c04b8d663800ac3819f4d3f57a"} Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.676578 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" event={"ID":"41ee93c6-845e-4aac-8cb3-16a222d124b1","Type":"ContainerDied","Data":"b3d3ab68caeea3f2902a89d367165df9fd039399ddec98197230200806b7cc5c"} Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.676603 4972 scope.go:117] "RemoveContainer" containerID="e5915c025f1bc3101b6d42dc5a85251dde1e84c04b8d663800ac3819f4d3f57a" Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.710547 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-ovsdbserver-nb\") pod \"41ee93c6-845e-4aac-8cb3-16a222d124b1\" (UID: \"41ee93c6-845e-4aac-8cb3-16a222d124b1\") " Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.710633 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-config\") pod \"41ee93c6-845e-4aac-8cb3-16a222d124b1\" (UID: \"41ee93c6-845e-4aac-8cb3-16a222d124b1\") " Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.710744 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-ovsdbserver-sb\") pod \"41ee93c6-845e-4aac-8cb3-16a222d124b1\" (UID: \"41ee93c6-845e-4aac-8cb3-16a222d124b1\") " Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.710944 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnfpp\" (UniqueName: \"kubernetes.io/projected/41ee93c6-845e-4aac-8cb3-16a222d124b1-kube-api-access-fnfpp\") pod \"41ee93c6-845e-4aac-8cb3-16a222d124b1\" (UID: \"41ee93c6-845e-4aac-8cb3-16a222d124b1\") " Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.711064 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-dns-svc\") pod \"41ee93c6-845e-4aac-8cb3-16a222d124b1\" (UID: \"41ee93c6-845e-4aac-8cb3-16a222d124b1\") " Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.719732 4972 scope.go:117] "RemoveContainer" containerID="75c42bd939b767b3ed2d045f4188d3ceba3a4884a2eccba57dd3cc91b440e8db" Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.723337 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ee93c6-845e-4aac-8cb3-16a222d124b1-kube-api-access-fnfpp" (OuterVolumeSpecName: "kube-api-access-fnfpp") pod "41ee93c6-845e-4aac-8cb3-16a222d124b1" (UID: "41ee93c6-845e-4aac-8cb3-16a222d124b1"). InnerVolumeSpecName "kube-api-access-fnfpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.765407 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "41ee93c6-845e-4aac-8cb3-16a222d124b1" (UID: "41ee93c6-845e-4aac-8cb3-16a222d124b1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.782264 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "41ee93c6-845e-4aac-8cb3-16a222d124b1" (UID: "41ee93c6-845e-4aac-8cb3-16a222d124b1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.784469 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-config" (OuterVolumeSpecName: "config") pod "41ee93c6-845e-4aac-8cb3-16a222d124b1" (UID: "41ee93c6-845e-4aac-8cb3-16a222d124b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.795702 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "41ee93c6-845e-4aac-8cb3-16a222d124b1" (UID: "41ee93c6-845e-4aac-8cb3-16a222d124b1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.813694 4972 scope.go:117] "RemoveContainer" containerID="e5915c025f1bc3101b6d42dc5a85251dde1e84c04b8d663800ac3819f4d3f57a" Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.814154 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.814193 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-config\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.814205 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.814217 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnfpp\" (UniqueName: \"kubernetes.io/projected/41ee93c6-845e-4aac-8cb3-16a222d124b1-kube-api-access-fnfpp\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.814230 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41ee93c6-845e-4aac-8cb3-16a222d124b1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:14 crc kubenswrapper[4972]: E1121 11:17:14.814364 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5915c025f1bc3101b6d42dc5a85251dde1e84c04b8d663800ac3819f4d3f57a\": container 
with ID starting with e5915c025f1bc3101b6d42dc5a85251dde1e84c04b8d663800ac3819f4d3f57a not found: ID does not exist" containerID="e5915c025f1bc3101b6d42dc5a85251dde1e84c04b8d663800ac3819f4d3f57a" Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.814390 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5915c025f1bc3101b6d42dc5a85251dde1e84c04b8d663800ac3819f4d3f57a"} err="failed to get container status \"e5915c025f1bc3101b6d42dc5a85251dde1e84c04b8d663800ac3819f4d3f57a\": rpc error: code = NotFound desc = could not find container \"e5915c025f1bc3101b6d42dc5a85251dde1e84c04b8d663800ac3819f4d3f57a\": container with ID starting with e5915c025f1bc3101b6d42dc5a85251dde1e84c04b8d663800ac3819f4d3f57a not found: ID does not exist" Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.814410 4972 scope.go:117] "RemoveContainer" containerID="75c42bd939b767b3ed2d045f4188d3ceba3a4884a2eccba57dd3cc91b440e8db" Nov 21 11:17:14 crc kubenswrapper[4972]: E1121 11:17:14.814815 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75c42bd939b767b3ed2d045f4188d3ceba3a4884a2eccba57dd3cc91b440e8db\": container with ID starting with 75c42bd939b767b3ed2d045f4188d3ceba3a4884a2eccba57dd3cc91b440e8db not found: ID does not exist" containerID="75c42bd939b767b3ed2d045f4188d3ceba3a4884a2eccba57dd3cc91b440e8db" Nov 21 11:17:14 crc kubenswrapper[4972]: I1121 11:17:14.814872 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75c42bd939b767b3ed2d045f4188d3ceba3a4884a2eccba57dd3cc91b440e8db"} err="failed to get container status \"75c42bd939b767b3ed2d045f4188d3ceba3a4884a2eccba57dd3cc91b440e8db\": rpc error: code = NotFound desc = could not find container \"75c42bd939b767b3ed2d045f4188d3ceba3a4884a2eccba57dd3cc91b440e8db\": container with ID starting with 75c42bd939b767b3ed2d045f4188d3ceba3a4884a2eccba57dd3cc91b440e8db not found: ID does not exist" Nov 21 11:17:15 crc kubenswrapper[4972]: I1121 11:17:15.686025 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6df6f8fcbc-52ls7" Nov 21 11:17:15 crc kubenswrapper[4972]: I1121 11:17:15.733587 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6df6f8fcbc-52ls7"] Nov 21 11:17:15 crc kubenswrapper[4972]: I1121 11:17:15.747332 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6df6f8fcbc-52ls7"] Nov 21 11:17:15 crc kubenswrapper[4972]: I1121 11:17:15.797125 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41ee93c6-845e-4aac-8cb3-16a222d124b1" path="/var/lib/kubelet/pods/41ee93c6-845e-4aac-8cb3-16a222d124b1/volumes" Nov 21 11:17:15 crc kubenswrapper[4972]: I1121 11:17:15.886332 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-g89pr"] Nov 21 11:17:15 crc kubenswrapper[4972]: E1121 11:17:15.886764 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ee93c6-845e-4aac-8cb3-16a222d124b1" containerName="dnsmasq-dns" Nov 21 11:17:15 crc kubenswrapper[4972]: I1121 11:17:15.886787 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ee93c6-845e-4aac-8cb3-16a222d124b1" containerName="dnsmasq-dns" Nov 21 11:17:15 crc kubenswrapper[4972]: E1121 11:17:15.886812 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ee93c6-845e-4aac-8cb3-16a222d124b1" containerName="init" Nov 21 11:17:15 crc kubenswrapper[4972]: I1121 11:17:15.886821 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ee93c6-845e-4aac-8cb3-16a222d124b1" containerName="init" Nov 21 11:17:15 crc kubenswrapper[4972]: I1121 11:17:15.887076 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="41ee93c6-845e-4aac-8cb3-16a222d124b1" containerName="dnsmasq-dns" Nov 21 11:17:15 crc kubenswrapper[4972]: I1121 11:17:15.888089 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-g89pr" Nov 21 11:17:15 crc kubenswrapper[4972]: I1121 11:17:15.893661 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-g89pr"] Nov 21 11:17:15 crc kubenswrapper[4972]: I1121 11:17:15.933481 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrfc8\" (UniqueName: \"kubernetes.io/projected/8976598c-43fd-4afe-8593-4c275d67f18a-kube-api-access-mrfc8\") pod \"cinder-db-create-g89pr\" (UID: \"8976598c-43fd-4afe-8593-4c275d67f18a\") " pod="openstack/cinder-db-create-g89pr" Nov 21 11:17:15 crc kubenswrapper[4972]: I1121 11:17:15.933627 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8976598c-43fd-4afe-8593-4c275d67f18a-operator-scripts\") pod \"cinder-db-create-g89pr\" (UID: \"8976598c-43fd-4afe-8593-4c275d67f18a\") " pod="openstack/cinder-db-create-g89pr" Nov 21 11:17:15 crc kubenswrapper[4972]: I1121 11:17:15.979189 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-e73e-account-create-xfjp5"] Nov 21 11:17:15 crc kubenswrapper[4972]: I1121 11:17:15.980479 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e73e-account-create-xfjp5" Nov 21 11:17:15 crc kubenswrapper[4972]: I1121 11:17:15.982869 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 21 11:17:15 crc kubenswrapper[4972]: I1121 11:17:15.986638 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e73e-account-create-xfjp5"] Nov 21 11:17:16 crc kubenswrapper[4972]: I1121 11:17:16.034175 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrfc8\" (UniqueName: \"kubernetes.io/projected/8976598c-43fd-4afe-8593-4c275d67f18a-kube-api-access-mrfc8\") pod \"cinder-db-create-g89pr\" (UID: \"8976598c-43fd-4afe-8593-4c275d67f18a\") " pod="openstack/cinder-db-create-g89pr" Nov 21 11:17:16 crc kubenswrapper[4972]: I1121 11:17:16.034255 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21211899-cf98-49a2-9711-a49cfcbafabd-operator-scripts\") pod \"cinder-e73e-account-create-xfjp5\" (UID: \"21211899-cf98-49a2-9711-a49cfcbafabd\") " pod="openstack/cinder-e73e-account-create-xfjp5" Nov 21 11:17:16 crc kubenswrapper[4972]: I1121 11:17:16.034297 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8976598c-43fd-4afe-8593-4c275d67f18a-operator-scripts\") pod \"cinder-db-create-g89pr\" (UID: \"8976598c-43fd-4afe-8593-4c275d67f18a\") " pod="openstack/cinder-db-create-g89pr" Nov 21 11:17:16 crc kubenswrapper[4972]: I1121 11:17:16.034317 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkjbh\" (UniqueName: \"kubernetes.io/projected/21211899-cf98-49a2-9711-a49cfcbafabd-kube-api-access-pkjbh\") pod \"cinder-e73e-account-create-xfjp5\" (UID: \"21211899-cf98-49a2-9711-a49cfcbafabd\") " pod="openstack/cinder-e73e-account-create-xfjp5" Nov 21 11:17:16 crc kubenswrapper[4972]: I1121 11:17:16.035046 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8976598c-43fd-4afe-8593-4c275d67f18a-operator-scripts\") pod \"cinder-db-create-g89pr\" (UID: \"8976598c-43fd-4afe-8593-4c275d67f18a\") " pod="openstack/cinder-db-create-g89pr" Nov 21 11:17:16 crc kubenswrapper[4972]: I1121 11:17:16.052626 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrfc8\" (UniqueName: \"kubernetes.io/projected/8976598c-43fd-4afe-8593-4c275d67f18a-kube-api-access-mrfc8\") pod \"cinder-db-create-g89pr\" (UID: \"8976598c-43fd-4afe-8593-4c275d67f18a\") " pod="openstack/cinder-db-create-g89pr" Nov 21 11:17:16 crc kubenswrapper[4972]: I1121 11:17:16.135882 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21211899-cf98-49a2-9711-a49cfcbafabd-operator-scripts\") pod \"cinder-e73e-account-create-xfjp5\" (UID: \"21211899-cf98-49a2-9711-a49cfcbafabd\") " pod="openstack/cinder-e73e-account-create-xfjp5" Nov 21 11:17:16 crc kubenswrapper[4972]: I1121 11:17:16.135941 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkjbh\" (UniqueName: \"kubernetes.io/projected/21211899-cf98-49a2-9711-a49cfcbafabd-kube-api-access-pkjbh\") pod \"cinder-e73e-account-create-xfjp5\" (UID: \"21211899-cf98-49a2-9711-a49cfcbafabd\") " 
pod="openstack/cinder-e73e-account-create-xfjp5" Nov 21 11:17:16 crc kubenswrapper[4972]: I1121 11:17:16.136856 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21211899-cf98-49a2-9711-a49cfcbafabd-operator-scripts\") pod \"cinder-e73e-account-create-xfjp5\" (UID: \"21211899-cf98-49a2-9711-a49cfcbafabd\") " pod="openstack/cinder-e73e-account-create-xfjp5" Nov 21 11:17:16 crc kubenswrapper[4972]: I1121 11:17:16.158623 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkjbh\" (UniqueName: \"kubernetes.io/projected/21211899-cf98-49a2-9711-a49cfcbafabd-kube-api-access-pkjbh\") pod \"cinder-e73e-account-create-xfjp5\" (UID: \"21211899-cf98-49a2-9711-a49cfcbafabd\") " pod="openstack/cinder-e73e-account-create-xfjp5" Nov 21 11:17:16 crc kubenswrapper[4972]: I1121 11:17:16.209052 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-g89pr" Nov 21 11:17:16 crc kubenswrapper[4972]: I1121 11:17:16.310790 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e73e-account-create-xfjp5" Nov 21 11:17:16 crc kubenswrapper[4972]: I1121 11:17:16.724972 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-g89pr"] Nov 21 11:17:16 crc kubenswrapper[4972]: W1121 11:17:16.727231 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8976598c_43fd_4afe_8593_4c275d67f18a.slice/crio-484861757f9bafb96e86fe36e31e4c47fb901f8bea1f5c315681e1a3b658dc0f WatchSource:0}: Error finding container 484861757f9bafb96e86fe36e31e4c47fb901f8bea1f5c315681e1a3b658dc0f: Status 404 returned error can't find the container with id 484861757f9bafb96e86fe36e31e4c47fb901f8bea1f5c315681e1a3b658dc0f Nov 21 11:17:16 crc kubenswrapper[4972]: I1121 11:17:16.759289 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:17:16 crc kubenswrapper[4972]: E1121 11:17:16.759647 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:17:16 crc kubenswrapper[4972]: I1121 11:17:16.839212 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e73e-account-create-xfjp5"] Nov 21 11:17:16 crc kubenswrapper[4972]: W1121 11:17:16.841566 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21211899_cf98_49a2_9711_a49cfcbafabd.slice/crio-02f3c1dcac9abf5d311579b11fa5da06cc53d8bf5dc23e30e6cd094fda128d21 WatchSource:0}: Error finding container 02f3c1dcac9abf5d311579b11fa5da06cc53d8bf5dc23e30e6cd094fda128d21: Status 404 returned error can't find the container with id 02f3c1dcac9abf5d311579b11fa5da06cc53d8bf5dc23e30e6cd094fda128d21 Nov 21 11:17:17 crc kubenswrapper[4972]: I1121 11:17:17.705919 4972 generic.go:334] "Generic (PLEG): container finished" podID="21211899-cf98-49a2-9711-a49cfcbafabd" containerID="9de914ad570d6b9a6029d344e70c1e284df15473cc4a37dcd322d10bb9148ebb" exitCode=0 Nov 21 
11:17:17 crc kubenswrapper[4972]: I1121 11:17:17.706025 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e73e-account-create-xfjp5" event={"ID":"21211899-cf98-49a2-9711-a49cfcbafabd","Type":"ContainerDied","Data":"9de914ad570d6b9a6029d344e70c1e284df15473cc4a37dcd322d10bb9148ebb"} Nov 21 11:17:17 crc kubenswrapper[4972]: I1121 11:17:17.706242 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e73e-account-create-xfjp5" event={"ID":"21211899-cf98-49a2-9711-a49cfcbafabd","Type":"ContainerStarted","Data":"02f3c1dcac9abf5d311579b11fa5da06cc53d8bf5dc23e30e6cd094fda128d21"} Nov 21 11:17:17 crc kubenswrapper[4972]: I1121 11:17:17.708952 4972 generic.go:334] "Generic (PLEG): container finished" podID="8976598c-43fd-4afe-8593-4c275d67f18a" containerID="5927faf389b0adc56c50c0100cc43aed2b97c75a4f8d4744cd0dbec48cb41983" exitCode=0 Nov 21 11:17:17 crc kubenswrapper[4972]: I1121 11:17:17.709038 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-g89pr" event={"ID":"8976598c-43fd-4afe-8593-4c275d67f18a","Type":"ContainerDied","Data":"5927faf389b0adc56c50c0100cc43aed2b97c75a4f8d4744cd0dbec48cb41983"} Nov 21 11:17:17 crc kubenswrapper[4972]: I1121 11:17:17.709142 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-g89pr" event={"ID":"8976598c-43fd-4afe-8593-4c275d67f18a","Type":"ContainerStarted","Data":"484861757f9bafb96e86fe36e31e4c47fb901f8bea1f5c315681e1a3b658dc0f"} Nov 21 11:17:19 crc kubenswrapper[4972]: I1121 11:17:19.204169 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e73e-account-create-xfjp5" Nov 21 11:17:19 crc kubenswrapper[4972]: I1121 11:17:19.209798 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-g89pr" Nov 21 11:17:19 crc kubenswrapper[4972]: I1121 11:17:19.293534 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8976598c-43fd-4afe-8593-4c275d67f18a-operator-scripts\") pod \"8976598c-43fd-4afe-8593-4c275d67f18a\" (UID: \"8976598c-43fd-4afe-8593-4c275d67f18a\") " Nov 21 11:17:19 crc kubenswrapper[4972]: I1121 11:17:19.293576 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkjbh\" (UniqueName: \"kubernetes.io/projected/21211899-cf98-49a2-9711-a49cfcbafabd-kube-api-access-pkjbh\") pod \"21211899-cf98-49a2-9711-a49cfcbafabd\" (UID: \"21211899-cf98-49a2-9711-a49cfcbafabd\") " Nov 21 11:17:19 crc kubenswrapper[4972]: I1121 11:17:19.293638 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrfc8\" (UniqueName: \"kubernetes.io/projected/8976598c-43fd-4afe-8593-4c275d67f18a-kube-api-access-mrfc8\") pod \"8976598c-43fd-4afe-8593-4c275d67f18a\" (UID: \"8976598c-43fd-4afe-8593-4c275d67f18a\") " Nov 21 11:17:19 crc kubenswrapper[4972]: I1121 11:17:19.293654 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21211899-cf98-49a2-9711-a49cfcbafabd-operator-scripts\") pod \"21211899-cf98-49a2-9711-a49cfcbafabd\" (UID: \"21211899-cf98-49a2-9711-a49cfcbafabd\") " Nov 21 11:17:19 crc kubenswrapper[4972]: I1121 11:17:19.294510 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8976598c-43fd-4afe-8593-4c275d67f18a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8976598c-43fd-4afe-8593-4c275d67f18a" (UID: "8976598c-43fd-4afe-8593-4c275d67f18a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:17:19 crc kubenswrapper[4972]: I1121 11:17:19.294729 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21211899-cf98-49a2-9711-a49cfcbafabd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21211899-cf98-49a2-9711-a49cfcbafabd" (UID: "21211899-cf98-49a2-9711-a49cfcbafabd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:17:19 crc kubenswrapper[4972]: I1121 11:17:19.303507 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21211899-cf98-49a2-9711-a49cfcbafabd-kube-api-access-pkjbh" (OuterVolumeSpecName: "kube-api-access-pkjbh") pod "21211899-cf98-49a2-9711-a49cfcbafabd" (UID: "21211899-cf98-49a2-9711-a49cfcbafabd"). InnerVolumeSpecName "kube-api-access-pkjbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:17:19 crc kubenswrapper[4972]: I1121 11:17:19.303808 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8976598c-43fd-4afe-8593-4c275d67f18a-kube-api-access-mrfc8" (OuterVolumeSpecName: "kube-api-access-mrfc8") pod "8976598c-43fd-4afe-8593-4c275d67f18a" (UID: "8976598c-43fd-4afe-8593-4c275d67f18a"). InnerVolumeSpecName "kube-api-access-mrfc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:17:19 crc kubenswrapper[4972]: I1121 11:17:19.395887 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8976598c-43fd-4afe-8593-4c275d67f18a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:19 crc kubenswrapper[4972]: I1121 11:17:19.395934 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkjbh\" (UniqueName: \"kubernetes.io/projected/21211899-cf98-49a2-9711-a49cfcbafabd-kube-api-access-pkjbh\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:19 crc kubenswrapper[4972]: I1121 11:17:19.395951 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrfc8\" (UniqueName: \"kubernetes.io/projected/8976598c-43fd-4afe-8593-4c275d67f18a-kube-api-access-mrfc8\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:19 crc kubenswrapper[4972]: I1121 11:17:19.395964 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21211899-cf98-49a2-9711-a49cfcbafabd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:19 crc kubenswrapper[4972]: I1121 11:17:19.731359 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e73e-account-create-xfjp5" event={"ID":"21211899-cf98-49a2-9711-a49cfcbafabd","Type":"ContainerDied","Data":"02f3c1dcac9abf5d311579b11fa5da06cc53d8bf5dc23e30e6cd094fda128d21"} Nov 21 11:17:19 crc kubenswrapper[4972]: I1121 11:17:19.731389 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e73e-account-create-xfjp5" Nov 21 11:17:19 crc kubenswrapper[4972]: I1121 11:17:19.731400 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02f3c1dcac9abf5d311579b11fa5da06cc53d8bf5dc23e30e6cd094fda128d21" Nov 21 11:17:19 crc kubenswrapper[4972]: I1121 11:17:19.733760 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-g89pr" event={"ID":"8976598c-43fd-4afe-8593-4c275d67f18a","Type":"ContainerDied","Data":"484861757f9bafb96e86fe36e31e4c47fb901f8bea1f5c315681e1a3b658dc0f"} Nov 21 11:17:19 crc kubenswrapper[4972]: I1121 11:17:19.733782 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="484861757f9bafb96e86fe36e31e4c47fb901f8bea1f5c315681e1a3b658dc0f" Nov 21 11:17:19 crc kubenswrapper[4972]: I1121 11:17:19.733792 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-g89pr" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.246112 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-m2fhd"] Nov 21 11:17:21 crc kubenswrapper[4972]: E1121 11:17:21.246709 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21211899-cf98-49a2-9711-a49cfcbafabd" containerName="mariadb-account-create" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.246722 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="21211899-cf98-49a2-9711-a49cfcbafabd" containerName="mariadb-account-create" Nov 21 11:17:21 crc kubenswrapper[4972]: E1121 11:17:21.246733 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8976598c-43fd-4afe-8593-4c275d67f18a" containerName="mariadb-database-create" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.246739 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8976598c-43fd-4afe-8593-4c275d67f18a" containerName="mariadb-database-create" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.246968 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="8976598c-43fd-4afe-8593-4c275d67f18a" containerName="mariadb-database-create" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.246997 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="21211899-cf98-49a2-9711-a49cfcbafabd" containerName="mariadb-account-create" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.247600 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.249543 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2qpqk" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.250794 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.264664 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.265313 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-m2fhd"] Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.436426 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7318c041-d37d-4001-9620-6ef043e28795-etc-machine-id\") pod \"cinder-db-sync-m2fhd\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.437053 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkbfq\" (UniqueName: \"kubernetes.io/projected/7318c041-d37d-4001-9620-6ef043e28795-kube-api-access-zkbfq\") pod \"cinder-db-sync-m2fhd\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.437990 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-db-sync-config-data\") pod \"cinder-db-sync-m2fhd\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.438241 4972 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-config-data\") pod \"cinder-db-sync-m2fhd\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.438490 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-scripts\") pod \"cinder-db-sync-m2fhd\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.438625 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-combined-ca-bundle\") pod \"cinder-db-sync-m2fhd\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.540518 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-scripts\") pod \"cinder-db-sync-m2fhd\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.540820 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-combined-ca-bundle\") pod \"cinder-db-sync-m2fhd\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.540922 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7318c041-d37d-4001-9620-6ef043e28795-etc-machine-id\") pod \"cinder-db-sync-m2fhd\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.541059 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkbfq\" (UniqueName: \"kubernetes.io/projected/7318c041-d37d-4001-9620-6ef043e28795-kube-api-access-zkbfq\") pod \"cinder-db-sync-m2fhd\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.541093 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-db-sync-config-data\") pod \"cinder-db-sync-m2fhd\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.541170 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-config-data\") pod \"cinder-db-sync-m2fhd\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.541249 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/7318c041-d37d-4001-9620-6ef043e28795-etc-machine-id\") pod \"cinder-db-sync-m2fhd\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.546789 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-db-sync-config-data\") pod \"cinder-db-sync-m2fhd\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.547810 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-config-data\") pod \"cinder-db-sync-m2fhd\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.548215 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-scripts\") pod \"cinder-db-sync-m2fhd\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.559686 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-combined-ca-bundle\") pod \"cinder-db-sync-m2fhd\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.570136 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkbfq\" (UniqueName: \"kubernetes.io/projected/7318c041-d37d-4001-9620-6ef043e28795-kube-api-access-zkbfq\") pod \"cinder-db-sync-m2fhd\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:21 crc kubenswrapper[4972]: I1121 11:17:21.572111 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:22 crc kubenswrapper[4972]: I1121 11:17:22.077708 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-m2fhd"] Nov 21 11:17:22 crc kubenswrapper[4972]: I1121 11:17:22.767661 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-m2fhd" event={"ID":"7318c041-d37d-4001-9620-6ef043e28795","Type":"ContainerStarted","Data":"ee2664e2edd2c31a3afe2704bc402913c370ce9c3442540119419716cefa6c4e"} Nov 21 11:17:22 crc kubenswrapper[4972]: I1121 11:17:22.768044 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-m2fhd" event={"ID":"7318c041-d37d-4001-9620-6ef043e28795","Type":"ContainerStarted","Data":"7e468b9c368fd51b1158050a0481fc68572d393cafab886c11d8e62ad8d12376"} Nov 21 11:17:22 crc kubenswrapper[4972]: I1121 11:17:22.801888 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-m2fhd" podStartSLOduration=1.801857574 podStartE2EDuration="1.801857574s" podCreationTimestamp="2025-11-21 11:17:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:17:22.797011656 +0000 UTC m=+5787.906154174" watchObservedRunningTime="2025-11-21 11:17:22.801857574 +0000 UTC m=+5787.911000072" Nov 21 11:17:25 crc kubenswrapper[4972]: I1121 11:17:25.807552 4972 generic.go:334] "Generic (PLEG): container finished" podID="7318c041-d37d-4001-9620-6ef043e28795" containerID="ee2664e2edd2c31a3afe2704bc402913c370ce9c3442540119419716cefa6c4e" exitCode=0 Nov 21 11:17:25 crc kubenswrapper[4972]: I1121 11:17:25.807641 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-m2fhd" event={"ID":"7318c041-d37d-4001-9620-6ef043e28795","Type":"ContainerDied","Data":"ee2664e2edd2c31a3afe2704bc402913c370ce9c3442540119419716cefa6c4e"} Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.185534 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.363308 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-combined-ca-bundle\") pod \"7318c041-d37d-4001-9620-6ef043e28795\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.363406 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7318c041-d37d-4001-9620-6ef043e28795-etc-machine-id\") pod \"7318c041-d37d-4001-9620-6ef043e28795\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.363503 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkbfq\" (UniqueName: \"kubernetes.io/projected/7318c041-d37d-4001-9620-6ef043e28795-kube-api-access-zkbfq\") pod \"7318c041-d37d-4001-9620-6ef043e28795\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.363596 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-config-data\") pod \"7318c041-d37d-4001-9620-6ef043e28795\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.363629 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-scripts\") pod \"7318c041-d37d-4001-9620-6ef043e28795\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.363676 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-db-sync-config-data\") pod \"7318c041-d37d-4001-9620-6ef043e28795\" (UID: \"7318c041-d37d-4001-9620-6ef043e28795\") " Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.363608 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7318c041-d37d-4001-9620-6ef043e28795-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7318c041-d37d-4001-9620-6ef043e28795" (UID: "7318c041-d37d-4001-9620-6ef043e28795"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.392393 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7318c041-d37d-4001-9620-6ef043e28795" (UID: "7318c041-d37d-4001-9620-6ef043e28795"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.392508 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7318c041-d37d-4001-9620-6ef043e28795-kube-api-access-zkbfq" (OuterVolumeSpecName: "kube-api-access-zkbfq") pod "7318c041-d37d-4001-9620-6ef043e28795" (UID: "7318c041-d37d-4001-9620-6ef043e28795"). InnerVolumeSpecName "kube-api-access-zkbfq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.392974 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-scripts" (OuterVolumeSpecName: "scripts") pod "7318c041-d37d-4001-9620-6ef043e28795" (UID: "7318c041-d37d-4001-9620-6ef043e28795"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.401414 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7318c041-d37d-4001-9620-6ef043e28795" (UID: "7318c041-d37d-4001-9620-6ef043e28795"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.424094 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-config-data" (OuterVolumeSpecName: "config-data") pod "7318c041-d37d-4001-9620-6ef043e28795" (UID: "7318c041-d37d-4001-9620-6ef043e28795"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.465368 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkbfq\" (UniqueName: \"kubernetes.io/projected/7318c041-d37d-4001-9620-6ef043e28795-kube-api-access-zkbfq\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.465409 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.465421 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.465433 4972 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.465443 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7318c041-d37d-4001-9620-6ef043e28795-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.465452 4972 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7318c041-d37d-4001-9620-6ef043e28795-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.835151 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-m2fhd" event={"ID":"7318c041-d37d-4001-9620-6ef043e28795","Type":"ContainerDied","Data":"7e468b9c368fd51b1158050a0481fc68572d393cafab886c11d8e62ad8d12376"} Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.835192 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e468b9c368fd51b1158050a0481fc68572d393cafab886c11d8e62ad8d12376" Nov 21 11:17:27 crc kubenswrapper[4972]: I1121 11:17:27.835208 4972 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-m2fhd" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.186530 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6467ff5dcf-hjvxk"] Nov 21 11:17:28 crc kubenswrapper[4972]: E1121 11:17:28.188404 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7318c041-d37d-4001-9620-6ef043e28795" containerName="cinder-db-sync" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.188871 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="7318c041-d37d-4001-9620-6ef043e28795" containerName="cinder-db-sync" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.189265 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="7318c041-d37d-4001-9620-6ef043e28795" containerName="cinder-db-sync" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.199916 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.212393 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6467ff5dcf-hjvxk"] Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.290173 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-dns-svc\") pod \"dnsmasq-dns-6467ff5dcf-hjvxk\" (UID: \"79cf0afd-514a-49cf-9e07-13efd464b0b9\") " pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.290224 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-ovsdbserver-sb\") pod \"dnsmasq-dns-6467ff5dcf-hjvxk\" (UID: \"79cf0afd-514a-49cf-9e07-13efd464b0b9\") " pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.290299 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-config\") pod \"dnsmasq-dns-6467ff5dcf-hjvxk\" (UID: \"79cf0afd-514a-49cf-9e07-13efd464b0b9\") " pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.290354 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-ovsdbserver-nb\") pod \"dnsmasq-dns-6467ff5dcf-hjvxk\" (UID: \"79cf0afd-514a-49cf-9e07-13efd464b0b9\") " pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.290425 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94vqf\" (UniqueName: \"kubernetes.io/projected/79cf0afd-514a-49cf-9e07-13efd464b0b9-kube-api-access-94vqf\") pod \"dnsmasq-dns-6467ff5dcf-hjvxk\" (UID: \"79cf0afd-514a-49cf-9e07-13efd464b0b9\") " pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.374743 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.377083 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.379813 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.379851 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.380008 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.380039 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2qpqk" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.391659 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-dns-svc\") pod \"dnsmasq-dns-6467ff5dcf-hjvxk\" (UID: \"79cf0afd-514a-49cf-9e07-13efd464b0b9\") " pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.391699 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-ovsdbserver-sb\") pod \"dnsmasq-dns-6467ff5dcf-hjvxk\" (UID: \"79cf0afd-514a-49cf-9e07-13efd464b0b9\") " pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.391742 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-config\") pod \"dnsmasq-dns-6467ff5dcf-hjvxk\" (UID: \"79cf0afd-514a-49cf-9e07-13efd464b0b9\") " pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.391782 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-ovsdbserver-nb\") pod \"dnsmasq-dns-6467ff5dcf-hjvxk\" (UID: \"79cf0afd-514a-49cf-9e07-13efd464b0b9\") " pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.391822 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94vqf\" (UniqueName: \"kubernetes.io/projected/79cf0afd-514a-49cf-9e07-13efd464b0b9-kube-api-access-94vqf\") pod \"dnsmasq-dns-6467ff5dcf-hjvxk\" (UID: \"79cf0afd-514a-49cf-9e07-13efd464b0b9\") " pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.392723 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-dns-svc\") pod \"dnsmasq-dns-6467ff5dcf-hjvxk\" (UID: \"79cf0afd-514a-49cf-9e07-13efd464b0b9\") " pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.393499 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-config\") pod \"dnsmasq-dns-6467ff5dcf-hjvxk\" (UID: \"79cf0afd-514a-49cf-9e07-13efd464b0b9\") " pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.395201 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-ovsdbserver-sb\") pod \"dnsmasq-dns-6467ff5dcf-hjvxk\" (UID: \"79cf0afd-514a-49cf-9e07-13efd464b0b9\") " pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.396183 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-ovsdbserver-nb\") pod \"dnsmasq-dns-6467ff5dcf-hjvxk\" (UID: \"79cf0afd-514a-49cf-9e07-13efd464b0b9\") " pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.408102 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.418127 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94vqf\" (UniqueName: \"kubernetes.io/projected/79cf0afd-514a-49cf-9e07-13efd464b0b9-kube-api-access-94vqf\") pod \"dnsmasq-dns-6467ff5dcf-hjvxk\" (UID: \"79cf0afd-514a-49cf-9e07-13efd464b0b9\") " pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.494373 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f259f468-d34c-4536-b8b2-c6eda578a447-logs\") pod \"cinder-api-0\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.494476 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-config-data\") pod \"cinder-api-0\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.494808 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-config-data-custom\") pod \"cinder-api-0\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.494996 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.495167 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f259f468-d34c-4536-b8b2-c6eda578a447-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.495211 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j9r2\" (UniqueName: \"kubernetes.io/projected/f259f468-d34c-4536-b8b2-c6eda578a447-kube-api-access-2j9r2\") pod \"cinder-api-0\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.495343 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-scripts\") pod \"cinder-api-0\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.523147 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.597907 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f259f468-d34c-4536-b8b2-c6eda578a447-logs\") pod \"cinder-api-0\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.598016 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-config-data\") pod \"cinder-api-0\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.598083 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-config-data-custom\") pod \"cinder-api-0\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.598121 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.598160 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f259f468-d34c-4536-b8b2-c6eda578a447-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.598184 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j9r2\" (UniqueName: \"kubernetes.io/projected/f259f468-d34c-4536-b8b2-c6eda578a447-kube-api-access-2j9r2\") pod \"cinder-api-0\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.598220 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-scripts\") pod \"cinder-api-0\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.598331 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f259f468-d34c-4536-b8b2-c6eda578a447-logs\") pod \"cinder-api-0\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.598870 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f259f468-d34c-4536-b8b2-c6eda578a447-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " pod="openstack/cinder-api-0" Nov 21 11:17:28 crc 
kubenswrapper[4972]: I1121 11:17:28.608255 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-config-data\") pod \"cinder-api-0\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.608770 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.608901 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-config-data-custom\") pod \"cinder-api-0\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.610594 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-scripts\") pod \"cinder-api-0\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.619692 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j9r2\" (UniqueName: \"kubernetes.io/projected/f259f468-d34c-4536-b8b2-c6eda578a447-kube-api-access-2j9r2\") pod \"cinder-api-0\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.694725 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 21 11:17:28 crc kubenswrapper[4972]: I1121 11:17:28.760766 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:17:28 crc kubenswrapper[4972]: E1121 11:17:28.761031 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:17:29 crc kubenswrapper[4972]: I1121 11:17:29.074997 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6467ff5dcf-hjvxk"] Nov 21 11:17:29 crc kubenswrapper[4972]: W1121 11:17:29.077124 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79cf0afd_514a_49cf_9e07_13efd464b0b9.slice/crio-1719787277dabcc71d0b3c24b5ce22b58bf7453b4de53d8de9d32135dfa59643 WatchSource:0}: Error finding container 1719787277dabcc71d0b3c24b5ce22b58bf7453b4de53d8de9d32135dfa59643: Status 404 returned error can't find the container with id 1719787277dabcc71d0b3c24b5ce22b58bf7453b4de53d8de9d32135dfa59643 Nov 21 11:17:29 crc kubenswrapper[4972]: I1121 11:17:29.187223 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 21 11:17:29 crc kubenswrapper[4972]: I1121 11:17:29.864110 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f259f468-d34c-4536-b8b2-c6eda578a447","Type":"ContainerStarted","Data":"3279be070e284fc884487fa9e9cf9a85c43e044010789766fc7062c7677771d1"} Nov 21 11:17:29 crc kubenswrapper[4972]: I1121 11:17:29.864479 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f259f468-d34c-4536-b8b2-c6eda578a447","Type":"ContainerStarted","Data":"a678e64e74e219b7bd99390bfab67f1f61e8876c52c6867d75a9574f929b476f"} Nov 21 11:17:29 crc kubenswrapper[4972]: I1121 11:17:29.867450 4972 generic.go:334] "Generic (PLEG): container finished" podID="79cf0afd-514a-49cf-9e07-13efd464b0b9" containerID="dacf26a15e8d9a42eb01851ce85f076b5c1c412ab0928b88cb1b27faf8fe4934" exitCode=0 Nov 21 11:17:29 crc kubenswrapper[4972]: I1121 11:17:29.867490 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" event={"ID":"79cf0afd-514a-49cf-9e07-13efd464b0b9","Type":"ContainerDied","Data":"dacf26a15e8d9a42eb01851ce85f076b5c1c412ab0928b88cb1b27faf8fe4934"} Nov 21 11:17:29 crc kubenswrapper[4972]: I1121 11:17:29.867515 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" event={"ID":"79cf0afd-514a-49cf-9e07-13efd464b0b9","Type":"ContainerStarted","Data":"1719787277dabcc71d0b3c24b5ce22b58bf7453b4de53d8de9d32135dfa59643"} Nov 21 11:17:30 crc kubenswrapper[4972]: I1121 11:17:30.881168 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" event={"ID":"79cf0afd-514a-49cf-9e07-13efd464b0b9","Type":"ContainerStarted","Data":"c33da2a7fdc7617f95faecf199ace086e0d1a27b9a2c08abe92a41e7a3d39a30"} Nov 21 11:17:30 crc kubenswrapper[4972]: I1121 11:17:30.881590 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" 
Nov 21 11:17:30 crc kubenswrapper[4972]: I1121 11:17:30.883611 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f259f468-d34c-4536-b8b2-c6eda578a447","Type":"ContainerStarted","Data":"7eb7a907d95f6882c5d96c53f136c64646057fbe0046409ee6a4cc5a5cc24240"} Nov 21 11:17:30 crc kubenswrapper[4972]: I1121 11:17:30.883791 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 21 11:17:30 crc kubenswrapper[4972]: I1121 11:17:30.908024 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" podStartSLOduration=2.908002409 podStartE2EDuration="2.908002409s" podCreationTimestamp="2025-11-21 11:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:17:30.905252847 +0000 UTC m=+5796.014395365" watchObservedRunningTime="2025-11-21 11:17:30.908002409 +0000 UTC m=+5796.017144917" Nov 21 11:17:30 crc kubenswrapper[4972]: I1121 11:17:30.927756 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.927733902 podStartE2EDuration="2.927733902s" podCreationTimestamp="2025-11-21 11:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:17:30.921812915 +0000 UTC m=+5796.030955413" watchObservedRunningTime="2025-11-21 11:17:30.927733902 +0000 UTC m=+5796.036876400" Nov 21 11:17:38 crc kubenswrapper[4972]: I1121 11:17:38.525738 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" Nov 21 11:17:38 crc kubenswrapper[4972]: I1121 11:17:38.602110 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bdd64c87f-mqbfk"] Nov 21 11:17:38 crc kubenswrapper[4972]: I1121 11:17:38.602734 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" podUID="446e20bf-e455-4b55-9c86-c5c6cb95070e" containerName="dnsmasq-dns" containerID="cri-o://c63a69cbacd7d2cae4dd6c2df6365f2948570a1b5ba66e644349bdc3fd39432e" gracePeriod=10 Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.026761 4972 generic.go:334] "Generic (PLEG): container finished" podID="446e20bf-e455-4b55-9c86-c5c6cb95070e" containerID="c63a69cbacd7d2cae4dd6c2df6365f2948570a1b5ba66e644349bdc3fd39432e" exitCode=0 Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.026820 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" event={"ID":"446e20bf-e455-4b55-9c86-c5c6cb95070e","Type":"ContainerDied","Data":"c63a69cbacd7d2cae4dd6c2df6365f2948570a1b5ba66e644349bdc3fd39432e"} Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.146012 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.225349 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-ovsdbserver-nb\") pod \"446e20bf-e455-4b55-9c86-c5c6cb95070e\" (UID: \"446e20bf-e455-4b55-9c86-c5c6cb95070e\") " Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.226150 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d4hz\" (UniqueName: \"kubernetes.io/projected/446e20bf-e455-4b55-9c86-c5c6cb95070e-kube-api-access-5d4hz\") pod \"446e20bf-e455-4b55-9c86-c5c6cb95070e\" (UID: \"446e20bf-e455-4b55-9c86-c5c6cb95070e\") " Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.226199 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-dns-svc\") pod \"446e20bf-e455-4b55-9c86-c5c6cb95070e\" (UID: \"446e20bf-e455-4b55-9c86-c5c6cb95070e\") " Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.226242 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-config\") pod \"446e20bf-e455-4b55-9c86-c5c6cb95070e\" (UID: \"446e20bf-e455-4b55-9c86-c5c6cb95070e\") " Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.226426 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-ovsdbserver-sb\") pod \"446e20bf-e455-4b55-9c86-c5c6cb95070e\" (UID: \"446e20bf-e455-4b55-9c86-c5c6cb95070e\") " Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.233897 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/446e20bf-e455-4b55-9c86-c5c6cb95070e-kube-api-access-5d4hz" (OuterVolumeSpecName: "kube-api-access-5d4hz") pod "446e20bf-e455-4b55-9c86-c5c6cb95070e" (UID: "446e20bf-e455-4b55-9c86-c5c6cb95070e"). InnerVolumeSpecName "kube-api-access-5d4hz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.275227 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "446e20bf-e455-4b55-9c86-c5c6cb95070e" (UID: "446e20bf-e455-4b55-9c86-c5c6cb95070e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.286238 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "446e20bf-e455-4b55-9c86-c5c6cb95070e" (UID: "446e20bf-e455-4b55-9c86-c5c6cb95070e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.289961 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "446e20bf-e455-4b55-9c86-c5c6cb95070e" (UID: "446e20bf-e455-4b55-9c86-c5c6cb95070e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.308357 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-config" (OuterVolumeSpecName: "config") pod "446e20bf-e455-4b55-9c86-c5c6cb95070e" (UID: "446e20bf-e455-4b55-9c86-c5c6cb95070e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.328837 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.329079 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.329137 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d4hz\" (UniqueName: \"kubernetes.io/projected/446e20bf-e455-4b55-9c86-c5c6cb95070e-kube-api-access-5d4hz\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.329192 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.329248 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/446e20bf-e455-4b55-9c86-c5c6cb95070e-config\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.759668 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:17:39 crc kubenswrapper[4972]: E1121 11:17:39.760023 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.811245 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.811533 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="cbe61d30-0cab-4450-aea9-7f3bfa806221" containerName="nova-scheduler-scheduler" containerID="cri-o://88bbc5a650a0a241ee3d2d492f61e4d30e104b48bd343b1599559d337d7afe49" gracePeriod=30 Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.824026 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.824286 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1730bac7-5d9e-4989-9995-8920234d3eef" containerName="nova-api-log" containerID="cri-o://72515e346b510bcb8130cfd85c2ed3efffd9921b3d1b6441d399ee9d6352c28a" gracePeriod=30 Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.824414 4972 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1730bac7-5d9e-4989-9995-8920234d3eef" containerName="nova-api-api" containerID="cri-o://d75abd60f2559ddeeeb25fd6982cd02bef407cc5a36be3fe6b14dee79ae325d4" gracePeriod=30 Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.846849 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.847149 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6969f156-2c8d-44d0-85c0-8b2d08c4c138" containerName="nova-metadata-log" containerID="cri-o://838af30eda8e65630edd722cb208ceff9d3bba0e3fe1507637aca192476a25b5" gracePeriod=30 Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.847230 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6969f156-2c8d-44d0-85c0-8b2d08c4c138" containerName="nova-metadata-metadata" containerID="cri-o://1442d766ed402858dc119cb8d0d0499013b8ae8e878e4366818033e5854a9df0" gracePeriod=30 Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.878492 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.878907 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="8510db5e-9884-41b1-a7d1-575592454efd" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://9c6422188b460b3383a8cd4c611c756438b9739cf97d921f5bc30ecec64029ec" gracePeriod=30 Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.895178 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 11:17:39 crc kubenswrapper[4972]: I1121 11:17:39.895417 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="e058b60f-7e51-450d-8330-1b96ad510032" containerName="nova-cell0-conductor-conductor" containerID="cri-o://195c482ddda51e036d2e9a7020c2dce8e00305fac3a9367eaf67b86bb7769db2" gracePeriod=30 Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.037494 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" event={"ID":"446e20bf-e455-4b55-9c86-c5c6cb95070e","Type":"ContainerDied","Data":"9d1a021012d3e5fc05aa4e9a39abe7ab72b43647f352bcadbf5e71817174ffe2"} Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.037759 4972 scope.go:117] "RemoveContainer" containerID="c63a69cbacd7d2cae4dd6c2df6365f2948570a1b5ba66e644349bdc3fd39432e" Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.037512 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.039798 4972 generic.go:334] "Generic (PLEG): container finished" podID="1730bac7-5d9e-4989-9995-8920234d3eef" containerID="72515e346b510bcb8130cfd85c2ed3efffd9921b3d1b6441d399ee9d6352c28a" exitCode=143 Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.039955 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1730bac7-5d9e-4989-9995-8920234d3eef","Type":"ContainerDied","Data":"72515e346b510bcb8130cfd85c2ed3efffd9921b3d1b6441d399ee9d6352c28a"} Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.043914 4972 generic.go:334] "Generic (PLEG): container finished" podID="6969f156-2c8d-44d0-85c0-8b2d08c4c138" containerID="838af30eda8e65630edd722cb208ceff9d3bba0e3fe1507637aca192476a25b5" exitCode=143 Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.043961 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6969f156-2c8d-44d0-85c0-8b2d08c4c138","Type":"ContainerDied","Data":"838af30eda8e65630edd722cb208ceff9d3bba0e3fe1507637aca192476a25b5"} Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.060450 4972 scope.go:117] "RemoveContainer" containerID="b70ebc0760dc48f43f4e7f8d6eb1808c19b1a500ec5f3fc7f58c845080fd381d" Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.066784 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bdd64c87f-mqbfk"] Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.095071 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bdd64c87f-mqbfk"] Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.206781 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="8510db5e-9884-41b1-a7d1-575592454efd" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"http://10.217.1.58:6080/vnc_lite.html\": dial tcp 10.217.1.58:6080: connect: connection refused" Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.523236 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.582473 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.652474 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8510db5e-9884-41b1-a7d1-575592454efd-combined-ca-bundle\") pod \"8510db5e-9884-41b1-a7d1-575592454efd\" (UID: \"8510db5e-9884-41b1-a7d1-575592454efd\") " Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.652576 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8510db5e-9884-41b1-a7d1-575592454efd-config-data\") pod \"8510db5e-9884-41b1-a7d1-575592454efd\" (UID: \"8510db5e-9884-41b1-a7d1-575592454efd\") " Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.652647 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vrlk\" (UniqueName: \"kubernetes.io/projected/8510db5e-9884-41b1-a7d1-575592454efd-kube-api-access-8vrlk\") pod \"8510db5e-9884-41b1-a7d1-575592454efd\" (UID: \"8510db5e-9884-41b1-a7d1-575592454efd\") " Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.660595 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8510db5e-9884-41b1-a7d1-575592454efd-kube-api-access-8vrlk" (OuterVolumeSpecName: "kube-api-access-8vrlk") pod "8510db5e-9884-41b1-a7d1-575592454efd" (UID: "8510db5e-9884-41b1-a7d1-575592454efd"). InnerVolumeSpecName "kube-api-access-8vrlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.683053 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8510db5e-9884-41b1-a7d1-575592454efd-config-data" (OuterVolumeSpecName: "config-data") pod "8510db5e-9884-41b1-a7d1-575592454efd" (UID: "8510db5e-9884-41b1-a7d1-575592454efd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.686094 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8510db5e-9884-41b1-a7d1-575592454efd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8510db5e-9884-41b1-a7d1-575592454efd" (UID: "8510db5e-9884-41b1-a7d1-575592454efd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.755561 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8510db5e-9884-41b1-a7d1-575592454efd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.755613 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8510db5e-9884-41b1-a7d1-575592454efd-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:40 crc kubenswrapper[4972]: I1121 11:17:40.755626 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vrlk\" (UniqueName: \"kubernetes.io/projected/8510db5e-9884-41b1-a7d1-575592454efd-kube-api-access-8vrlk\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.054616 4972 generic.go:334] "Generic (PLEG): container finished" podID="8510db5e-9884-41b1-a7d1-575592454efd" containerID="9c6422188b460b3383a8cd4c611c756438b9739cf97d921f5bc30ecec64029ec" exitCode=0 Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.054677 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.054735 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8510db5e-9884-41b1-a7d1-575592454efd","Type":"ContainerDied","Data":"9c6422188b460b3383a8cd4c611c756438b9739cf97d921f5bc30ecec64029ec"} Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.054766 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8510db5e-9884-41b1-a7d1-575592454efd","Type":"ContainerDied","Data":"b988035671b0e6e1b574b36c3f215ff34fbfe85f3da6574f5471e7ab447e9e86"} Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.054783 4972 scope.go:117] "RemoveContainer" containerID="9c6422188b460b3383a8cd4c611c756438b9739cf97d921f5bc30ecec64029ec" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.086181 4972 scope.go:117] "RemoveContainer" containerID="9c6422188b460b3383a8cd4c611c756438b9739cf97d921f5bc30ecec64029ec" Nov 21 11:17:41 crc kubenswrapper[4972]: E1121 11:17:41.087030 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c6422188b460b3383a8cd4c611c756438b9739cf97d921f5bc30ecec64029ec\": container with ID starting with 9c6422188b460b3383a8cd4c611c756438b9739cf97d921f5bc30ecec64029ec not found: ID does not exist" containerID="9c6422188b460b3383a8cd4c611c756438b9739cf97d921f5bc30ecec64029ec" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.087078 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c6422188b460b3383a8cd4c611c756438b9739cf97d921f5bc30ecec64029ec"} err="failed to get container status \"9c6422188b460b3383a8cd4c611c756438b9739cf97d921f5bc30ecec64029ec\": rpc error: code = NotFound desc = could not find container \"9c6422188b460b3383a8cd4c611c756438b9739cf97d921f5bc30ecec64029ec\": container with ID starting with 9c6422188b460b3383a8cd4c611c756438b9739cf97d921f5bc30ecec64029ec not found: ID does not exist" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.097433 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 
11:17:41.109694 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.130089 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 11:17:41 crc kubenswrapper[4972]: E1121 11:17:41.130574 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="446e20bf-e455-4b55-9c86-c5c6cb95070e" containerName="dnsmasq-dns" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.130592 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="446e20bf-e455-4b55-9c86-c5c6cb95070e" containerName="dnsmasq-dns" Nov 21 11:17:41 crc kubenswrapper[4972]: E1121 11:17:41.130611 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8510db5e-9884-41b1-a7d1-575592454efd" containerName="nova-cell1-novncproxy-novncproxy" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.130620 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8510db5e-9884-41b1-a7d1-575592454efd" containerName="nova-cell1-novncproxy-novncproxy" Nov 21 11:17:41 crc kubenswrapper[4972]: E1121 11:17:41.130635 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="446e20bf-e455-4b55-9c86-c5c6cb95070e" containerName="init" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.130642 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="446e20bf-e455-4b55-9c86-c5c6cb95070e" containerName="init" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.130855 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="8510db5e-9884-41b1-a7d1-575592454efd" containerName="nova-cell1-novncproxy-novncproxy" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.130868 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="446e20bf-e455-4b55-9c86-c5c6cb95070e" containerName="dnsmasq-dns" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.131544 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.134599 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.152289 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.165084 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51317036-994c-4651-86d8-4e1a15036ffd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"51317036-994c-4651-86d8-4e1a15036ffd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.165301 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxvzg\" (UniqueName: \"kubernetes.io/projected/51317036-994c-4651-86d8-4e1a15036ffd-kube-api-access-vxvzg\") pod \"nova-cell1-novncproxy-0\" (UID: \"51317036-994c-4651-86d8-4e1a15036ffd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.165354 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51317036-994c-4651-86d8-4e1a15036ffd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"51317036-994c-4651-86d8-4e1a15036ffd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.266910 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51317036-994c-4651-86d8-4e1a15036ffd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"51317036-994c-4651-86d8-4e1a15036ffd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.267019 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51317036-994c-4651-86d8-4e1a15036ffd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"51317036-994c-4651-86d8-4e1a15036ffd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.267135 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxvzg\" (UniqueName: \"kubernetes.io/projected/51317036-994c-4651-86d8-4e1a15036ffd-kube-api-access-vxvzg\") pod \"nova-cell1-novncproxy-0\" (UID: \"51317036-994c-4651-86d8-4e1a15036ffd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.272678 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51317036-994c-4651-86d8-4e1a15036ffd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"51317036-994c-4651-86d8-4e1a15036ffd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.276381 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51317036-994c-4651-86d8-4e1a15036ffd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"51317036-994c-4651-86d8-4e1a15036ffd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.283373 
4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxvzg\" (UniqueName: \"kubernetes.io/projected/51317036-994c-4651-86d8-4e1a15036ffd-kube-api-access-vxvzg\") pod \"nova-cell1-novncproxy-0\" (UID: \"51317036-994c-4651-86d8-4e1a15036ffd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.459445 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.771438 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="446e20bf-e455-4b55-9c86-c5c6cb95070e" path="/var/lib/kubelet/pods/446e20bf-e455-4b55-9c86-c5c6cb95070e/volumes" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.772427 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8510db5e-9884-41b1-a7d1-575592454efd" path="/var/lib/kubelet/pods/8510db5e-9884-41b1-a7d1-575592454efd/volumes" Nov 21 11:17:41 crc kubenswrapper[4972]: I1121 11:17:41.922127 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 11:17:41 crc kubenswrapper[4972]: W1121 11:17:41.926569 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51317036_994c_4651_86d8_4e1a15036ffd.slice/crio-32a21545e27cc7a56b4b638c44f48086014d838c570d5f5db6bf0462466de73d WatchSource:0}: Error finding container 32a21545e27cc7a56b4b638c44f48086014d838c570d5f5db6bf0462466de73d: Status 404 returned error can't find the container with id 32a21545e27cc7a56b4b638c44f48086014d838c570d5f5db6bf0462466de73d Nov 21 11:17:42 crc kubenswrapper[4972]: I1121 11:17:42.077101 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"51317036-994c-4651-86d8-4e1a15036ffd","Type":"ContainerStarted","Data":"32a21545e27cc7a56b4b638c44f48086014d838c570d5f5db6bf0462466de73d"} Nov 21 11:17:42 crc kubenswrapper[4972]: I1121 11:17:42.994441 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="1730bac7-5d9e-4989-9995-8920234d3eef" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.70:8774/\": read tcp 10.217.0.2:45058->10.217.1.70:8774: read: connection reset by peer" Nov 21 11:17:42 crc kubenswrapper[4972]: I1121 11:17:42.995227 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="1730bac7-5d9e-4989-9995-8920234d3eef" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.70:8774/\": read tcp 10.217.0.2:45062->10.217.1.70:8774: read: connection reset by peer" Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.006730 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6969f156-2c8d-44d0-85c0-8b2d08c4c138" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.69:8775/\": read tcp 10.217.0.2:41880->10.217.1.69:8775: read: connection reset by peer" Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.010446 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6969f156-2c8d-44d0-85c0-8b2d08c4c138" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.69:8775/\": read tcp 10.217.0.2:41878->10.217.1.69:8775: read: connection reset by peer" Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.063519 
4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.063777 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="186387b6-541f-44f7-811a-2814418ff1cd" containerName="nova-cell1-conductor-conductor" containerID="cri-o://21792a508cb1b491f29b5ab312e2eae14070af009cc77027986e6a761ab82e33" gracePeriod=30 Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.090878 4972 generic.go:334] "Generic (PLEG): container finished" podID="1730bac7-5d9e-4989-9995-8920234d3eef" containerID="d75abd60f2559ddeeeb25fd6982cd02bef407cc5a36be3fe6b14dee79ae325d4" exitCode=0 Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.090958 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1730bac7-5d9e-4989-9995-8920234d3eef","Type":"ContainerDied","Data":"d75abd60f2559ddeeeb25fd6982cd02bef407cc5a36be3fe6b14dee79ae325d4"} Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.093661 4972 generic.go:334] "Generic (PLEG): container finished" podID="6969f156-2c8d-44d0-85c0-8b2d08c4c138" containerID="1442d766ed402858dc119cb8d0d0499013b8ae8e878e4366818033e5854a9df0" exitCode=0 Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.093744 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6969f156-2c8d-44d0-85c0-8b2d08c4c138","Type":"ContainerDied","Data":"1442d766ed402858dc119cb8d0d0499013b8ae8e878e4366818033e5854a9df0"} Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.096562 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"51317036-994c-4651-86d8-4e1a15036ffd","Type":"ContainerStarted","Data":"f89a3ecec9ba23d41d47a9022eb2a38216ec6e5a8855d5c7b5defaa12f73a258"} Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.122160 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.122132307 podStartE2EDuration="2.122132307s" podCreationTimestamp="2025-11-21 11:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:17:43.118287995 +0000 UTC m=+5808.227430513" watchObservedRunningTime="2025-11-21 11:17:43.122132307 +0000 UTC m=+5808.231274825" Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.542414 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.545668 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 11:17:43 crc kubenswrapper[4972]: E1121 11:17:43.548917 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88bbc5a650a0a241ee3d2d492f61e4d30e104b48bd343b1599559d337d7afe49" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 11:17:43 crc kubenswrapper[4972]: E1121 11:17:43.550857 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88bbc5a650a0a241ee3d2d492f61e4d30e104b48bd343b1599559d337d7afe49" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 11:17:43 crc kubenswrapper[4972]: E1121 11:17:43.552513 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88bbc5a650a0a241ee3d2d492f61e4d30e104b48bd343b1599559d337d7afe49" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 11:17:43 crc kubenswrapper[4972]: E1121 11:17:43.552549 4972 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="cbe61d30-0cab-4450-aea9-7f3bfa806221" containerName="nova-scheduler-scheduler" Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.618515 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtbfn\" (UniqueName: \"kubernetes.io/projected/1730bac7-5d9e-4989-9995-8920234d3eef-kube-api-access-xtbfn\") pod \"1730bac7-5d9e-4989-9995-8920234d3eef\" (UID: \"1730bac7-5d9e-4989-9995-8920234d3eef\") " Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.618624 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6969f156-2c8d-44d0-85c0-8b2d08c4c138-combined-ca-bundle\") pod \"6969f156-2c8d-44d0-85c0-8b2d08c4c138\" (UID: \"6969f156-2c8d-44d0-85c0-8b2d08c4c138\") " Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.618676 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6969f156-2c8d-44d0-85c0-8b2d08c4c138-config-data\") pod \"6969f156-2c8d-44d0-85c0-8b2d08c4c138\" (UID: \"6969f156-2c8d-44d0-85c0-8b2d08c4c138\") " Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.618738 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvrmp\" (UniqueName: \"kubernetes.io/projected/6969f156-2c8d-44d0-85c0-8b2d08c4c138-kube-api-access-fvrmp\") pod \"6969f156-2c8d-44d0-85c0-8b2d08c4c138\" (UID: \"6969f156-2c8d-44d0-85c0-8b2d08c4c138\") " Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.618862 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1730bac7-5d9e-4989-9995-8920234d3eef-config-data\") pod \"1730bac7-5d9e-4989-9995-8920234d3eef\" (UID: \"1730bac7-5d9e-4989-9995-8920234d3eef\") " Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.618915 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/1730bac7-5d9e-4989-9995-8920234d3eef-logs\") pod \"1730bac7-5d9e-4989-9995-8920234d3eef\" (UID: \"1730bac7-5d9e-4989-9995-8920234d3eef\") " Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.618956 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6969f156-2c8d-44d0-85c0-8b2d08c4c138-logs\") pod \"6969f156-2c8d-44d0-85c0-8b2d08c4c138\" (UID: \"6969f156-2c8d-44d0-85c0-8b2d08c4c138\") " Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.618999 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1730bac7-5d9e-4989-9995-8920234d3eef-combined-ca-bundle\") pod \"1730bac7-5d9e-4989-9995-8920234d3eef\" (UID: \"1730bac7-5d9e-4989-9995-8920234d3eef\") " Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.628252 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1730bac7-5d9e-4989-9995-8920234d3eef-logs" (OuterVolumeSpecName: "logs") pod "1730bac7-5d9e-4989-9995-8920234d3eef" (UID: "1730bac7-5d9e-4989-9995-8920234d3eef"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.629718 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6969f156-2c8d-44d0-85c0-8b2d08c4c138-logs" (OuterVolumeSpecName: "logs") pod "6969f156-2c8d-44d0-85c0-8b2d08c4c138" (UID: "6969f156-2c8d-44d0-85c0-8b2d08c4c138"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.635479 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6969f156-2c8d-44d0-85c0-8b2d08c4c138-kube-api-access-fvrmp" (OuterVolumeSpecName: "kube-api-access-fvrmp") pod "6969f156-2c8d-44d0-85c0-8b2d08c4c138" (UID: "6969f156-2c8d-44d0-85c0-8b2d08c4c138"). InnerVolumeSpecName "kube-api-access-fvrmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.648186 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1730bac7-5d9e-4989-9995-8920234d3eef-kube-api-access-xtbfn" (OuterVolumeSpecName: "kube-api-access-xtbfn") pod "1730bac7-5d9e-4989-9995-8920234d3eef" (UID: "1730bac7-5d9e-4989-9995-8920234d3eef"). InnerVolumeSpecName "kube-api-access-xtbfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.657186 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1730bac7-5d9e-4989-9995-8920234d3eef-config-data" (OuterVolumeSpecName: "config-data") pod "1730bac7-5d9e-4989-9995-8920234d3eef" (UID: "1730bac7-5d9e-4989-9995-8920234d3eef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.681549 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1730bac7-5d9e-4989-9995-8920234d3eef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1730bac7-5d9e-4989-9995-8920234d3eef" (UID: "1730bac7-5d9e-4989-9995-8920234d3eef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.683928 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6969f156-2c8d-44d0-85c0-8b2d08c4c138-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6969f156-2c8d-44d0-85c0-8b2d08c4c138" (UID: "6969f156-2c8d-44d0-85c0-8b2d08c4c138"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.688344 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6969f156-2c8d-44d0-85c0-8b2d08c4c138-config-data" (OuterVolumeSpecName: "config-data") pod "6969f156-2c8d-44d0-85c0-8b2d08c4c138" (UID: "6969f156-2c8d-44d0-85c0-8b2d08c4c138"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.723697 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6969f156-2c8d-44d0-85c0-8b2d08c4c138-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.723730 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6969f156-2c8d-44d0-85c0-8b2d08c4c138-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.723809 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvrmp\" (UniqueName: \"kubernetes.io/projected/6969f156-2c8d-44d0-85c0-8b2d08c4c138-kube-api-access-fvrmp\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.723819 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1730bac7-5d9e-4989-9995-8920234d3eef-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.723843 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1730bac7-5d9e-4989-9995-8920234d3eef-logs\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.723853 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6969f156-2c8d-44d0-85c0-8b2d08c4c138-logs\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.723863 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1730bac7-5d9e-4989-9995-8920234d3eef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:43 crc kubenswrapper[4972]: I1121 11:17:43.723871 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtbfn\" (UniqueName: \"kubernetes.io/projected/1730bac7-5d9e-4989-9995-8920234d3eef-kube-api-access-xtbfn\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:44 crc kubenswrapper[4972]: E1121 11:17:44.025356 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="195c482ddda51e036d2e9a7020c2dce8e00305fac3a9367eaf67b86bb7769db2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 21 11:17:44 crc kubenswrapper[4972]: E1121 11:17:44.027361 4972 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="195c482ddda51e036d2e9a7020c2dce8e00305fac3a9367eaf67b86bb7769db2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 21 11:17:44 crc kubenswrapper[4972]: E1121 11:17:44.029215 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="195c482ddda51e036d2e9a7020c2dce8e00305fac3a9367eaf67b86bb7769db2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 21 11:17:44 crc kubenswrapper[4972]: E1121 11:17:44.029268 4972 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="e058b60f-7e51-450d-8330-1b96ad510032" containerName="nova-cell0-conductor-conductor" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.108267 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.108295 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6969f156-2c8d-44d0-85c0-8b2d08c4c138","Type":"ContainerDied","Data":"410ca064cead4906db092bdfafc2f9eee61e586941b6933636456c1056185ea2"} Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.108354 4972 scope.go:117] "RemoveContainer" containerID="1442d766ed402858dc119cb8d0d0499013b8ae8e878e4366818033e5854a9df0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.117605 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1730bac7-5d9e-4989-9995-8920234d3eef","Type":"ContainerDied","Data":"7ba4f3130a38db239cb647e853579e0a0d32b4b5b9b532bdc14d3835abca8e36"} Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.117702 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bdd64c87f-mqbfk" podUID="446e20bf-e455-4b55-9c86-c5c6cb95070e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.71:5353: i/o timeout" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.117792 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.154720 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.164146 4972 scope.go:117] "RemoveContainer" containerID="838af30eda8e65630edd722cb208ceff9d3bba0e3fe1507637aca192476a25b5" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.170155 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.187231 4972 scope.go:117] "RemoveContainer" containerID="d75abd60f2559ddeeeb25fd6982cd02bef407cc5a36be3fe6b14dee79ae325d4" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.190211 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.201891 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 21 11:17:44 crc kubenswrapper[4972]: E1121 11:17:44.202460 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6969f156-2c8d-44d0-85c0-8b2d08c4c138" containerName="nova-metadata-log" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.202487 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="6969f156-2c8d-44d0-85c0-8b2d08c4c138" containerName="nova-metadata-log" Nov 21 11:17:44 crc kubenswrapper[4972]: E1121 11:17:44.202515 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1730bac7-5d9e-4989-9995-8920234d3eef" containerName="nova-api-log" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.202524 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1730bac7-5d9e-4989-9995-8920234d3eef" containerName="nova-api-log" Nov 21 11:17:44 crc kubenswrapper[4972]: E1121 11:17:44.202538 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6969f156-2c8d-44d0-85c0-8b2d08c4c138" containerName="nova-metadata-metadata" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.202547 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="6969f156-2c8d-44d0-85c0-8b2d08c4c138" containerName="nova-metadata-metadata" Nov 21 11:17:44 crc kubenswrapper[4972]: E1121 11:17:44.202572 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1730bac7-5d9e-4989-9995-8920234d3eef" containerName="nova-api-api" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.202582 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1730bac7-5d9e-4989-9995-8920234d3eef" containerName="nova-api-api" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.202804 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="6969f156-2c8d-44d0-85c0-8b2d08c4c138" containerName="nova-metadata-metadata" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.202857 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="1730bac7-5d9e-4989-9995-8920234d3eef" containerName="nova-api-api" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.202876 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="6969f156-2c8d-44d0-85c0-8b2d08c4c138" containerName="nova-metadata-log" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.202893 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="1730bac7-5d9e-4989-9995-8920234d3eef" containerName="nova-api-log" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.204446 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.210629 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.211413 4972 scope.go:117] "RemoveContainer" containerID="72515e346b510bcb8130cfd85c2ed3efffd9921b3d1b6441d399ee9d6352c28a" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.220372 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.221113 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.222331 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.223692 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.225436 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.232427 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-788kn\" (UniqueName: \"kubernetes.io/projected/76a1d86d-887f-461d-9415-908540ed2f33-kube-api-access-788kn\") pod \"nova-api-0\" (UID: \"76a1d86d-887f-461d-9415-908540ed2f33\") " pod="openstack/nova-api-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.232533 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a1d86d-887f-461d-9415-908540ed2f33-config-data\") pod \"nova-api-0\" (UID: \"76a1d86d-887f-461d-9415-908540ed2f33\") " pod="openstack/nova-api-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.232561 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76a1d86d-887f-461d-9415-908540ed2f33-logs\") pod \"nova-api-0\" (UID: \"76a1d86d-887f-461d-9415-908540ed2f33\") " pod="openstack/nova-api-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.232583 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a1d86d-887f-461d-9415-908540ed2f33-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"76a1d86d-887f-461d-9415-908540ed2f33\") " pod="openstack/nova-api-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.267224 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.338952 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8164b141-9e42-4a0c-b161-ec80323b043d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8164b141-9e42-4a0c-b161-ec80323b043d\") " pod="openstack/nova-metadata-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.339326 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8164b141-9e42-4a0c-b161-ec80323b043d-config-data\") pod \"nova-metadata-0\" (UID: \"8164b141-9e42-4a0c-b161-ec80323b043d\") " 
pod="openstack/nova-metadata-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.339386 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-788kn\" (UniqueName: \"kubernetes.io/projected/76a1d86d-887f-461d-9415-908540ed2f33-kube-api-access-788kn\") pod \"nova-api-0\" (UID: \"76a1d86d-887f-461d-9415-908540ed2f33\") " pod="openstack/nova-api-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.339479 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a1d86d-887f-461d-9415-908540ed2f33-config-data\") pod \"nova-api-0\" (UID: \"76a1d86d-887f-461d-9415-908540ed2f33\") " pod="openstack/nova-api-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.339529 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76a1d86d-887f-461d-9415-908540ed2f33-logs\") pod \"nova-api-0\" (UID: \"76a1d86d-887f-461d-9415-908540ed2f33\") " pod="openstack/nova-api-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.339561 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8164b141-9e42-4a0c-b161-ec80323b043d-logs\") pod \"nova-metadata-0\" (UID: \"8164b141-9e42-4a0c-b161-ec80323b043d\") " pod="openstack/nova-metadata-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.339589 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a1d86d-887f-461d-9415-908540ed2f33-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"76a1d86d-887f-461d-9415-908540ed2f33\") " pod="openstack/nova-api-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.339628 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78bkw\" (UniqueName: \"kubernetes.io/projected/8164b141-9e42-4a0c-b161-ec80323b043d-kube-api-access-78bkw\") pod \"nova-metadata-0\" (UID: \"8164b141-9e42-4a0c-b161-ec80323b043d\") " pod="openstack/nova-metadata-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.342254 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76a1d86d-887f-461d-9415-908540ed2f33-logs\") pod \"nova-api-0\" (UID: \"76a1d86d-887f-461d-9415-908540ed2f33\") " pod="openstack/nova-api-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.346542 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a1d86d-887f-461d-9415-908540ed2f33-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"76a1d86d-887f-461d-9415-908540ed2f33\") " pod="openstack/nova-api-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.361618 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a1d86d-887f-461d-9415-908540ed2f33-config-data\") pod \"nova-api-0\" (UID: \"76a1d86d-887f-461d-9415-908540ed2f33\") " pod="openstack/nova-api-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.372044 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-788kn\" (UniqueName: \"kubernetes.io/projected/76a1d86d-887f-461d-9415-908540ed2f33-kube-api-access-788kn\") pod \"nova-api-0\" (UID: \"76a1d86d-887f-461d-9415-908540ed2f33\") " 
pod="openstack/nova-api-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.441457 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78bkw\" (UniqueName: \"kubernetes.io/projected/8164b141-9e42-4a0c-b161-ec80323b043d-kube-api-access-78bkw\") pod \"nova-metadata-0\" (UID: \"8164b141-9e42-4a0c-b161-ec80323b043d\") " pod="openstack/nova-metadata-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.441580 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8164b141-9e42-4a0c-b161-ec80323b043d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8164b141-9e42-4a0c-b161-ec80323b043d\") " pod="openstack/nova-metadata-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.441599 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8164b141-9e42-4a0c-b161-ec80323b043d-config-data\") pod \"nova-metadata-0\" (UID: \"8164b141-9e42-4a0c-b161-ec80323b043d\") " pod="openstack/nova-metadata-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.441702 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8164b141-9e42-4a0c-b161-ec80323b043d-logs\") pod \"nova-metadata-0\" (UID: \"8164b141-9e42-4a0c-b161-ec80323b043d\") " pod="openstack/nova-metadata-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.442167 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8164b141-9e42-4a0c-b161-ec80323b043d-logs\") pod \"nova-metadata-0\" (UID: \"8164b141-9e42-4a0c-b161-ec80323b043d\") " pod="openstack/nova-metadata-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.444966 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8164b141-9e42-4a0c-b161-ec80323b043d-config-data\") pod \"nova-metadata-0\" (UID: \"8164b141-9e42-4a0c-b161-ec80323b043d\") " pod="openstack/nova-metadata-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.446356 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8164b141-9e42-4a0c-b161-ec80323b043d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8164b141-9e42-4a0c-b161-ec80323b043d\") " pod="openstack/nova-metadata-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.460524 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78bkw\" (UniqueName: \"kubernetes.io/projected/8164b141-9e42-4a0c-b161-ec80323b043d-kube-api-access-78bkw\") pod \"nova-metadata-0\" (UID: \"8164b141-9e42-4a0c-b161-ec80323b043d\") " pod="openstack/nova-metadata-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.532299 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.554600 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 11:17:44 crc kubenswrapper[4972]: I1121 11:17:44.985373 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 11:17:44 crc kubenswrapper[4972]: W1121 11:17:44.985635 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8164b141_9e42_4a0c_b161_ec80323b043d.slice/crio-bad735de138a68cdbc313cb6971a292b79dc3bfeecf5683fedbe8caa14e8181e WatchSource:0}: Error finding container bad735de138a68cdbc313cb6971a292b79dc3bfeecf5683fedbe8caa14e8181e: Status 404 returned error can't find the container with id bad735de138a68cdbc313cb6971a292b79dc3bfeecf5683fedbe8caa14e8181e Nov 21 11:17:45 crc kubenswrapper[4972]: W1121 11:17:45.071780 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76a1d86d_887f_461d_9415_908540ed2f33.slice/crio-0015ed2c9719393d5275d8ca5d21b44f5c2aecbad9bd2295303237e55f9faeb6 WatchSource:0}: Error finding container 0015ed2c9719393d5275d8ca5d21b44f5c2aecbad9bd2295303237e55f9faeb6: Status 404 returned error can't find the container with id 0015ed2c9719393d5275d8ca5d21b44f5c2aecbad9bd2295303237e55f9faeb6 Nov 21 11:17:45 crc kubenswrapper[4972]: I1121 11:17:45.073734 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 11:17:45 crc kubenswrapper[4972]: I1121 11:17:45.137197 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76a1d86d-887f-461d-9415-908540ed2f33","Type":"ContainerStarted","Data":"0015ed2c9719393d5275d8ca5d21b44f5c2aecbad9bd2295303237e55f9faeb6"} Nov 21 11:17:45 crc kubenswrapper[4972]: I1121 11:17:45.138364 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8164b141-9e42-4a0c-b161-ec80323b043d","Type":"ContainerStarted","Data":"bad735de138a68cdbc313cb6971a292b79dc3bfeecf5683fedbe8caa14e8181e"} Nov 21 11:17:45 crc kubenswrapper[4972]: I1121 11:17:45.775342 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1730bac7-5d9e-4989-9995-8920234d3eef" path="/var/lib/kubelet/pods/1730bac7-5d9e-4989-9995-8920234d3eef/volumes" Nov 21 11:17:45 crc kubenswrapper[4972]: I1121 11:17:45.776459 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6969f156-2c8d-44d0-85c0-8b2d08c4c138" path="/var/lib/kubelet/pods/6969f156-2c8d-44d0-85c0-8b2d08c4c138/volumes" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.088191 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.149528 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76a1d86d-887f-461d-9415-908540ed2f33","Type":"ContainerStarted","Data":"5026fd1e01357d7e018deab68e34ea1c38d3ba63c5c7241042861939f452af5e"} Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.149589 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76a1d86d-887f-461d-9415-908540ed2f33","Type":"ContainerStarted","Data":"8fcaf6fdcf7a06e2eb824a2e922877222b49ba25c68fcf77c0bc6d1620d078f0"} Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.153775 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8164b141-9e42-4a0c-b161-ec80323b043d","Type":"ContainerStarted","Data":"aeb921465259a6901b1e4e8704fd6d97e5ba8bae2eaf7347cdd19708e3e7fa59"} Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.153847 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8164b141-9e42-4a0c-b161-ec80323b043d","Type":"ContainerStarted","Data":"945bbb7cb188bde4d0c73027afd5b358190f93fc0121f0d6a899a8fa8f08064a"} Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.156083 4972 generic.go:334] "Generic (PLEG): container finished" podID="186387b6-541f-44f7-811a-2814418ff1cd" containerID="21792a508cb1b491f29b5ab312e2eae14070af009cc77027986e6a761ab82e33" exitCode=0 Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.156145 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"186387b6-541f-44f7-811a-2814418ff1cd","Type":"ContainerDied","Data":"21792a508cb1b491f29b5ab312e2eae14070af009cc77027986e6a761ab82e33"} Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.156181 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"186387b6-541f-44f7-811a-2814418ff1cd","Type":"ContainerDied","Data":"0ef7a5394aae16670c272519e8e32aa1c0eb01aae5fd3d5c84a41a3afaed6674"} Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.156209 4972 scope.go:117] "RemoveContainer" containerID="21792a508cb1b491f29b5ab312e2eae14070af009cc77027986e6a761ab82e33" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.156353 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.170472 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/186387b6-541f-44f7-811a-2814418ff1cd-combined-ca-bundle\") pod \"186387b6-541f-44f7-811a-2814418ff1cd\" (UID: \"186387b6-541f-44f7-811a-2814418ff1cd\") " Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.170639 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/186387b6-541f-44f7-811a-2814418ff1cd-config-data\") pod \"186387b6-541f-44f7-811a-2814418ff1cd\" (UID: \"186387b6-541f-44f7-811a-2814418ff1cd\") " Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.170721 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xk89\" (UniqueName: \"kubernetes.io/projected/186387b6-541f-44f7-811a-2814418ff1cd-kube-api-access-8xk89\") pod \"186387b6-541f-44f7-811a-2814418ff1cd\" (UID: \"186387b6-541f-44f7-811a-2814418ff1cd\") " Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.177546 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/186387b6-541f-44f7-811a-2814418ff1cd-kube-api-access-8xk89" (OuterVolumeSpecName: "kube-api-access-8xk89") pod "186387b6-541f-44f7-811a-2814418ff1cd" (UID: "186387b6-541f-44f7-811a-2814418ff1cd"). InnerVolumeSpecName "kube-api-access-8xk89". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.191505 4972 scope.go:117] "RemoveContainer" containerID="21792a508cb1b491f29b5ab312e2eae14070af009cc77027986e6a761ab82e33" Nov 21 11:17:46 crc kubenswrapper[4972]: E1121 11:17:46.192259 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21792a508cb1b491f29b5ab312e2eae14070af009cc77027986e6a761ab82e33\": container with ID starting with 21792a508cb1b491f29b5ab312e2eae14070af009cc77027986e6a761ab82e33 not found: ID does not exist" containerID="21792a508cb1b491f29b5ab312e2eae14070af009cc77027986e6a761ab82e33" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.192423 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21792a508cb1b491f29b5ab312e2eae14070af009cc77027986e6a761ab82e33"} err="failed to get container status \"21792a508cb1b491f29b5ab312e2eae14070af009cc77027986e6a761ab82e33\": rpc error: code = NotFound desc = could not find container \"21792a508cb1b491f29b5ab312e2eae14070af009cc77027986e6a761ab82e33\": container with ID starting with 21792a508cb1b491f29b5ab312e2eae14070af009cc77027986e6a761ab82e33 not found: ID does not exist" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.205494 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.205472766 podStartE2EDuration="2.205472766s" podCreationTimestamp="2025-11-21 11:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:17:46.199277612 +0000 UTC m=+5811.308420110" watchObservedRunningTime="2025-11-21 11:17:46.205472766 +0000 UTC m=+5811.314615304" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.209284 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-api-0" podStartSLOduration=2.209269007 podStartE2EDuration="2.209269007s" podCreationTimestamp="2025-11-21 11:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:17:46.176658953 +0000 UTC m=+5811.285801521" watchObservedRunningTime="2025-11-21 11:17:46.209269007 +0000 UTC m=+5811.318411545" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.211677 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/186387b6-541f-44f7-811a-2814418ff1cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "186387b6-541f-44f7-811a-2814418ff1cd" (UID: "186387b6-541f-44f7-811a-2814418ff1cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.212897 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/186387b6-541f-44f7-811a-2814418ff1cd-config-data" (OuterVolumeSpecName: "config-data") pod "186387b6-541f-44f7-811a-2814418ff1cd" (UID: "186387b6-541f-44f7-811a-2814418ff1cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.273267 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/186387b6-541f-44f7-811a-2814418ff1cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.273314 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/186387b6-541f-44f7-811a-2814418ff1cd-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.273467 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xk89\" (UniqueName: \"kubernetes.io/projected/186387b6-541f-44f7-811a-2814418ff1cd-kube-api-access-8xk89\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.460267 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.515741 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.540526 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.566365 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 11:17:46 crc kubenswrapper[4972]: E1121 11:17:46.567269 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="186387b6-541f-44f7-811a-2814418ff1cd" containerName="nova-cell1-conductor-conductor" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.567416 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="186387b6-541f-44f7-811a-2814418ff1cd" containerName="nova-cell1-conductor-conductor" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.567887 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="186387b6-541f-44f7-811a-2814418ff1cd" containerName="nova-cell1-conductor-conductor" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.569029 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.572270 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.575644 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.680590 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da7f1a1-6ce5-468a-a84f-e12242d5539e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3da7f1a1-6ce5-468a-a84f-e12242d5539e\") " pod="openstack/nova-cell1-conductor-0" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.681051 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da7f1a1-6ce5-468a-a84f-e12242d5539e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3da7f1a1-6ce5-468a-a84f-e12242d5539e\") " pod="openstack/nova-cell1-conductor-0" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.681078 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkfl5\" (UniqueName: \"kubernetes.io/projected/3da7f1a1-6ce5-468a-a84f-e12242d5539e-kube-api-access-rkfl5\") pod \"nova-cell1-conductor-0\" (UID: \"3da7f1a1-6ce5-468a-a84f-e12242d5539e\") " pod="openstack/nova-cell1-conductor-0" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.782399 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da7f1a1-6ce5-468a-a84f-e12242d5539e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3da7f1a1-6ce5-468a-a84f-e12242d5539e\") " pod="openstack/nova-cell1-conductor-0" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.782455 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkfl5\" (UniqueName: \"kubernetes.io/projected/3da7f1a1-6ce5-468a-a84f-e12242d5539e-kube-api-access-rkfl5\") pod \"nova-cell1-conductor-0\" (UID: \"3da7f1a1-6ce5-468a-a84f-e12242d5539e\") " pod="openstack/nova-cell1-conductor-0" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.782568 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da7f1a1-6ce5-468a-a84f-e12242d5539e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3da7f1a1-6ce5-468a-a84f-e12242d5539e\") " pod="openstack/nova-cell1-conductor-0" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.789760 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da7f1a1-6ce5-468a-a84f-e12242d5539e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3da7f1a1-6ce5-468a-a84f-e12242d5539e\") " pod="openstack/nova-cell1-conductor-0" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.789775 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da7f1a1-6ce5-468a-a84f-e12242d5539e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3da7f1a1-6ce5-468a-a84f-e12242d5539e\") " pod="openstack/nova-cell1-conductor-0" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.804607 4972 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkfl5\" (UniqueName: \"kubernetes.io/projected/3da7f1a1-6ce5-468a-a84f-e12242d5539e-kube-api-access-rkfl5\") pod \"nova-cell1-conductor-0\" (UID: \"3da7f1a1-6ce5-468a-a84f-e12242d5539e\") " pod="openstack/nova-cell1-conductor-0" Nov 21 11:17:46 crc kubenswrapper[4972]: I1121 11:17:46.901097 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 21 11:17:47 crc kubenswrapper[4972]: I1121 11:17:47.392270 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 11:17:47 crc kubenswrapper[4972]: I1121 11:17:47.751919 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 11:17:47 crc kubenswrapper[4972]: I1121 11:17:47.775791 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="186387b6-541f-44f7-811a-2814418ff1cd" path="/var/lib/kubelet/pods/186387b6-541f-44f7-811a-2814418ff1cd/volumes" Nov 21 11:17:47 crc kubenswrapper[4972]: I1121 11:17:47.809435 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe61d30-0cab-4450-aea9-7f3bfa806221-combined-ca-bundle\") pod \"cbe61d30-0cab-4450-aea9-7f3bfa806221\" (UID: \"cbe61d30-0cab-4450-aea9-7f3bfa806221\") " Nov 21 11:17:47 crc kubenswrapper[4972]: I1121 11:17:47.809509 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr286\" (UniqueName: \"kubernetes.io/projected/cbe61d30-0cab-4450-aea9-7f3bfa806221-kube-api-access-vr286\") pod \"cbe61d30-0cab-4450-aea9-7f3bfa806221\" (UID: \"cbe61d30-0cab-4450-aea9-7f3bfa806221\") " Nov 21 11:17:47 crc kubenswrapper[4972]: I1121 11:17:47.809673 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe61d30-0cab-4450-aea9-7f3bfa806221-config-data\") pod \"cbe61d30-0cab-4450-aea9-7f3bfa806221\" (UID: \"cbe61d30-0cab-4450-aea9-7f3bfa806221\") " Nov 21 11:17:47 crc kubenswrapper[4972]: I1121 11:17:47.824028 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbe61d30-0cab-4450-aea9-7f3bfa806221-kube-api-access-vr286" (OuterVolumeSpecName: "kube-api-access-vr286") pod "cbe61d30-0cab-4450-aea9-7f3bfa806221" (UID: "cbe61d30-0cab-4450-aea9-7f3bfa806221"). InnerVolumeSpecName "kube-api-access-vr286". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:17:47 crc kubenswrapper[4972]: I1121 11:17:47.888205 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 11:17:47 crc kubenswrapper[4972]: I1121 11:17:47.901121 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe61d30-0cab-4450-aea9-7f3bfa806221-config-data" (OuterVolumeSpecName: "config-data") pod "cbe61d30-0cab-4450-aea9-7f3bfa806221" (UID: "cbe61d30-0cab-4450-aea9-7f3bfa806221"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:17:47 crc kubenswrapper[4972]: I1121 11:17:47.910084 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe61d30-0cab-4450-aea9-7f3bfa806221-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbe61d30-0cab-4450-aea9-7f3bfa806221" (UID: "cbe61d30-0cab-4450-aea9-7f3bfa806221"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:17:47 crc kubenswrapper[4972]: I1121 11:17:47.920818 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6jbc\" (UniqueName: \"kubernetes.io/projected/e058b60f-7e51-450d-8330-1b96ad510032-kube-api-access-z6jbc\") pod \"e058b60f-7e51-450d-8330-1b96ad510032\" (UID: \"e058b60f-7e51-450d-8330-1b96ad510032\") " Nov 21 11:17:47 crc kubenswrapper[4972]: I1121 11:17:47.921009 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e058b60f-7e51-450d-8330-1b96ad510032-config-data\") pod \"e058b60f-7e51-450d-8330-1b96ad510032\" (UID: \"e058b60f-7e51-450d-8330-1b96ad510032\") " Nov 21 11:17:47 crc kubenswrapper[4972]: I1121 11:17:47.921039 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e058b60f-7e51-450d-8330-1b96ad510032-combined-ca-bundle\") pod \"e058b60f-7e51-450d-8330-1b96ad510032\" (UID: \"e058b60f-7e51-450d-8330-1b96ad510032\") " Nov 21 11:17:47 crc kubenswrapper[4972]: I1121 11:17:47.921653 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe61d30-0cab-4450-aea9-7f3bfa806221-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:47 crc kubenswrapper[4972]: I1121 11:17:47.921680 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr286\" (UniqueName: \"kubernetes.io/projected/cbe61d30-0cab-4450-aea9-7f3bfa806221-kube-api-access-vr286\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:47 crc kubenswrapper[4972]: I1121 11:17:47.921697 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe61d30-0cab-4450-aea9-7f3bfa806221-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:47 crc kubenswrapper[4972]: I1121 11:17:47.927048 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e058b60f-7e51-450d-8330-1b96ad510032-kube-api-access-z6jbc" (OuterVolumeSpecName: "kube-api-access-z6jbc") pod "e058b60f-7e51-450d-8330-1b96ad510032" (UID: "e058b60f-7e51-450d-8330-1b96ad510032"). InnerVolumeSpecName "kube-api-access-z6jbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:17:47 crc kubenswrapper[4972]: I1121 11:17:47.949392 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e058b60f-7e51-450d-8330-1b96ad510032-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e058b60f-7e51-450d-8330-1b96ad510032" (UID: "e058b60f-7e51-450d-8330-1b96ad510032"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:17:47 crc kubenswrapper[4972]: I1121 11:17:47.952789 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e058b60f-7e51-450d-8330-1b96ad510032-config-data" (OuterVolumeSpecName: "config-data") pod "e058b60f-7e51-450d-8330-1b96ad510032" (UID: "e058b60f-7e51-450d-8330-1b96ad510032"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.022895 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e058b60f-7e51-450d-8330-1b96ad510032-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.022924 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e058b60f-7e51-450d-8330-1b96ad510032-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.022935 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6jbc\" (UniqueName: \"kubernetes.io/projected/e058b60f-7e51-450d-8330-1b96ad510032-kube-api-access-z6jbc\") on node \"crc\" DevicePath \"\"" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.176817 4972 generic.go:334] "Generic (PLEG): container finished" podID="e058b60f-7e51-450d-8330-1b96ad510032" containerID="195c482ddda51e036d2e9a7020c2dce8e00305fac3a9367eaf67b86bb7769db2" exitCode=0 Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.176956 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.181039 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e058b60f-7e51-450d-8330-1b96ad510032","Type":"ContainerDied","Data":"195c482ddda51e036d2e9a7020c2dce8e00305fac3a9367eaf67b86bb7769db2"} Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.181107 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e058b60f-7e51-450d-8330-1b96ad510032","Type":"ContainerDied","Data":"362ed96badcee102adc5f769133407067b3c6b1422128c642b2d60565dca40ab"} Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.181124 4972 scope.go:117] "RemoveContainer" containerID="195c482ddda51e036d2e9a7020c2dce8e00305fac3a9367eaf67b86bb7769db2" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.183097 4972 generic.go:334] "Generic (PLEG): container finished" podID="cbe61d30-0cab-4450-aea9-7f3bfa806221" containerID="88bbc5a650a0a241ee3d2d492f61e4d30e104b48bd343b1599559d337d7afe49" exitCode=0 Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.183193 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cbe61d30-0cab-4450-aea9-7f3bfa806221","Type":"ContainerDied","Data":"88bbc5a650a0a241ee3d2d492f61e4d30e104b48bd343b1599559d337d7afe49"} Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.183227 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cbe61d30-0cab-4450-aea9-7f3bfa806221","Type":"ContainerDied","Data":"91bff2a9340154aa6dc2224feac843498d2f58e15ec9c31908d92063b5de4ab3"} Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.183224 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.184631 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3da7f1a1-6ce5-468a-a84f-e12242d5539e","Type":"ContainerStarted","Data":"736a65abba960b9bad9182628928af172f1d216c482a0987666c682cd8a7bc1d"} Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.184659 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3da7f1a1-6ce5-468a-a84f-e12242d5539e","Type":"ContainerStarted","Data":"77791e6207ab88e2f41d6e1c9af7686f5da0eaee2b828fdcd70e21eaac083487"} Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.185498 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.206714 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.206692879 podStartE2EDuration="2.206692879s" podCreationTimestamp="2025-11-21 11:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:17:48.206567886 +0000 UTC m=+5813.315710424" watchObservedRunningTime="2025-11-21 11:17:48.206692879 +0000 UTC m=+5813.315835377" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.213321 4972 scope.go:117] "RemoveContainer" containerID="195c482ddda51e036d2e9a7020c2dce8e00305fac3a9367eaf67b86bb7769db2" Nov 21 11:17:48 crc kubenswrapper[4972]: E1121 11:17:48.213998 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"195c482ddda51e036d2e9a7020c2dce8e00305fac3a9367eaf67b86bb7769db2\": container with ID starting with 195c482ddda51e036d2e9a7020c2dce8e00305fac3a9367eaf67b86bb7769db2 not found: ID does not exist" containerID="195c482ddda51e036d2e9a7020c2dce8e00305fac3a9367eaf67b86bb7769db2" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.214036 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"195c482ddda51e036d2e9a7020c2dce8e00305fac3a9367eaf67b86bb7769db2"} err="failed to get container status \"195c482ddda51e036d2e9a7020c2dce8e00305fac3a9367eaf67b86bb7769db2\": rpc error: code = NotFound desc = could not find container \"195c482ddda51e036d2e9a7020c2dce8e00305fac3a9367eaf67b86bb7769db2\": container with ID starting with 195c482ddda51e036d2e9a7020c2dce8e00305fac3a9367eaf67b86bb7769db2 not found: ID does not exist" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.214059 4972 scope.go:117] "RemoveContainer" containerID="88bbc5a650a0a241ee3d2d492f61e4d30e104b48bd343b1599559d337d7afe49" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.234516 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.264276 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.267411 4972 scope.go:117] "RemoveContainer" containerID="88bbc5a650a0a241ee3d2d492f61e4d30e104b48bd343b1599559d337d7afe49" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.277379 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 11:17:48 crc kubenswrapper[4972]: E1121 11:17:48.277619 4972 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88bbc5a650a0a241ee3d2d492f61e4d30e104b48bd343b1599559d337d7afe49\": container with ID starting with 88bbc5a650a0a241ee3d2d492f61e4d30e104b48bd343b1599559d337d7afe49 not found: ID does not exist" containerID="88bbc5a650a0a241ee3d2d492f61e4d30e104b48bd343b1599559d337d7afe49" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.277658 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88bbc5a650a0a241ee3d2d492f61e4d30e104b48bd343b1599559d337d7afe49"} err="failed to get container status \"88bbc5a650a0a241ee3d2d492f61e4d30e104b48bd343b1599559d337d7afe49\": rpc error: code = NotFound desc = could not find container \"88bbc5a650a0a241ee3d2d492f61e4d30e104b48bd343b1599559d337d7afe49\": container with ID starting with 88bbc5a650a0a241ee3d2d492f61e4d30e104b48bd343b1599559d337d7afe49 not found: ID does not exist" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.297061 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.310398 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 11:17:48 crc kubenswrapper[4972]: E1121 11:17:48.310786 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e058b60f-7e51-450d-8330-1b96ad510032" containerName="nova-cell0-conductor-conductor" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.310803 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="e058b60f-7e51-450d-8330-1b96ad510032" containerName="nova-cell0-conductor-conductor" Nov 21 11:17:48 crc kubenswrapper[4972]: E1121 11:17:48.310817 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe61d30-0cab-4450-aea9-7f3bfa806221" containerName="nova-scheduler-scheduler" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.310824 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe61d30-0cab-4450-aea9-7f3bfa806221" containerName="nova-scheduler-scheduler" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.311045 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="e058b60f-7e51-450d-8330-1b96ad510032" containerName="nova-cell0-conductor-conductor" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.311060 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbe61d30-0cab-4450-aea9-7f3bfa806221" containerName="nova-scheduler-scheduler" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.311715 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.316245 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.326281 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.329788 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0e5f3d-e84e-4866-81ca-119283c296d7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9b0e5f3d-e84e-4866-81ca-119283c296d7\") " pod="openstack/nova-cell0-conductor-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.329891 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvk9d\" (UniqueName: \"kubernetes.io/projected/9b0e5f3d-e84e-4866-81ca-119283c296d7-kube-api-access-gvk9d\") pod \"nova-cell0-conductor-0\" (UID: \"9b0e5f3d-e84e-4866-81ca-119283c296d7\") " pod="openstack/nova-cell0-conductor-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.330006 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b0e5f3d-e84e-4866-81ca-119283c296d7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9b0e5f3d-e84e-4866-81ca-119283c296d7\") " pod="openstack/nova-cell0-conductor-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.342244 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.343435 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.345403 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.348635 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.432165 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvk9d\" (UniqueName: \"kubernetes.io/projected/9b0e5f3d-e84e-4866-81ca-119283c296d7-kube-api-access-gvk9d\") pod \"nova-cell0-conductor-0\" (UID: \"9b0e5f3d-e84e-4866-81ca-119283c296d7\") " pod="openstack/nova-cell0-conductor-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.432275 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzn26\" (UniqueName: \"kubernetes.io/projected/8319eeb4-9a52-4573-b264-22f703b195a8-kube-api-access-qzn26\") pod \"nova-scheduler-0\" (UID: \"8319eeb4-9a52-4573-b264-22f703b195a8\") " pod="openstack/nova-scheduler-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.432316 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8319eeb4-9a52-4573-b264-22f703b195a8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8319eeb4-9a52-4573-b264-22f703b195a8\") " pod="openstack/nova-scheduler-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.432366 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b0e5f3d-e84e-4866-81ca-119283c296d7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9b0e5f3d-e84e-4866-81ca-119283c296d7\") " pod="openstack/nova-cell0-conductor-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.432413 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8319eeb4-9a52-4573-b264-22f703b195a8-config-data\") pod \"nova-scheduler-0\" (UID: \"8319eeb4-9a52-4573-b264-22f703b195a8\") " pod="openstack/nova-scheduler-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.432597 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0e5f3d-e84e-4866-81ca-119283c296d7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9b0e5f3d-e84e-4866-81ca-119283c296d7\") " pod="openstack/nova-cell0-conductor-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.438096 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0e5f3d-e84e-4866-81ca-119283c296d7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9b0e5f3d-e84e-4866-81ca-119283c296d7\") " pod="openstack/nova-cell0-conductor-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.446152 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b0e5f3d-e84e-4866-81ca-119283c296d7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9b0e5f3d-e84e-4866-81ca-119283c296d7\") " pod="openstack/nova-cell0-conductor-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.459425 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gvk9d\" (UniqueName: \"kubernetes.io/projected/9b0e5f3d-e84e-4866-81ca-119283c296d7-kube-api-access-gvk9d\") pod \"nova-cell0-conductor-0\" (UID: \"9b0e5f3d-e84e-4866-81ca-119283c296d7\") " pod="openstack/nova-cell0-conductor-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.534802 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzn26\" (UniqueName: \"kubernetes.io/projected/8319eeb4-9a52-4573-b264-22f703b195a8-kube-api-access-qzn26\") pod \"nova-scheduler-0\" (UID: \"8319eeb4-9a52-4573-b264-22f703b195a8\") " pod="openstack/nova-scheduler-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.534889 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8319eeb4-9a52-4573-b264-22f703b195a8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8319eeb4-9a52-4573-b264-22f703b195a8\") " pod="openstack/nova-scheduler-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.534968 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8319eeb4-9a52-4573-b264-22f703b195a8-config-data\") pod \"nova-scheduler-0\" (UID: \"8319eeb4-9a52-4573-b264-22f703b195a8\") " pod="openstack/nova-scheduler-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.538878 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8319eeb4-9a52-4573-b264-22f703b195a8-config-data\") pod \"nova-scheduler-0\" (UID: \"8319eeb4-9a52-4573-b264-22f703b195a8\") " pod="openstack/nova-scheduler-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.539870 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8319eeb4-9a52-4573-b264-22f703b195a8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8319eeb4-9a52-4573-b264-22f703b195a8\") " pod="openstack/nova-scheduler-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.563652 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzn26\" (UniqueName: \"kubernetes.io/projected/8319eeb4-9a52-4573-b264-22f703b195a8-kube-api-access-qzn26\") pod \"nova-scheduler-0\" (UID: \"8319eeb4-9a52-4573-b264-22f703b195a8\") " pod="openstack/nova-scheduler-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.632071 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 11:17:48 crc kubenswrapper[4972]: I1121 11:17:48.667804 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 11:17:49 crc kubenswrapper[4972]: I1121 11:17:49.100200 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 11:17:49 crc kubenswrapper[4972]: I1121 11:17:49.198803 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9b0e5f3d-e84e-4866-81ca-119283c296d7","Type":"ContainerStarted","Data":"ed315efae2564545b449cc67dc698f57e395a14b5dc5c387c93e6b16c451e7ee"} Nov 21 11:17:49 crc kubenswrapper[4972]: I1121 11:17:49.202409 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 11:17:49 crc kubenswrapper[4972]: I1121 11:17:49.554936 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 11:17:49 crc kubenswrapper[4972]: I1121 11:17:49.555264 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 11:17:49 crc kubenswrapper[4972]: I1121 11:17:49.776741 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbe61d30-0cab-4450-aea9-7f3bfa806221" path="/var/lib/kubelet/pods/cbe61d30-0cab-4450-aea9-7f3bfa806221/volumes" Nov 21 11:17:49 crc kubenswrapper[4972]: I1121 11:17:49.777676 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e058b60f-7e51-450d-8330-1b96ad510032" path="/var/lib/kubelet/pods/e058b60f-7e51-450d-8330-1b96ad510032/volumes" Nov 21 11:17:50 crc kubenswrapper[4972]: I1121 11:17:50.212748 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9b0e5f3d-e84e-4866-81ca-119283c296d7","Type":"ContainerStarted","Data":"2bb09ea1fb99562ae2245d7d80a0a482ded0dc19db5a08e39c55e1f6bb81df59"} Nov 21 11:17:50 crc kubenswrapper[4972]: I1121 11:17:50.212885 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 21 11:17:50 crc kubenswrapper[4972]: I1121 11:17:50.215307 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8319eeb4-9a52-4573-b264-22f703b195a8","Type":"ContainerStarted","Data":"f845e482af8c82e253ff34bcbcb151030a614818aabf6d1a4a10d471ceb3645f"} Nov 21 11:17:50 crc kubenswrapper[4972]: I1121 11:17:50.215351 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8319eeb4-9a52-4573-b264-22f703b195a8","Type":"ContainerStarted","Data":"3892f96fa94ea1a73816450078e40944ee7bfde1bdc5b9561682b51c4e2b92d4"} Nov 21 11:17:50 crc kubenswrapper[4972]: I1121 11:17:50.234507 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.234484827 podStartE2EDuration="2.234484827s" podCreationTimestamp="2025-11-21 11:17:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:17:50.226776123 +0000 UTC m=+5815.335918651" watchObservedRunningTime="2025-11-21 11:17:50.234484827 +0000 UTC m=+5815.343627325" Nov 21 11:17:50 crc kubenswrapper[4972]: I1121 11:17:50.250424 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.250402218 podStartE2EDuration="2.250402218s" podCreationTimestamp="2025-11-21 11:17:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-21 11:17:50.246878875 +0000 UTC m=+5815.356021403" watchObservedRunningTime="2025-11-21 11:17:50.250402218 +0000 UTC m=+5815.359544716" Nov 21 11:17:51 crc kubenswrapper[4972]: I1121 11:17:51.460433 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:17:51 crc kubenswrapper[4972]: I1121 11:17:51.475969 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:17:52 crc kubenswrapper[4972]: I1121 11:17:52.250274 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 21 11:17:52 crc kubenswrapper[4972]: I1121 11:17:52.629331 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p6ssr"] Nov 21 11:17:52 crc kubenswrapper[4972]: I1121 11:17:52.632902 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p6ssr" Nov 21 11:17:52 crc kubenswrapper[4972]: I1121 11:17:52.645440 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p6ssr"] Nov 21 11:17:52 crc kubenswrapper[4972]: I1121 11:17:52.729942 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eb16514-1d45-4160-add7-70db8cb1ffbd-catalog-content\") pod \"redhat-marketplace-p6ssr\" (UID: \"8eb16514-1d45-4160-add7-70db8cb1ffbd\") " pod="openshift-marketplace/redhat-marketplace-p6ssr" Nov 21 11:17:52 crc kubenswrapper[4972]: I1121 11:17:52.730434 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95sjb\" (UniqueName: \"kubernetes.io/projected/8eb16514-1d45-4160-add7-70db8cb1ffbd-kube-api-access-95sjb\") pod \"redhat-marketplace-p6ssr\" (UID: \"8eb16514-1d45-4160-add7-70db8cb1ffbd\") " pod="openshift-marketplace/redhat-marketplace-p6ssr" Nov 21 11:17:52 crc kubenswrapper[4972]: I1121 11:17:52.730478 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eb16514-1d45-4160-add7-70db8cb1ffbd-utilities\") pod \"redhat-marketplace-p6ssr\" (UID: \"8eb16514-1d45-4160-add7-70db8cb1ffbd\") " pod="openshift-marketplace/redhat-marketplace-p6ssr" Nov 21 11:17:52 crc kubenswrapper[4972]: I1121 11:17:52.832920 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eb16514-1d45-4160-add7-70db8cb1ffbd-catalog-content\") pod \"redhat-marketplace-p6ssr\" (UID: \"8eb16514-1d45-4160-add7-70db8cb1ffbd\") " pod="openshift-marketplace/redhat-marketplace-p6ssr" Nov 21 11:17:52 crc kubenswrapper[4972]: I1121 11:17:52.833139 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95sjb\" (UniqueName: \"kubernetes.io/projected/8eb16514-1d45-4160-add7-70db8cb1ffbd-kube-api-access-95sjb\") pod \"redhat-marketplace-p6ssr\" (UID: \"8eb16514-1d45-4160-add7-70db8cb1ffbd\") " pod="openshift-marketplace/redhat-marketplace-p6ssr" Nov 21 11:17:52 crc kubenswrapper[4972]: I1121 11:17:52.833196 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eb16514-1d45-4160-add7-70db8cb1ffbd-utilities\") pod \"redhat-marketplace-p6ssr\" 
(UID: \"8eb16514-1d45-4160-add7-70db8cb1ffbd\") " pod="openshift-marketplace/redhat-marketplace-p6ssr" Nov 21 11:17:52 crc kubenswrapper[4972]: I1121 11:17:52.833628 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eb16514-1d45-4160-add7-70db8cb1ffbd-catalog-content\") pod \"redhat-marketplace-p6ssr\" (UID: \"8eb16514-1d45-4160-add7-70db8cb1ffbd\") " pod="openshift-marketplace/redhat-marketplace-p6ssr" Nov 21 11:17:52 crc kubenswrapper[4972]: I1121 11:17:52.833812 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eb16514-1d45-4160-add7-70db8cb1ffbd-utilities\") pod \"redhat-marketplace-p6ssr\" (UID: \"8eb16514-1d45-4160-add7-70db8cb1ffbd\") " pod="openshift-marketplace/redhat-marketplace-p6ssr" Nov 21 11:17:52 crc kubenswrapper[4972]: I1121 11:17:52.857326 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95sjb\" (UniqueName: \"kubernetes.io/projected/8eb16514-1d45-4160-add7-70db8cb1ffbd-kube-api-access-95sjb\") pod \"redhat-marketplace-p6ssr\" (UID: \"8eb16514-1d45-4160-add7-70db8cb1ffbd\") " pod="openshift-marketplace/redhat-marketplace-p6ssr" Nov 21 11:17:52 crc kubenswrapper[4972]: I1121 11:17:52.967549 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p6ssr" Nov 21 11:17:53 crc kubenswrapper[4972]: I1121 11:17:53.289628 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p6ssr"] Nov 21 11:17:53 crc kubenswrapper[4972]: I1121 11:17:53.668172 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 21 11:17:54 crc kubenswrapper[4972]: I1121 11:17:54.263108 4972 generic.go:334] "Generic (PLEG): container finished" podID="8eb16514-1d45-4160-add7-70db8cb1ffbd" containerID="dc90c82f4e2d4277b70cb04b90a0df9731aef9a180e8f2b1853d47a80c72b104" exitCode=0 Nov 21 11:17:54 crc kubenswrapper[4972]: I1121 11:17:54.263195 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p6ssr" event={"ID":"8eb16514-1d45-4160-add7-70db8cb1ffbd","Type":"ContainerDied","Data":"dc90c82f4e2d4277b70cb04b90a0df9731aef9a180e8f2b1853d47a80c72b104"} Nov 21 11:17:54 crc kubenswrapper[4972]: I1121 11:17:54.263260 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p6ssr" event={"ID":"8eb16514-1d45-4160-add7-70db8cb1ffbd","Type":"ContainerStarted","Data":"caf8749efa154cb7b60bae28712296502bcb9a6ddfd3be2022e96bfa5d857ced"} Nov 21 11:17:54 crc kubenswrapper[4972]: I1121 11:17:54.266086 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 11:17:54 crc kubenswrapper[4972]: I1121 11:17:54.532954 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 11:17:54 crc kubenswrapper[4972]: I1121 11:17:54.533025 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 11:17:54 crc kubenswrapper[4972]: I1121 11:17:54.555940 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 11:17:54 crc kubenswrapper[4972]: I1121 11:17:54.556009 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 11:17:54 crc 
kubenswrapper[4972]: I1121 11:17:54.760112 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:17:54 crc kubenswrapper[4972]: E1121 11:17:54.760862 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:17:55 crc kubenswrapper[4972]: I1121 11:17:55.588583 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wxw6v"] Nov 21 11:17:55 crc kubenswrapper[4972]: I1121 11:17:55.591773 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wxw6v" Nov 21 11:17:55 crc kubenswrapper[4972]: I1121 11:17:55.600956 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wxw6v"] Nov 21 11:17:55 crc kubenswrapper[4972]: I1121 11:17:55.616271 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="76a1d86d-887f-461d-9415-908540ed2f33" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.78:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 11:17:55 crc kubenswrapper[4972]: I1121 11:17:55.658017 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8164b141-9e42-4a0c-b161-ec80323b043d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.79:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 11:17:55 crc kubenswrapper[4972]: I1121 11:17:55.692601 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baecb28e-acb1-4d0a-bc4a-2bf5073ac04a-catalog-content\") pod \"redhat-operators-wxw6v\" (UID: \"baecb28e-acb1-4d0a-bc4a-2bf5073ac04a\") " pod="openshift-marketplace/redhat-operators-wxw6v" Nov 21 11:17:55 crc kubenswrapper[4972]: I1121 11:17:55.692680 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baecb28e-acb1-4d0a-bc4a-2bf5073ac04a-utilities\") pod \"redhat-operators-wxw6v\" (UID: \"baecb28e-acb1-4d0a-bc4a-2bf5073ac04a\") " pod="openshift-marketplace/redhat-operators-wxw6v" Nov 21 11:17:55 crc kubenswrapper[4972]: I1121 11:17:55.693129 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4ttw\" (UniqueName: \"kubernetes.io/projected/baecb28e-acb1-4d0a-bc4a-2bf5073ac04a-kube-api-access-b4ttw\") pod \"redhat-operators-wxw6v\" (UID: \"baecb28e-acb1-4d0a-bc4a-2bf5073ac04a\") " pod="openshift-marketplace/redhat-operators-wxw6v" Nov 21 11:17:55 crc kubenswrapper[4972]: I1121 11:17:55.699401 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8164b141-9e42-4a0c-b161-ec80323b043d" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.79:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 11:17:55 crc kubenswrapper[4972]: 
I1121 11:17:55.699488 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="76a1d86d-887f-461d-9415-908540ed2f33" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.78:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 11:17:55 crc kubenswrapper[4972]: I1121 11:17:55.796925 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baecb28e-acb1-4d0a-bc4a-2bf5073ac04a-catalog-content\") pod \"redhat-operators-wxw6v\" (UID: \"baecb28e-acb1-4d0a-bc4a-2bf5073ac04a\") " pod="openshift-marketplace/redhat-operators-wxw6v" Nov 21 11:17:55 crc kubenswrapper[4972]: I1121 11:17:55.796994 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baecb28e-acb1-4d0a-bc4a-2bf5073ac04a-catalog-content\") pod \"redhat-operators-wxw6v\" (UID: \"baecb28e-acb1-4d0a-bc4a-2bf5073ac04a\") " pod="openshift-marketplace/redhat-operators-wxw6v" Nov 21 11:17:55 crc kubenswrapper[4972]: I1121 11:17:55.797411 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baecb28e-acb1-4d0a-bc4a-2bf5073ac04a-utilities\") pod \"redhat-operators-wxw6v\" (UID: \"baecb28e-acb1-4d0a-bc4a-2bf5073ac04a\") " pod="openshift-marketplace/redhat-operators-wxw6v" Nov 21 11:17:55 crc kubenswrapper[4972]: I1121 11:17:55.797724 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4ttw\" (UniqueName: \"kubernetes.io/projected/baecb28e-acb1-4d0a-bc4a-2bf5073ac04a-kube-api-access-b4ttw\") pod \"redhat-operators-wxw6v\" (UID: \"baecb28e-acb1-4d0a-bc4a-2bf5073ac04a\") " pod="openshift-marketplace/redhat-operators-wxw6v" Nov 21 11:17:55 crc kubenswrapper[4972]: I1121 11:17:55.800596 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baecb28e-acb1-4d0a-bc4a-2bf5073ac04a-utilities\") pod \"redhat-operators-wxw6v\" (UID: \"baecb28e-acb1-4d0a-bc4a-2bf5073ac04a\") " pod="openshift-marketplace/redhat-operators-wxw6v" Nov 21 11:17:55 crc kubenswrapper[4972]: I1121 11:17:55.820308 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4ttw\" (UniqueName: \"kubernetes.io/projected/baecb28e-acb1-4d0a-bc4a-2bf5073ac04a-kube-api-access-b4ttw\") pod \"redhat-operators-wxw6v\" (UID: \"baecb28e-acb1-4d0a-bc4a-2bf5073ac04a\") " pod="openshift-marketplace/redhat-operators-wxw6v" Nov 21 11:17:55 crc kubenswrapper[4972]: I1121 11:17:55.929617 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wxw6v" Nov 21 11:17:56 crc kubenswrapper[4972]: I1121 11:17:56.285941 4972 generic.go:334] "Generic (PLEG): container finished" podID="8eb16514-1d45-4160-add7-70db8cb1ffbd" containerID="0fdd20fe20fa00a69c96a2f536aa068e8f68ab236ef7cd422f56e3947dfcb257" exitCode=0 Nov 21 11:17:56 crc kubenswrapper[4972]: I1121 11:17:56.286023 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p6ssr" event={"ID":"8eb16514-1d45-4160-add7-70db8cb1ffbd","Type":"ContainerDied","Data":"0fdd20fe20fa00a69c96a2f536aa068e8f68ab236ef7cd422f56e3947dfcb257"} Nov 21 11:17:56 crc kubenswrapper[4972]: I1121 11:17:56.395732 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wxw6v"] Nov 21 11:17:56 crc kubenswrapper[4972]: W1121 11:17:56.402900 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbaecb28e_acb1_4d0a_bc4a_2bf5073ac04a.slice/crio-c0a264be98d556a1d57708d0bb8d6dd7686fda50fa86de0634c6bf3b8e13b3b9 WatchSource:0}: Error finding container c0a264be98d556a1d57708d0bb8d6dd7686fda50fa86de0634c6bf3b8e13b3b9: Status 404 returned error can't find the container with id c0a264be98d556a1d57708d0bb8d6dd7686fda50fa86de0634c6bf3b8e13b3b9 Nov 21 11:17:56 crc kubenswrapper[4972]: I1121 11:17:56.929390 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.301514 4972 generic.go:334] "Generic (PLEG): container finished" podID="baecb28e-acb1-4d0a-bc4a-2bf5073ac04a" containerID="2fd0bc9a128776e46c4b6b33557939c66d1c70df5d2f4a73e6623436db5acb8b" exitCode=0 Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.301564 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxw6v" event={"ID":"baecb28e-acb1-4d0a-bc4a-2bf5073ac04a","Type":"ContainerDied","Data":"2fd0bc9a128776e46c4b6b33557939c66d1c70df5d2f4a73e6623436db5acb8b"} Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.301591 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxw6v" event={"ID":"baecb28e-acb1-4d0a-bc4a-2bf5073ac04a","Type":"ContainerStarted","Data":"c0a264be98d556a1d57708d0bb8d6dd7686fda50fa86de0634c6bf3b8e13b3b9"} Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.542906 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.545102 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.553223 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.569799 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.643340 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-scripts\") pod \"cinder-scheduler-0\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " pod="openstack/cinder-scheduler-0" Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.643426 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/322cdd2a-cdf6-4e56-b828-58b589f4604e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " pod="openstack/cinder-scheduler-0" Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.643460 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-config-data\") pod \"cinder-scheduler-0\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " pod="openstack/cinder-scheduler-0" Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.643556 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " pod="openstack/cinder-scheduler-0" Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.643589 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k29r\" (UniqueName: \"kubernetes.io/projected/322cdd2a-cdf6-4e56-b828-58b589f4604e-kube-api-access-2k29r\") pod \"cinder-scheduler-0\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " pod="openstack/cinder-scheduler-0" Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.643630 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " pod="openstack/cinder-scheduler-0" Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.744812 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/322cdd2a-cdf6-4e56-b828-58b589f4604e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " pod="openstack/cinder-scheduler-0" Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.744878 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-config-data\") pod \"cinder-scheduler-0\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " pod="openstack/cinder-scheduler-0" Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.744952 4972 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " pod="openstack/cinder-scheduler-0" Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.744978 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k29r\" (UniqueName: \"kubernetes.io/projected/322cdd2a-cdf6-4e56-b828-58b589f4604e-kube-api-access-2k29r\") pod \"cinder-scheduler-0\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " pod="openstack/cinder-scheduler-0" Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.745008 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " pod="openstack/cinder-scheduler-0" Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.745044 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-scripts\") pod \"cinder-scheduler-0\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " pod="openstack/cinder-scheduler-0" Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.745047 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/322cdd2a-cdf6-4e56-b828-58b589f4604e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " pod="openstack/cinder-scheduler-0" Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.750986 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " pod="openstack/cinder-scheduler-0" Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.751142 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " pod="openstack/cinder-scheduler-0" Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.751513 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-config-data\") pod \"cinder-scheduler-0\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " pod="openstack/cinder-scheduler-0" Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.752514 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-scripts\") pod \"cinder-scheduler-0\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " pod="openstack/cinder-scheduler-0" Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.770305 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k29r\" (UniqueName: \"kubernetes.io/projected/322cdd2a-cdf6-4e56-b828-58b589f4604e-kube-api-access-2k29r\") pod \"cinder-scheduler-0\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " pod="openstack/cinder-scheduler-0" 
Nov 21 11:17:57 crc kubenswrapper[4972]: I1121 11:17:57.868984 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 11:17:58 crc kubenswrapper[4972]: I1121 11:17:58.318161 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p6ssr" event={"ID":"8eb16514-1d45-4160-add7-70db8cb1ffbd","Type":"ContainerStarted","Data":"0222748697f3af314d0d6c2a14577477b172ab2bc8988f2d848a6ba618f38d8a"} Nov 21 11:17:58 crc kubenswrapper[4972]: I1121 11:17:58.332899 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 11:17:58 crc kubenswrapper[4972]: I1121 11:17:58.351925 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p6ssr" podStartSLOduration=3.044998683 podStartE2EDuration="6.351894861s" podCreationTimestamp="2025-11-21 11:17:52 +0000 UTC" firstStartedPulling="2025-11-21 11:17:54.265730607 +0000 UTC m=+5819.374873105" lastFinishedPulling="2025-11-21 11:17:57.572626785 +0000 UTC m=+5822.681769283" observedRunningTime="2025-11-21 11:17:58.343784926 +0000 UTC m=+5823.452927434" watchObservedRunningTime="2025-11-21 11:17:58.351894861 +0000 UTC m=+5823.461037359" Nov 21 11:17:58 crc kubenswrapper[4972]: I1121 11:17:58.669326 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 21 11:17:58 crc kubenswrapper[4972]: I1121 11:17:58.677816 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 21 11:17:58 crc kubenswrapper[4972]: I1121 11:17:58.739946 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 21 11:17:59 crc kubenswrapper[4972]: I1121 11:17:59.199948 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 21 11:17:59 crc kubenswrapper[4972]: I1121 11:17:59.200503 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f259f468-d34c-4536-b8b2-c6eda578a447" containerName="cinder-api-log" containerID="cri-o://3279be070e284fc884487fa9e9cf9a85c43e044010789766fc7062c7677771d1" gracePeriod=30 Nov 21 11:17:59 crc kubenswrapper[4972]: I1121 11:17:59.200575 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f259f468-d34c-4536-b8b2-c6eda578a447" containerName="cinder-api" containerID="cri-o://7eb7a907d95f6882c5d96c53f136c64646057fbe0046409ee6a4cc5a5cc24240" gracePeriod=30 Nov 21 11:17:59 crc kubenswrapper[4972]: I1121 11:17:59.329540 4972 generic.go:334] "Generic (PLEG): container finished" podID="baecb28e-acb1-4d0a-bc4a-2bf5073ac04a" containerID="7c12ed1fc22af1d182535e272c821411f90ad5a6fea6ec2a065cf2082e2ade34" exitCode=0 Nov 21 11:17:59 crc kubenswrapper[4972]: I1121 11:17:59.329606 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxw6v" event={"ID":"baecb28e-acb1-4d0a-bc4a-2bf5073ac04a","Type":"ContainerDied","Data":"7c12ed1fc22af1d182535e272c821411f90ad5a6fea6ec2a065cf2082e2ade34"} Nov 21 11:17:59 crc kubenswrapper[4972]: I1121 11:17:59.333059 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"322cdd2a-cdf6-4e56-b828-58b589f4604e","Type":"ContainerStarted","Data":"c845b99bc65d2a2aff96fb06d52f5d449710edf6d578b535f1162dfc50fade99"} Nov 21 11:17:59 crc 
kubenswrapper[4972]: I1121 11:17:59.333117 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"322cdd2a-cdf6-4e56-b828-58b589f4604e","Type":"ContainerStarted","Data":"6187d1e4b4d1d8c5df8ad0728a13a353b38668877b93e030119d6137af61b059"} Nov 21 11:17:59 crc kubenswrapper[4972]: I1121 11:17:59.372092 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.119790 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.121948 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.126842 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.137107 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.196103 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.196163 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.196242 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-run\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.196273 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.196305 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.196351 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.196375 4972 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-dev\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.196398 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.196468 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwt5l\" (UniqueName: \"kubernetes.io/projected/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-kube-api-access-jwt5l\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.196513 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-sys\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.196538 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.196653 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.196718 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.196749 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.196771 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.196796 4972 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.298631 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.298890 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.298910 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.298928 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.298948 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.298985 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.299003 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.299096 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-run\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.299120 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: 
\"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.299139 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.299172 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.299190 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-dev\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.299207 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.299244 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwt5l\" (UniqueName: \"kubernetes.io/projected/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-kube-api-access-jwt5l\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.299274 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-sys\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.299292 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.299765 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-run\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.301093 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.301218 4972 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.301348 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.302157 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.302266 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-dev\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.302352 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.303065 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-sys\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.303347 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.303565 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.304930 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.306860 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 
11:18:00.308570 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.319729 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.321652 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.338276 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwt5l\" (UniqueName: \"kubernetes.io/projected/1d618b5b-1e2f-4608-a7c0-d9fea9f72d46-kube-api-access-jwt5l\") pod \"cinder-volume-volume1-0\" (UID: \"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46\") " pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.349265 4972 generic.go:334] "Generic (PLEG): container finished" podID="f259f468-d34c-4536-b8b2-c6eda578a447" containerID="3279be070e284fc884487fa9e9cf9a85c43e044010789766fc7062c7677771d1" exitCode=143 Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.349415 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f259f468-d34c-4536-b8b2-c6eda578a447","Type":"ContainerDied","Data":"3279be070e284fc884487fa9e9cf9a85c43e044010789766fc7062c7677771d1"} Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.442931 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:00 crc kubenswrapper[4972]: I1121 11:18:00.993043 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.000457 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.007860 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.009063 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.017141 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.117574 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-etc-nvme\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.117682 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-run\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.117753 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-scripts\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.117789 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-config-data-custom\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.117824 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.117905 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-dev\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.117960 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.117996 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc 
kubenswrapper[4972]: I1121 11:18:01.118023 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.118071 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.118104 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-ceph\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.118175 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-config-data\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.118236 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-sys\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.118279 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.118313 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-lib-modules\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.118368 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpnzl\" (UniqueName: \"kubernetes.io/projected/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-kube-api-access-mpnzl\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.220396 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.220463 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.220514 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.220523 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.220579 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-ceph\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.220671 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-config-data\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.220708 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.220751 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-sys\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.220802 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-sys\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.220896 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.220952 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-lib-modules\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.221038 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpnzl\" (UniqueName: 
\"kubernetes.io/projected/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-kube-api-access-mpnzl\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.221169 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-etc-nvme\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.221263 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-run\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.221310 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-scripts\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.221343 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-config-data-custom\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.221380 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.221469 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-dev\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.221576 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.221616 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.221728 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-dev\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.221728 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.221760 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-etc-nvme\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.221783 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-run\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.221791 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-lib-modules\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.221865 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.226697 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-ceph\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.226700 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.226949 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-config-data-custom\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.233921 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-config-data\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.234871 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-scripts\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.243808 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpnzl\" (UniqueName: 
\"kubernetes.io/projected/b5b6b234-7ee4-4997-87e7-2b70b5da72dc-kube-api-access-mpnzl\") pod \"cinder-backup-0\" (UID: \"b5b6b234-7ee4-4997-87e7-2b70b5da72dc\") " pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.353926 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.359630 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46","Type":"ContainerStarted","Data":"ea48fe5153faf54e70c544c94f4cb3de5356a95f637895128ba3ced62c6a9222"} Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.362194 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxw6v" event={"ID":"baecb28e-acb1-4d0a-bc4a-2bf5073ac04a","Type":"ContainerStarted","Data":"6a0b179ca524c68625d4e6ac987e4929826bb866a4a6a005e390c649e47a2138"} Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.364195 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"322cdd2a-cdf6-4e56-b828-58b589f4604e","Type":"ContainerStarted","Data":"b4e87f3b6dec51bedc2b42e23ef1874677737832ba6f1a61db52da97313cf61e"} Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.388627 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wxw6v" podStartSLOduration=3.152920272 podStartE2EDuration="6.388601935s" podCreationTimestamp="2025-11-21 11:17:55 +0000 UTC" firstStartedPulling="2025-11-21 11:17:57.303080447 +0000 UTC m=+5822.412222945" lastFinishedPulling="2025-11-21 11:18:00.53876208 +0000 UTC m=+5825.647904608" observedRunningTime="2025-11-21 11:18:01.37673442 +0000 UTC m=+5826.485876928" watchObservedRunningTime="2025-11-21 11:18:01.388601935 +0000 UTC m=+5826.497744443" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.417142 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.41712507 podStartE2EDuration="4.41712507s" podCreationTimestamp="2025-11-21 11:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:18:01.41372528 +0000 UTC m=+5826.522867808" watchObservedRunningTime="2025-11-21 11:18:01.41712507 +0000 UTC m=+5826.526267568" Nov 21 11:18:01 crc kubenswrapper[4972]: I1121 11:18:01.985450 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 21 11:18:01 crc kubenswrapper[4972]: W1121 11:18:01.985642 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5b6b234_7ee4_4997_87e7_2b70b5da72dc.slice/crio-e673c1209e1f1ce6c5d5de375056e34f9eade5bdb2a9417eb992c88bef5116c4 WatchSource:0}: Error finding container e673c1209e1f1ce6c5d5de375056e34f9eade5bdb2a9417eb992c88bef5116c4: Status 404 returned error can't find the container with id e673c1209e1f1ce6c5d5de375056e34f9eade5bdb2a9417eb992c88bef5116c4 Nov 21 11:18:02 crc kubenswrapper[4972]: I1121 11:18:02.380280 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"b5b6b234-7ee4-4997-87e7-2b70b5da72dc","Type":"ContainerStarted","Data":"e673c1209e1f1ce6c5d5de375056e34f9eade5bdb2a9417eb992c88bef5116c4"} Nov 21 11:18:02 crc kubenswrapper[4972]: I1121 11:18:02.870193 4972 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 21 11:18:02 crc kubenswrapper[4972]: I1121 11:18:02.967874 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p6ssr" Nov 21 11:18:02 crc kubenswrapper[4972]: I1121 11:18:02.967914 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p6ssr" Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.060121 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p6ssr" Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.393998 4972 generic.go:334] "Generic (PLEG): container finished" podID="f259f468-d34c-4536-b8b2-c6eda578a447" containerID="7eb7a907d95f6882c5d96c53f136c64646057fbe0046409ee6a4cc5a5cc24240" exitCode=0 Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.395524 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f259f468-d34c-4536-b8b2-c6eda578a447","Type":"ContainerDied","Data":"7eb7a907d95f6882c5d96c53f136c64646057fbe0046409ee6a4cc5a5cc24240"} Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.395688 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f259f468-d34c-4536-b8b2-c6eda578a447","Type":"ContainerDied","Data":"a678e64e74e219b7bd99390bfab67f1f61e8876c52c6867d75a9574f929b476f"} Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.395760 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a678e64e74e219b7bd99390bfab67f1f61e8876c52c6867d75a9574f929b476f" Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.452775 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.481814 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p6ssr" Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.571393 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j9r2\" (UniqueName: \"kubernetes.io/projected/f259f468-d34c-4536-b8b2-c6eda578a447-kube-api-access-2j9r2\") pod \"f259f468-d34c-4536-b8b2-c6eda578a447\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.571467 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-combined-ca-bundle\") pod \"f259f468-d34c-4536-b8b2-c6eda578a447\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.571493 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-scripts\") pod \"f259f468-d34c-4536-b8b2-c6eda578a447\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.571555 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-config-data\") pod \"f259f468-d34c-4536-b8b2-c6eda578a447\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.571652 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-config-data-custom\") pod \"f259f468-d34c-4536-b8b2-c6eda578a447\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.571675 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f259f468-d34c-4536-b8b2-c6eda578a447-logs\") pod \"f259f468-d34c-4536-b8b2-c6eda578a447\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.571727 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f259f468-d34c-4536-b8b2-c6eda578a447-etc-machine-id\") pod \"f259f468-d34c-4536-b8b2-c6eda578a447\" (UID: \"f259f468-d34c-4536-b8b2-c6eda578a447\") " Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.572178 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f259f468-d34c-4536-b8b2-c6eda578a447-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f259f468-d34c-4536-b8b2-c6eda578a447" (UID: "f259f468-d34c-4536-b8b2-c6eda578a447"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.573894 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f259f468-d34c-4536-b8b2-c6eda578a447-logs" (OuterVolumeSpecName: "logs") pod "f259f468-d34c-4536-b8b2-c6eda578a447" (UID: "f259f468-d34c-4536-b8b2-c6eda578a447"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.577284 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-scripts" (OuterVolumeSpecName: "scripts") pod "f259f468-d34c-4536-b8b2-c6eda578a447" (UID: "f259f468-d34c-4536-b8b2-c6eda578a447"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.589057 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f259f468-d34c-4536-b8b2-c6eda578a447" (UID: "f259f468-d34c-4536-b8b2-c6eda578a447"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.593184 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f259f468-d34c-4536-b8b2-c6eda578a447-kube-api-access-2j9r2" (OuterVolumeSpecName: "kube-api-access-2j9r2") pod "f259f468-d34c-4536-b8b2-c6eda578a447" (UID: "f259f468-d34c-4536-b8b2-c6eda578a447"). InnerVolumeSpecName "kube-api-access-2j9r2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.654932 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-config-data" (OuterVolumeSpecName: "config-data") pod "f259f468-d34c-4536-b8b2-c6eda578a447" (UID: "f259f468-d34c-4536-b8b2-c6eda578a447"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.656594 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f259f468-d34c-4536-b8b2-c6eda578a447" (UID: "f259f468-d34c-4536-b8b2-c6eda578a447"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.674183 4972 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.674215 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f259f468-d34c-4536-b8b2-c6eda578a447-logs\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.674224 4972 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f259f468-d34c-4536-b8b2-c6eda578a447-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.674233 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j9r2\" (UniqueName: \"kubernetes.io/projected/f259f468-d34c-4536-b8b2-c6eda578a447-kube-api-access-2j9r2\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.674242 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.674249 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:03 crc kubenswrapper[4972]: I1121 11:18:03.674259 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f259f468-d34c-4536-b8b2-c6eda578a447-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.167955 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p6ssr"] Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.411104 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.411205 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46","Type":"ContainerStarted","Data":"7a46c7b9e1fd3f5542a7e51844a18b303077808b951e4ba024da3272d3d42864"} Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.412437 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"1d618b5b-1e2f-4608-a7c0-d9fea9f72d46","Type":"ContainerStarted","Data":"d93de4ade31018e824fc1d92fb25d09efae19769d781869c6099369c482e0b4f"} Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.463796 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.169862613 podStartE2EDuration="4.463763757s" podCreationTimestamp="2025-11-21 11:18:00 +0000 UTC" firstStartedPulling="2025-11-21 11:18:01.000538648 +0000 UTC m=+5826.109681156" lastFinishedPulling="2025-11-21 11:18:03.294439802 +0000 UTC m=+5828.403582300" observedRunningTime="2025-11-21 11:18:04.444975469 +0000 UTC m=+5829.554118007" watchObservedRunningTime="2025-11-21 11:18:04.463763757 +0000 UTC m=+5829.572906265" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.477610 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.500066 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.537714 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.539610 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 21 11:18:04 crc kubenswrapper[4972]: E1121 11:18:04.541442 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f259f468-d34c-4536-b8b2-c6eda578a447" containerName="cinder-api-log" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.541913 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f259f468-d34c-4536-b8b2-c6eda578a447" containerName="cinder-api-log" Nov 21 11:18:04 crc kubenswrapper[4972]: E1121 11:18:04.542014 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f259f468-d34c-4536-b8b2-c6eda578a447" containerName="cinder-api" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.542069 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f259f468-d34c-4536-b8b2-c6eda578a447" containerName="cinder-api" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.542582 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f259f468-d34c-4536-b8b2-c6eda578a447" containerName="cinder-api-log" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.542686 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f259f468-d34c-4536-b8b2-c6eda578a447" containerName="cinder-api" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.545104 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.545207 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.545312 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.546027 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.565423 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.565583 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.569698 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.572289 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.585405 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.586047 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.598456 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad8b7f64-b9fa-494f-a895-f6d04406fcb6-logs\") pod \"cinder-api-0\" (UID: \"ad8b7f64-b9fa-494f-a895-f6d04406fcb6\") " pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.598547 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4swh\" (UniqueName: \"kubernetes.io/projected/ad8b7f64-b9fa-494f-a895-f6d04406fcb6-kube-api-access-g4swh\") pod \"cinder-api-0\" (UID: \"ad8b7f64-b9fa-494f-a895-f6d04406fcb6\") " pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.598569 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad8b7f64-b9fa-494f-a895-f6d04406fcb6-scripts\") pod \"cinder-api-0\" (UID: \"ad8b7f64-b9fa-494f-a895-f6d04406fcb6\") " pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.598595 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad8b7f64-b9fa-494f-a895-f6d04406fcb6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ad8b7f64-b9fa-494f-a895-f6d04406fcb6\") " pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.598619 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad8b7f64-b9fa-494f-a895-f6d04406fcb6-config-data\") pod \"cinder-api-0\" (UID: \"ad8b7f64-b9fa-494f-a895-f6d04406fcb6\") " pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.598692 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8b7f64-b9fa-494f-a895-f6d04406fcb6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ad8b7f64-b9fa-494f-a895-f6d04406fcb6\") " pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.598848 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad8b7f64-b9fa-494f-a895-f6d04406fcb6-config-data-custom\") pod \"cinder-api-0\" (UID: \"ad8b7f64-b9fa-494f-a895-f6d04406fcb6\") " pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.604224 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.700282 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4swh\" (UniqueName: \"kubernetes.io/projected/ad8b7f64-b9fa-494f-a895-f6d04406fcb6-kube-api-access-g4swh\") pod \"cinder-api-0\" (UID: \"ad8b7f64-b9fa-494f-a895-f6d04406fcb6\") " pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.700319 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad8b7f64-b9fa-494f-a895-f6d04406fcb6-scripts\") pod \"cinder-api-0\" (UID: \"ad8b7f64-b9fa-494f-a895-f6d04406fcb6\") " pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.700340 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad8b7f64-b9fa-494f-a895-f6d04406fcb6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ad8b7f64-b9fa-494f-a895-f6d04406fcb6\") " pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.700367 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad8b7f64-b9fa-494f-a895-f6d04406fcb6-config-data\") pod \"cinder-api-0\" (UID: \"ad8b7f64-b9fa-494f-a895-f6d04406fcb6\") " pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.700470 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8b7f64-b9fa-494f-a895-f6d04406fcb6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ad8b7f64-b9fa-494f-a895-f6d04406fcb6\") " pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.700606 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad8b7f64-b9fa-494f-a895-f6d04406fcb6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ad8b7f64-b9fa-494f-a895-f6d04406fcb6\") " pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.701179 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad8b7f64-b9fa-494f-a895-f6d04406fcb6-config-data-custom\") pod \"cinder-api-0\" (UID: \"ad8b7f64-b9fa-494f-a895-f6d04406fcb6\") " pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.701264 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad8b7f64-b9fa-494f-a895-f6d04406fcb6-logs\") pod \"cinder-api-0\" (UID: \"ad8b7f64-b9fa-494f-a895-f6d04406fcb6\") " pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.701532 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad8b7f64-b9fa-494f-a895-f6d04406fcb6-logs\") pod \"cinder-api-0\" (UID: \"ad8b7f64-b9fa-494f-a895-f6d04406fcb6\") " 
pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.705239 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad8b7f64-b9fa-494f-a895-f6d04406fcb6-scripts\") pod \"cinder-api-0\" (UID: \"ad8b7f64-b9fa-494f-a895-f6d04406fcb6\") " pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.708129 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad8b7f64-b9fa-494f-a895-f6d04406fcb6-config-data\") pod \"cinder-api-0\" (UID: \"ad8b7f64-b9fa-494f-a895-f6d04406fcb6\") " pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.710514 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad8b7f64-b9fa-494f-a895-f6d04406fcb6-config-data-custom\") pod \"cinder-api-0\" (UID: \"ad8b7f64-b9fa-494f-a895-f6d04406fcb6\") " pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.716521 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8b7f64-b9fa-494f-a895-f6d04406fcb6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ad8b7f64-b9fa-494f-a895-f6d04406fcb6\") " pod="openstack/cinder-api-0" Nov 21 11:18:04 crc kubenswrapper[4972]: I1121 11:18:04.724397 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4swh\" (UniqueName: \"kubernetes.io/projected/ad8b7f64-b9fa-494f-a895-f6d04406fcb6-kube-api-access-g4swh\") pod \"cinder-api-0\" (UID: \"ad8b7f64-b9fa-494f-a895-f6d04406fcb6\") " pod="openstack/cinder-api-0" Nov 21 11:18:05 crc kubenswrapper[4972]: I1121 11:18:05.001027 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 21 11:18:05 crc kubenswrapper[4972]: I1121 11:18:05.423290 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"b5b6b234-7ee4-4997-87e7-2b70b5da72dc","Type":"ContainerStarted","Data":"395489d53e0b48a67011564f4d23c19d009da3fd42c7cc22fabd85284b7bdd7b"} Nov 21 11:18:05 crc kubenswrapper[4972]: I1121 11:18:05.423980 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"b5b6b234-7ee4-4997-87e7-2b70b5da72dc","Type":"ContainerStarted","Data":"3b60d983122fbe9853a429ae249266a31051c587345ad14dc92d0f4da5a35bf1"} Nov 21 11:18:05 crc kubenswrapper[4972]: I1121 11:18:05.424623 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p6ssr" podUID="8eb16514-1d45-4160-add7-70db8cb1ffbd" containerName="registry-server" containerID="cri-o://0222748697f3af314d0d6c2a14577477b172ab2bc8988f2d848a6ba618f38d8a" gracePeriod=2 Nov 21 11:18:05 crc kubenswrapper[4972]: I1121 11:18:05.435106 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 21 11:18:05 crc kubenswrapper[4972]: I1121 11:18:05.444412 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:05 crc kubenswrapper[4972]: I1121 11:18:05.458669 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.211437905 podStartE2EDuration="5.458651592s" podCreationTimestamp="2025-11-21 11:18:00 +0000 UTC" firstStartedPulling="2025-11-21 11:18:01.987950536 +0000 UTC m=+5827.097093044" lastFinishedPulling="2025-11-21 11:18:04.235164233 +0000 UTC m=+5829.344306731" observedRunningTime="2025-11-21 11:18:05.454501192 +0000 UTC m=+5830.563643770" watchObservedRunningTime="2025-11-21 11:18:05.458651592 +0000 UTC m=+5830.567794090" Nov 21 11:18:05 crc kubenswrapper[4972]: I1121 11:18:05.502970 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 21 11:18:05 crc kubenswrapper[4972]: I1121 11:18:05.806681 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f259f468-d34c-4536-b8b2-c6eda578a447" path="/var/lib/kubelet/pods/f259f468-d34c-4536-b8b2-c6eda578a447/volumes" Nov 21 11:18:05 crc kubenswrapper[4972]: I1121 11:18:05.905360 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p6ssr" Nov 21 11:18:05 crc kubenswrapper[4972]: I1121 11:18:05.925998 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95sjb\" (UniqueName: \"kubernetes.io/projected/8eb16514-1d45-4160-add7-70db8cb1ffbd-kube-api-access-95sjb\") pod \"8eb16514-1d45-4160-add7-70db8cb1ffbd\" (UID: \"8eb16514-1d45-4160-add7-70db8cb1ffbd\") " Nov 21 11:18:05 crc kubenswrapper[4972]: I1121 11:18:05.926061 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eb16514-1d45-4160-add7-70db8cb1ffbd-catalog-content\") pod \"8eb16514-1d45-4160-add7-70db8cb1ffbd\" (UID: \"8eb16514-1d45-4160-add7-70db8cb1ffbd\") " Nov 21 11:18:05 crc kubenswrapper[4972]: I1121 11:18:05.926125 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eb16514-1d45-4160-add7-70db8cb1ffbd-utilities\") pod \"8eb16514-1d45-4160-add7-70db8cb1ffbd\" (UID: \"8eb16514-1d45-4160-add7-70db8cb1ffbd\") " Nov 21 11:18:05 crc kubenswrapper[4972]: I1121 11:18:05.926912 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eb16514-1d45-4160-add7-70db8cb1ffbd-utilities" (OuterVolumeSpecName: "utilities") pod "8eb16514-1d45-4160-add7-70db8cb1ffbd" (UID: "8eb16514-1d45-4160-add7-70db8cb1ffbd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:18:05 crc kubenswrapper[4972]: I1121 11:18:05.931079 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wxw6v" Nov 21 11:18:05 crc kubenswrapper[4972]: I1121 11:18:05.931118 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wxw6v" Nov 21 11:18:05 crc kubenswrapper[4972]: I1121 11:18:05.932392 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eb16514-1d45-4160-add7-70db8cb1ffbd-kube-api-access-95sjb" (OuterVolumeSpecName: "kube-api-access-95sjb") pod "8eb16514-1d45-4160-add7-70db8cb1ffbd" (UID: "8eb16514-1d45-4160-add7-70db8cb1ffbd"). InnerVolumeSpecName "kube-api-access-95sjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:18:05 crc kubenswrapper[4972]: I1121 11:18:05.956639 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eb16514-1d45-4160-add7-70db8cb1ffbd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8eb16514-1d45-4160-add7-70db8cb1ffbd" (UID: "8eb16514-1d45-4160-add7-70db8cb1ffbd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.027602 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95sjb\" (UniqueName: \"kubernetes.io/projected/8eb16514-1d45-4160-add7-70db8cb1ffbd-kube-api-access-95sjb\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.027631 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eb16514-1d45-4160-add7-70db8cb1ffbd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.027640 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eb16514-1d45-4160-add7-70db8cb1ffbd-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.355078 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.447064 4972 generic.go:334] "Generic (PLEG): container finished" podID="8eb16514-1d45-4160-add7-70db8cb1ffbd" containerID="0222748697f3af314d0d6c2a14577477b172ab2bc8988f2d848a6ba618f38d8a" exitCode=0 Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.447121 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p6ssr" Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.447166 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p6ssr" event={"ID":"8eb16514-1d45-4160-add7-70db8cb1ffbd","Type":"ContainerDied","Data":"0222748697f3af314d0d6c2a14577477b172ab2bc8988f2d848a6ba618f38d8a"} Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.447250 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p6ssr" event={"ID":"8eb16514-1d45-4160-add7-70db8cb1ffbd","Type":"ContainerDied","Data":"caf8749efa154cb7b60bae28712296502bcb9a6ddfd3be2022e96bfa5d857ced"} Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.447295 4972 scope.go:117] "RemoveContainer" containerID="0222748697f3af314d0d6c2a14577477b172ab2bc8988f2d848a6ba618f38d8a" Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.452276 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ad8b7f64-b9fa-494f-a895-f6d04406fcb6","Type":"ContainerStarted","Data":"e23879a89ec4bd50daa55f8e4999b404c1467ebace69b5bd1d51f2537b34c353"} Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.452326 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ad8b7f64-b9fa-494f-a895-f6d04406fcb6","Type":"ContainerStarted","Data":"6eac0f01b49b1e00b90f70d11914257518e9628ee6aa79d5bf32363a99650ed7"} Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.491219 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p6ssr"] Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.495223 4972 scope.go:117] "RemoveContainer" containerID="0fdd20fe20fa00a69c96a2f536aa068e8f68ab236ef7cd422f56e3947dfcb257" Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.502384 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p6ssr"] Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.520247 4972 scope.go:117] "RemoveContainer" 
containerID="dc90c82f4e2d4277b70cb04b90a0df9731aef9a180e8f2b1853d47a80c72b104" Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.584114 4972 scope.go:117] "RemoveContainer" containerID="0222748697f3af314d0d6c2a14577477b172ab2bc8988f2d848a6ba618f38d8a" Nov 21 11:18:06 crc kubenswrapper[4972]: E1121 11:18:06.584519 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0222748697f3af314d0d6c2a14577477b172ab2bc8988f2d848a6ba618f38d8a\": container with ID starting with 0222748697f3af314d0d6c2a14577477b172ab2bc8988f2d848a6ba618f38d8a not found: ID does not exist" containerID="0222748697f3af314d0d6c2a14577477b172ab2bc8988f2d848a6ba618f38d8a" Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.584553 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0222748697f3af314d0d6c2a14577477b172ab2bc8988f2d848a6ba618f38d8a"} err="failed to get container status \"0222748697f3af314d0d6c2a14577477b172ab2bc8988f2d848a6ba618f38d8a\": rpc error: code = NotFound desc = could not find container \"0222748697f3af314d0d6c2a14577477b172ab2bc8988f2d848a6ba618f38d8a\": container with ID starting with 0222748697f3af314d0d6c2a14577477b172ab2bc8988f2d848a6ba618f38d8a not found: ID does not exist" Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.584573 4972 scope.go:117] "RemoveContainer" containerID="0fdd20fe20fa00a69c96a2f536aa068e8f68ab236ef7cd422f56e3947dfcb257" Nov 21 11:18:06 crc kubenswrapper[4972]: E1121 11:18:06.584859 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fdd20fe20fa00a69c96a2f536aa068e8f68ab236ef7cd422f56e3947dfcb257\": container with ID starting with 0fdd20fe20fa00a69c96a2f536aa068e8f68ab236ef7cd422f56e3947dfcb257 not found: ID does not exist" containerID="0fdd20fe20fa00a69c96a2f536aa068e8f68ab236ef7cd422f56e3947dfcb257" Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.584881 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fdd20fe20fa00a69c96a2f536aa068e8f68ab236ef7cd422f56e3947dfcb257"} err="failed to get container status \"0fdd20fe20fa00a69c96a2f536aa068e8f68ab236ef7cd422f56e3947dfcb257\": rpc error: code = NotFound desc = could not find container \"0fdd20fe20fa00a69c96a2f536aa068e8f68ab236ef7cd422f56e3947dfcb257\": container with ID starting with 0fdd20fe20fa00a69c96a2f536aa068e8f68ab236ef7cd422f56e3947dfcb257 not found: ID does not exist" Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.584893 4972 scope.go:117] "RemoveContainer" containerID="dc90c82f4e2d4277b70cb04b90a0df9731aef9a180e8f2b1853d47a80c72b104" Nov 21 11:18:06 crc kubenswrapper[4972]: E1121 11:18:06.585055 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc90c82f4e2d4277b70cb04b90a0df9731aef9a180e8f2b1853d47a80c72b104\": container with ID starting with dc90c82f4e2d4277b70cb04b90a0df9731aef9a180e8f2b1853d47a80c72b104 not found: ID does not exist" containerID="dc90c82f4e2d4277b70cb04b90a0df9731aef9a180e8f2b1853d47a80c72b104" Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.585075 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc90c82f4e2d4277b70cb04b90a0df9731aef9a180e8f2b1853d47a80c72b104"} err="failed to get container status \"dc90c82f4e2d4277b70cb04b90a0df9731aef9a180e8f2b1853d47a80c72b104\": rpc error: code = 
NotFound desc = could not find container \"dc90c82f4e2d4277b70cb04b90a0df9731aef9a180e8f2b1853d47a80c72b104\": container with ID starting with dc90c82f4e2d4277b70cb04b90a0df9731aef9a180e8f2b1853d47a80c72b104 not found: ID does not exist" Nov 21 11:18:06 crc kubenswrapper[4972]: I1121 11:18:06.982902 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wxw6v" podUID="baecb28e-acb1-4d0a-bc4a-2bf5073ac04a" containerName="registry-server" probeResult="failure" output=< Nov 21 11:18:06 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 11:18:06 crc kubenswrapper[4972]: > Nov 21 11:18:07 crc kubenswrapper[4972]: I1121 11:18:07.465036 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ad8b7f64-b9fa-494f-a895-f6d04406fcb6","Type":"ContainerStarted","Data":"c3ccad6b577a6a60d5cbac1fcb04ea897093fd90a8e576109620420988473f22"} Nov 21 11:18:07 crc kubenswrapper[4972]: I1121 11:18:07.465601 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 21 11:18:07 crc kubenswrapper[4972]: I1121 11:18:07.494369 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.494342388 podStartE2EDuration="3.494342388s" podCreationTimestamp="2025-11-21 11:18:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:18:07.486568362 +0000 UTC m=+5832.595710880" watchObservedRunningTime="2025-11-21 11:18:07.494342388 +0000 UTC m=+5832.603484886" Nov 21 11:18:07 crc kubenswrapper[4972]: I1121 11:18:07.771506 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eb16514-1d45-4160-add7-70db8cb1ffbd" path="/var/lib/kubelet/pods/8eb16514-1d45-4160-add7-70db8cb1ffbd/volumes" Nov 21 11:18:08 crc kubenswrapper[4972]: I1121 11:18:08.084267 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 21 11:18:08 crc kubenswrapper[4972]: I1121 11:18:08.153806 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 11:18:08 crc kubenswrapper[4972]: I1121 11:18:08.478109 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="322cdd2a-cdf6-4e56-b828-58b589f4604e" containerName="cinder-scheduler" containerID="cri-o://c845b99bc65d2a2aff96fb06d52f5d449710edf6d578b535f1162dfc50fade99" gracePeriod=30 Nov 21 11:18:08 crc kubenswrapper[4972]: I1121 11:18:08.478287 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="322cdd2a-cdf6-4e56-b828-58b589f4604e" containerName="probe" containerID="cri-o://b4e87f3b6dec51bedc2b42e23ef1874677737832ba6f1a61db52da97313cf61e" gracePeriod=30 Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.489614 4972 generic.go:334] "Generic (PLEG): container finished" podID="322cdd2a-cdf6-4e56-b828-58b589f4604e" containerID="b4e87f3b6dec51bedc2b42e23ef1874677737832ba6f1a61db52da97313cf61e" exitCode=0 Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.490143 4972 generic.go:334] "Generic (PLEG): container finished" podID="322cdd2a-cdf6-4e56-b828-58b589f4604e" containerID="c845b99bc65d2a2aff96fb06d52f5d449710edf6d578b535f1162dfc50fade99" exitCode=0 Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.489669 4972 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"322cdd2a-cdf6-4e56-b828-58b589f4604e","Type":"ContainerDied","Data":"b4e87f3b6dec51bedc2b42e23ef1874677737832ba6f1a61db52da97313cf61e"} Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.490217 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"322cdd2a-cdf6-4e56-b828-58b589f4604e","Type":"ContainerDied","Data":"c845b99bc65d2a2aff96fb06d52f5d449710edf6d578b535f1162dfc50fade99"} Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.615288 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.759557 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:18:09 crc kubenswrapper[4972]: E1121 11:18:09.760100 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.808098 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/322cdd2a-cdf6-4e56-b828-58b589f4604e-etc-machine-id\") pod \"322cdd2a-cdf6-4e56-b828-58b589f4604e\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.808184 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-combined-ca-bundle\") pod \"322cdd2a-cdf6-4e56-b828-58b589f4604e\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.808237 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-config-data-custom\") pod \"322cdd2a-cdf6-4e56-b828-58b589f4604e\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.808263 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/322cdd2a-cdf6-4e56-b828-58b589f4604e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "322cdd2a-cdf6-4e56-b828-58b589f4604e" (UID: "322cdd2a-cdf6-4e56-b828-58b589f4604e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.808333 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-config-data\") pod \"322cdd2a-cdf6-4e56-b828-58b589f4604e\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.808378 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-scripts\") pod \"322cdd2a-cdf6-4e56-b828-58b589f4604e\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.808434 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k29r\" (UniqueName: \"kubernetes.io/projected/322cdd2a-cdf6-4e56-b828-58b589f4604e-kube-api-access-2k29r\") pod \"322cdd2a-cdf6-4e56-b828-58b589f4604e\" (UID: \"322cdd2a-cdf6-4e56-b828-58b589f4604e\") " Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.810224 4972 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/322cdd2a-cdf6-4e56-b828-58b589f4604e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.814746 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-scripts" (OuterVolumeSpecName: "scripts") pod "322cdd2a-cdf6-4e56-b828-58b589f4604e" (UID: "322cdd2a-cdf6-4e56-b828-58b589f4604e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.814855 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/322cdd2a-cdf6-4e56-b828-58b589f4604e-kube-api-access-2k29r" (OuterVolumeSpecName: "kube-api-access-2k29r") pod "322cdd2a-cdf6-4e56-b828-58b589f4604e" (UID: "322cdd2a-cdf6-4e56-b828-58b589f4604e"). InnerVolumeSpecName "kube-api-access-2k29r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.820167 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "322cdd2a-cdf6-4e56-b828-58b589f4604e" (UID: "322cdd2a-cdf6-4e56-b828-58b589f4604e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.867058 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "322cdd2a-cdf6-4e56-b828-58b589f4604e" (UID: "322cdd2a-cdf6-4e56-b828-58b589f4604e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.911324 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.911355 4972 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.911366 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.911375 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k29r\" (UniqueName: \"kubernetes.io/projected/322cdd2a-cdf6-4e56-b828-58b589f4604e-kube-api-access-2k29r\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:09 crc kubenswrapper[4972]: I1121 11:18:09.915822 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-config-data" (OuterVolumeSpecName: "config-data") pod "322cdd2a-cdf6-4e56-b828-58b589f4604e" (UID: "322cdd2a-cdf6-4e56-b828-58b589f4604e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.013495 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/322cdd2a-cdf6-4e56-b828-58b589f4604e-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.506617 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"322cdd2a-cdf6-4e56-b828-58b589f4604e","Type":"ContainerDied","Data":"6187d1e4b4d1d8c5df8ad0728a13a353b38668877b93e030119d6137af61b059"} Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.506699 4972 scope.go:117] "RemoveContainer" containerID="b4e87f3b6dec51bedc2b42e23ef1874677737832ba6f1a61db52da97313cf61e" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.506727 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.545843 4972 scope.go:117] "RemoveContainer" containerID="c845b99bc65d2a2aff96fb06d52f5d449710edf6d578b535f1162dfc50fade99" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.555217 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.579582 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.588261 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 11:18:10 crc kubenswrapper[4972]: E1121 11:18:10.588816 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb16514-1d45-4160-add7-70db8cb1ffbd" containerName="extract-content" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.588848 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb16514-1d45-4160-add7-70db8cb1ffbd" containerName="extract-content" Nov 21 11:18:10 crc kubenswrapper[4972]: E1121 11:18:10.588883 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="322cdd2a-cdf6-4e56-b828-58b589f4604e" containerName="probe" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.588893 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="322cdd2a-cdf6-4e56-b828-58b589f4604e" containerName="probe" Nov 21 11:18:10 crc kubenswrapper[4972]: E1121 11:18:10.588905 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb16514-1d45-4160-add7-70db8cb1ffbd" containerName="registry-server" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.588913 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb16514-1d45-4160-add7-70db8cb1ffbd" containerName="registry-server" Nov 21 11:18:10 crc kubenswrapper[4972]: E1121 11:18:10.588928 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb16514-1d45-4160-add7-70db8cb1ffbd" containerName="extract-utilities" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.588935 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb16514-1d45-4160-add7-70db8cb1ffbd" containerName="extract-utilities" Nov 21 11:18:10 crc kubenswrapper[4972]: E1121 11:18:10.588955 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="322cdd2a-cdf6-4e56-b828-58b589f4604e" containerName="cinder-scheduler" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.588962 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="322cdd2a-cdf6-4e56-b828-58b589f4604e" containerName="cinder-scheduler" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.589202 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="322cdd2a-cdf6-4e56-b828-58b589f4604e" containerName="probe" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.589219 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="322cdd2a-cdf6-4e56-b828-58b589f4604e" containerName="cinder-scheduler" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.589235 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eb16514-1d45-4160-add7-70db8cb1ffbd" containerName="registry-server" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.590509 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.594367 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.596486 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.624686 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzhmh\" (UniqueName: \"kubernetes.io/projected/a70ae08c-10f1-4552-92a3-c23ede059504-kube-api-access-rzhmh\") pod \"cinder-scheduler-0\" (UID: \"a70ae08c-10f1-4552-92a3-c23ede059504\") " pod="openstack/cinder-scheduler-0" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.625180 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a70ae08c-10f1-4552-92a3-c23ede059504-scripts\") pod \"cinder-scheduler-0\" (UID: \"a70ae08c-10f1-4552-92a3-c23ede059504\") " pod="openstack/cinder-scheduler-0" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.625237 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a70ae08c-10f1-4552-92a3-c23ede059504-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a70ae08c-10f1-4552-92a3-c23ede059504\") " pod="openstack/cinder-scheduler-0" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.625300 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a70ae08c-10f1-4552-92a3-c23ede059504-config-data\") pod \"cinder-scheduler-0\" (UID: \"a70ae08c-10f1-4552-92a3-c23ede059504\") " pod="openstack/cinder-scheduler-0" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.625363 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a70ae08c-10f1-4552-92a3-c23ede059504-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a70ae08c-10f1-4552-92a3-c23ede059504\") " pod="openstack/cinder-scheduler-0" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.625481 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a70ae08c-10f1-4552-92a3-c23ede059504-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a70ae08c-10f1-4552-92a3-c23ede059504\") " pod="openstack/cinder-scheduler-0" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.715419 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.727378 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzhmh\" (UniqueName: \"kubernetes.io/projected/a70ae08c-10f1-4552-92a3-c23ede059504-kube-api-access-rzhmh\") pod \"cinder-scheduler-0\" (UID: \"a70ae08c-10f1-4552-92a3-c23ede059504\") " pod="openstack/cinder-scheduler-0" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.727450 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a70ae08c-10f1-4552-92a3-c23ede059504-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"a70ae08c-10f1-4552-92a3-c23ede059504\") " pod="openstack/cinder-scheduler-0" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.727473 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a70ae08c-10f1-4552-92a3-c23ede059504-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a70ae08c-10f1-4552-92a3-c23ede059504\") " pod="openstack/cinder-scheduler-0" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.727523 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a70ae08c-10f1-4552-92a3-c23ede059504-config-data\") pod \"cinder-scheduler-0\" (UID: \"a70ae08c-10f1-4552-92a3-c23ede059504\") " pod="openstack/cinder-scheduler-0" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.727555 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a70ae08c-10f1-4552-92a3-c23ede059504-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a70ae08c-10f1-4552-92a3-c23ede059504\") " pod="openstack/cinder-scheduler-0" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.727623 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a70ae08c-10f1-4552-92a3-c23ede059504-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a70ae08c-10f1-4552-92a3-c23ede059504\") " pod="openstack/cinder-scheduler-0" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.728210 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a70ae08c-10f1-4552-92a3-c23ede059504-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a70ae08c-10f1-4552-92a3-c23ede059504\") " pod="openstack/cinder-scheduler-0" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.733665 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a70ae08c-10f1-4552-92a3-c23ede059504-config-data\") pod \"cinder-scheduler-0\" (UID: \"a70ae08c-10f1-4552-92a3-c23ede059504\") " pod="openstack/cinder-scheduler-0" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.735634 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a70ae08c-10f1-4552-92a3-c23ede059504-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a70ae08c-10f1-4552-92a3-c23ede059504\") " pod="openstack/cinder-scheduler-0" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.735876 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a70ae08c-10f1-4552-92a3-c23ede059504-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a70ae08c-10f1-4552-92a3-c23ede059504\") " pod="openstack/cinder-scheduler-0" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.736266 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a70ae08c-10f1-4552-92a3-c23ede059504-scripts\") pod \"cinder-scheduler-0\" (UID: \"a70ae08c-10f1-4552-92a3-c23ede059504\") " pod="openstack/cinder-scheduler-0" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.757695 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzhmh\" (UniqueName: 
\"kubernetes.io/projected/a70ae08c-10f1-4552-92a3-c23ede059504-kube-api-access-rzhmh\") pod \"cinder-scheduler-0\" (UID: \"a70ae08c-10f1-4552-92a3-c23ede059504\") " pod="openstack/cinder-scheduler-0" Nov 21 11:18:10 crc kubenswrapper[4972]: I1121 11:18:10.926045 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 11:18:11 crc kubenswrapper[4972]: I1121 11:18:11.451205 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 11:18:11 crc kubenswrapper[4972]: W1121 11:18:11.458636 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda70ae08c_10f1_4552_92a3_c23ede059504.slice/crio-580faaad93b86ebf716a246fadfaec84f678fb3ae0db473265c2f60c35e78791 WatchSource:0}: Error finding container 580faaad93b86ebf716a246fadfaec84f678fb3ae0db473265c2f60c35e78791: Status 404 returned error can't find the container with id 580faaad93b86ebf716a246fadfaec84f678fb3ae0db473265c2f60c35e78791 Nov 21 11:18:11 crc kubenswrapper[4972]: I1121 11:18:11.528125 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a70ae08c-10f1-4552-92a3-c23ede059504","Type":"ContainerStarted","Data":"580faaad93b86ebf716a246fadfaec84f678fb3ae0db473265c2f60c35e78791"} Nov 21 11:18:11 crc kubenswrapper[4972]: I1121 11:18:11.643075 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Nov 21 11:18:11 crc kubenswrapper[4972]: I1121 11:18:11.770233 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="322cdd2a-cdf6-4e56-b828-58b589f4604e" path="/var/lib/kubelet/pods/322cdd2a-cdf6-4e56-b828-58b589f4604e/volumes" Nov 21 11:18:12 crc kubenswrapper[4972]: I1121 11:18:12.553025 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a70ae08c-10f1-4552-92a3-c23ede059504","Type":"ContainerStarted","Data":"a7532c1f4bf36d4ecadcb3b993595b8a0175c334f10019e29d12d4b0b075e6b6"} Nov 21 11:18:13 crc kubenswrapper[4972]: I1121 11:18:13.569955 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a70ae08c-10f1-4552-92a3-c23ede059504","Type":"ContainerStarted","Data":"6e1405031a03f924e24b0fb36c84956d27987eb0e617ac0f288368be7ac6a3ac"} Nov 21 11:18:13 crc kubenswrapper[4972]: I1121 11:18:13.928310 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.928273352 podStartE2EDuration="3.928273352s" podCreationTimestamp="2025-11-21 11:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:18:13.604724624 +0000 UTC m=+5838.713867142" watchObservedRunningTime="2025-11-21 11:18:13.928273352 +0000 UTC m=+5839.037415880" Nov 21 11:18:13 crc kubenswrapper[4972]: I1121 11:18:13.932424 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jssts"] Nov 21 11:18:13 crc kubenswrapper[4972]: I1121 11:18:13.934940 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jssts" Nov 21 11:18:13 crc kubenswrapper[4972]: I1121 11:18:13.957152 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jssts"] Nov 21 11:18:14 crc kubenswrapper[4972]: I1121 11:18:14.096764 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f83b259-4d3d-4498-9447-494bb7901f75-utilities\") pod \"certified-operators-jssts\" (UID: \"7f83b259-4d3d-4498-9447-494bb7901f75\") " pod="openshift-marketplace/certified-operators-jssts" Nov 21 11:18:14 crc kubenswrapper[4972]: I1121 11:18:14.097591 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f83b259-4d3d-4498-9447-494bb7901f75-catalog-content\") pod \"certified-operators-jssts\" (UID: \"7f83b259-4d3d-4498-9447-494bb7901f75\") " pod="openshift-marketplace/certified-operators-jssts" Nov 21 11:18:14 crc kubenswrapper[4972]: I1121 11:18:14.097785 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m5zb\" (UniqueName: \"kubernetes.io/projected/7f83b259-4d3d-4498-9447-494bb7901f75-kube-api-access-6m5zb\") pod \"certified-operators-jssts\" (UID: \"7f83b259-4d3d-4498-9447-494bb7901f75\") " pod="openshift-marketplace/certified-operators-jssts" Nov 21 11:18:14 crc kubenswrapper[4972]: I1121 11:18:14.199283 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m5zb\" (UniqueName: \"kubernetes.io/projected/7f83b259-4d3d-4498-9447-494bb7901f75-kube-api-access-6m5zb\") pod \"certified-operators-jssts\" (UID: \"7f83b259-4d3d-4498-9447-494bb7901f75\") " pod="openshift-marketplace/certified-operators-jssts" Nov 21 11:18:14 crc kubenswrapper[4972]: I1121 11:18:14.199476 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f83b259-4d3d-4498-9447-494bb7901f75-utilities\") pod \"certified-operators-jssts\" (UID: \"7f83b259-4d3d-4498-9447-494bb7901f75\") " pod="openshift-marketplace/certified-operators-jssts" Nov 21 11:18:14 crc kubenswrapper[4972]: I1121 11:18:14.199541 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f83b259-4d3d-4498-9447-494bb7901f75-catalog-content\") pod \"certified-operators-jssts\" (UID: \"7f83b259-4d3d-4498-9447-494bb7901f75\") " pod="openshift-marketplace/certified-operators-jssts" Nov 21 11:18:14 crc kubenswrapper[4972]: I1121 11:18:14.200298 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f83b259-4d3d-4498-9447-494bb7901f75-catalog-content\") pod \"certified-operators-jssts\" (UID: \"7f83b259-4d3d-4498-9447-494bb7901f75\") " pod="openshift-marketplace/certified-operators-jssts" Nov 21 11:18:14 crc kubenswrapper[4972]: I1121 11:18:14.200616 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f83b259-4d3d-4498-9447-494bb7901f75-utilities\") pod \"certified-operators-jssts\" (UID: \"7f83b259-4d3d-4498-9447-494bb7901f75\") " pod="openshift-marketplace/certified-operators-jssts" Nov 21 11:18:14 crc kubenswrapper[4972]: I1121 11:18:14.231563 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6m5zb\" (UniqueName: \"kubernetes.io/projected/7f83b259-4d3d-4498-9447-494bb7901f75-kube-api-access-6m5zb\") pod \"certified-operators-jssts\" (UID: \"7f83b259-4d3d-4498-9447-494bb7901f75\") " pod="openshift-marketplace/certified-operators-jssts" Nov 21 11:18:14 crc kubenswrapper[4972]: I1121 11:18:14.261125 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jssts" Nov 21 11:18:14 crc kubenswrapper[4972]: I1121 11:18:14.791421 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jssts"] Nov 21 11:18:15 crc kubenswrapper[4972]: I1121 11:18:15.592444 4972 generic.go:334] "Generic (PLEG): container finished" podID="7f83b259-4d3d-4498-9447-494bb7901f75" containerID="935a6866f8869943f6dfbd990e8764a722b1a85fa580a1bc134c012883d8ed27" exitCode=0 Nov 21 11:18:15 crc kubenswrapper[4972]: I1121 11:18:15.592484 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jssts" event={"ID":"7f83b259-4d3d-4498-9447-494bb7901f75","Type":"ContainerDied","Data":"935a6866f8869943f6dfbd990e8764a722b1a85fa580a1bc134c012883d8ed27"} Nov 21 11:18:15 crc kubenswrapper[4972]: I1121 11:18:15.592506 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jssts" event={"ID":"7f83b259-4d3d-4498-9447-494bb7901f75","Type":"ContainerStarted","Data":"5a7d58dbdd6f4841d32b60858c705d366d9a4f35d32ead7cbdbc3f2326f5b211"} Nov 21 11:18:15 crc kubenswrapper[4972]: I1121 11:18:15.927598 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 21 11:18:16 crc kubenswrapper[4972]: I1121 11:18:16.016463 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wxw6v" Nov 21 11:18:16 crc kubenswrapper[4972]: I1121 11:18:16.087199 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wxw6v" Nov 21 11:18:16 crc kubenswrapper[4972]: I1121 11:18:16.607234 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jssts" event={"ID":"7f83b259-4d3d-4498-9447-494bb7901f75","Type":"ContainerStarted","Data":"a7f4b262ded7912427f328b3e7ee6b839fef90a220d6d23d7a94bf7ffba5d720"} Nov 21 11:18:16 crc kubenswrapper[4972]: I1121 11:18:16.864207 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 21 11:18:17 crc kubenswrapper[4972]: I1121 11:18:17.616706 4972 generic.go:334] "Generic (PLEG): container finished" podID="7f83b259-4d3d-4498-9447-494bb7901f75" containerID="a7f4b262ded7912427f328b3e7ee6b839fef90a220d6d23d7a94bf7ffba5d720" exitCode=0 Nov 21 11:18:17 crc kubenswrapper[4972]: I1121 11:18:17.616759 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jssts" event={"ID":"7f83b259-4d3d-4498-9447-494bb7901f75","Type":"ContainerDied","Data":"a7f4b262ded7912427f328b3e7ee6b839fef90a220d6d23d7a94bf7ffba5d720"} Nov 21 11:18:18 crc kubenswrapper[4972]: I1121 11:18:18.630942 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jssts" event={"ID":"7f83b259-4d3d-4498-9447-494bb7901f75","Type":"ContainerStarted","Data":"94dd8c71be2f221e8f564aea4e9d05b319b075224e33b3739ecce5db74216c23"} Nov 21 11:18:18 crc kubenswrapper[4972]: 
I1121 11:18:18.662657 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jssts" podStartSLOduration=3.145841555 podStartE2EDuration="5.662629811s" podCreationTimestamp="2025-11-21 11:18:13 +0000 UTC" firstStartedPulling="2025-11-21 11:18:15.595028719 +0000 UTC m=+5840.704171267" lastFinishedPulling="2025-11-21 11:18:18.111816985 +0000 UTC m=+5843.220959523" observedRunningTime="2025-11-21 11:18:18.653623733 +0000 UTC m=+5843.762766251" watchObservedRunningTime="2025-11-21 11:18:18.662629811 +0000 UTC m=+5843.771772349" Nov 21 11:18:20 crc kubenswrapper[4972]: I1121 11:18:20.315402 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wxw6v"] Nov 21 11:18:20 crc kubenswrapper[4972]: I1121 11:18:20.316184 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wxw6v" podUID="baecb28e-acb1-4d0a-bc4a-2bf5073ac04a" containerName="registry-server" containerID="cri-o://6a0b179ca524c68625d4e6ac987e4929826bb866a4a6a005e390c649e47a2138" gracePeriod=2 Nov 21 11:18:20 crc kubenswrapper[4972]: I1121 11:18:20.662444 4972 generic.go:334] "Generic (PLEG): container finished" podID="baecb28e-acb1-4d0a-bc4a-2bf5073ac04a" containerID="6a0b179ca524c68625d4e6ac987e4929826bb866a4a6a005e390c649e47a2138" exitCode=0 Nov 21 11:18:20 crc kubenswrapper[4972]: I1121 11:18:20.662499 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxw6v" event={"ID":"baecb28e-acb1-4d0a-bc4a-2bf5073ac04a","Type":"ContainerDied","Data":"6a0b179ca524c68625d4e6ac987e4929826bb866a4a6a005e390c649e47a2138"} Nov 21 11:18:20 crc kubenswrapper[4972]: I1121 11:18:20.761590 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:18:20 crc kubenswrapper[4972]: E1121 11:18:20.761871 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:18:20 crc kubenswrapper[4972]: I1121 11:18:20.873285 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wxw6v" Nov 21 11:18:21 crc kubenswrapper[4972]: I1121 11:18:21.051979 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baecb28e-acb1-4d0a-bc4a-2bf5073ac04a-utilities\") pod \"baecb28e-acb1-4d0a-bc4a-2bf5073ac04a\" (UID: \"baecb28e-acb1-4d0a-bc4a-2bf5073ac04a\") " Nov 21 11:18:21 crc kubenswrapper[4972]: I1121 11:18:21.052450 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4ttw\" (UniqueName: \"kubernetes.io/projected/baecb28e-acb1-4d0a-bc4a-2bf5073ac04a-kube-api-access-b4ttw\") pod \"baecb28e-acb1-4d0a-bc4a-2bf5073ac04a\" (UID: \"baecb28e-acb1-4d0a-bc4a-2bf5073ac04a\") " Nov 21 11:18:21 crc kubenswrapper[4972]: I1121 11:18:21.052511 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baecb28e-acb1-4d0a-bc4a-2bf5073ac04a-catalog-content\") pod \"baecb28e-acb1-4d0a-bc4a-2bf5073ac04a\" (UID: \"baecb28e-acb1-4d0a-bc4a-2bf5073ac04a\") " Nov 21 11:18:21 crc kubenswrapper[4972]: I1121 11:18:21.053636 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baecb28e-acb1-4d0a-bc4a-2bf5073ac04a-utilities" (OuterVolumeSpecName: "utilities") pod "baecb28e-acb1-4d0a-bc4a-2bf5073ac04a" (UID: "baecb28e-acb1-4d0a-bc4a-2bf5073ac04a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:18:21 crc kubenswrapper[4972]: I1121 11:18:21.064427 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baecb28e-acb1-4d0a-bc4a-2bf5073ac04a-kube-api-access-b4ttw" (OuterVolumeSpecName: "kube-api-access-b4ttw") pod "baecb28e-acb1-4d0a-bc4a-2bf5073ac04a" (UID: "baecb28e-acb1-4d0a-bc4a-2bf5073ac04a"). InnerVolumeSpecName "kube-api-access-b4ttw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:18:21 crc kubenswrapper[4972]: I1121 11:18:21.119967 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 21 11:18:21 crc kubenswrapper[4972]: I1121 11:18:21.154915 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4ttw\" (UniqueName: \"kubernetes.io/projected/baecb28e-acb1-4d0a-bc4a-2bf5073ac04a-kube-api-access-b4ttw\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:21 crc kubenswrapper[4972]: I1121 11:18:21.154974 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baecb28e-acb1-4d0a-bc4a-2bf5073ac04a-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:21 crc kubenswrapper[4972]: I1121 11:18:21.157659 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baecb28e-acb1-4d0a-bc4a-2bf5073ac04a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "baecb28e-acb1-4d0a-bc4a-2bf5073ac04a" (UID: "baecb28e-acb1-4d0a-bc4a-2bf5073ac04a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:18:21 crc kubenswrapper[4972]: I1121 11:18:21.256951 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baecb28e-acb1-4d0a-bc4a-2bf5073ac04a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:21 crc kubenswrapper[4972]: I1121 11:18:21.675938 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxw6v" event={"ID":"baecb28e-acb1-4d0a-bc4a-2bf5073ac04a","Type":"ContainerDied","Data":"c0a264be98d556a1d57708d0bb8d6dd7686fda50fa86de0634c6bf3b8e13b3b9"} Nov 21 11:18:21 crc kubenswrapper[4972]: I1121 11:18:21.675990 4972 scope.go:117] "RemoveContainer" containerID="6a0b179ca524c68625d4e6ac987e4929826bb866a4a6a005e390c649e47a2138" Nov 21 11:18:21 crc kubenswrapper[4972]: I1121 11:18:21.676052 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wxw6v" Nov 21 11:18:21 crc kubenswrapper[4972]: I1121 11:18:21.713986 4972 scope.go:117] "RemoveContainer" containerID="7c12ed1fc22af1d182535e272c821411f90ad5a6fea6ec2a065cf2082e2ade34" Nov 21 11:18:21 crc kubenswrapper[4972]: I1121 11:18:21.724555 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wxw6v"] Nov 21 11:18:21 crc kubenswrapper[4972]: I1121 11:18:21.731603 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wxw6v"] Nov 21 11:18:21 crc kubenswrapper[4972]: I1121 11:18:21.747173 4972 scope.go:117] "RemoveContainer" containerID="2fd0bc9a128776e46c4b6b33557939c66d1c70df5d2f4a73e6623436db5acb8b" Nov 21 11:18:21 crc kubenswrapper[4972]: I1121 11:18:21.782864 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baecb28e-acb1-4d0a-bc4a-2bf5073ac04a" path="/var/lib/kubelet/pods/baecb28e-acb1-4d0a-bc4a-2bf5073ac04a/volumes" Nov 21 11:18:24 crc kubenswrapper[4972]: I1121 11:18:24.261675 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jssts" Nov 21 11:18:24 crc kubenswrapper[4972]: I1121 11:18:24.262181 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jssts" Nov 21 11:18:25 crc kubenswrapper[4972]: I1121 11:18:25.324032 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jssts" podUID="7f83b259-4d3d-4498-9447-494bb7901f75" containerName="registry-server" probeResult="failure" output=< Nov 21 11:18:25 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 11:18:25 crc kubenswrapper[4972]: > Nov 21 11:18:34 crc kubenswrapper[4972]: I1121 11:18:34.345956 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jssts" Nov 21 11:18:34 crc kubenswrapper[4972]: I1121 11:18:34.410099 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jssts" Nov 21 11:18:34 crc kubenswrapper[4972]: I1121 11:18:34.596130 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jssts"] Nov 21 11:18:35 crc kubenswrapper[4972]: I1121 11:18:35.771597 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:18:35 crc kubenswrapper[4972]: E1121 
11:18:35.772111 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:18:35 crc kubenswrapper[4972]: I1121 11:18:35.862300 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jssts" podUID="7f83b259-4d3d-4498-9447-494bb7901f75" containerName="registry-server" containerID="cri-o://94dd8c71be2f221e8f564aea4e9d05b319b075224e33b3739ecce5db74216c23" gracePeriod=2 Nov 21 11:18:36 crc kubenswrapper[4972]: I1121 11:18:36.397492 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jssts" Nov 21 11:18:36 crc kubenswrapper[4972]: I1121 11:18:36.488699 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f83b259-4d3d-4498-9447-494bb7901f75-catalog-content\") pod \"7f83b259-4d3d-4498-9447-494bb7901f75\" (UID: \"7f83b259-4d3d-4498-9447-494bb7901f75\") " Nov 21 11:18:36 crc kubenswrapper[4972]: I1121 11:18:36.489094 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m5zb\" (UniqueName: \"kubernetes.io/projected/7f83b259-4d3d-4498-9447-494bb7901f75-kube-api-access-6m5zb\") pod \"7f83b259-4d3d-4498-9447-494bb7901f75\" (UID: \"7f83b259-4d3d-4498-9447-494bb7901f75\") " Nov 21 11:18:36 crc kubenswrapper[4972]: I1121 11:18:36.489323 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f83b259-4d3d-4498-9447-494bb7901f75-utilities\") pod \"7f83b259-4d3d-4498-9447-494bb7901f75\" (UID: \"7f83b259-4d3d-4498-9447-494bb7901f75\") " Nov 21 11:18:36 crc kubenswrapper[4972]: I1121 11:18:36.490785 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f83b259-4d3d-4498-9447-494bb7901f75-utilities" (OuterVolumeSpecName: "utilities") pod "7f83b259-4d3d-4498-9447-494bb7901f75" (UID: "7f83b259-4d3d-4498-9447-494bb7901f75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:18:36 crc kubenswrapper[4972]: I1121 11:18:36.495115 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f83b259-4d3d-4498-9447-494bb7901f75-kube-api-access-6m5zb" (OuterVolumeSpecName: "kube-api-access-6m5zb") pod "7f83b259-4d3d-4498-9447-494bb7901f75" (UID: "7f83b259-4d3d-4498-9447-494bb7901f75"). InnerVolumeSpecName "kube-api-access-6m5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:18:36 crc kubenswrapper[4972]: I1121 11:18:36.535593 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f83b259-4d3d-4498-9447-494bb7901f75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f83b259-4d3d-4498-9447-494bb7901f75" (UID: "7f83b259-4d3d-4498-9447-494bb7901f75"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:18:36 crc kubenswrapper[4972]: I1121 11:18:36.592186 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f83b259-4d3d-4498-9447-494bb7901f75-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:36 crc kubenswrapper[4972]: I1121 11:18:36.592499 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f83b259-4d3d-4498-9447-494bb7901f75-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:36 crc kubenswrapper[4972]: I1121 11:18:36.592512 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m5zb\" (UniqueName: \"kubernetes.io/projected/7f83b259-4d3d-4498-9447-494bb7901f75-kube-api-access-6m5zb\") on node \"crc\" DevicePath \"\"" Nov 21 11:18:36 crc kubenswrapper[4972]: I1121 11:18:36.879555 4972 generic.go:334] "Generic (PLEG): container finished" podID="7f83b259-4d3d-4498-9447-494bb7901f75" containerID="94dd8c71be2f221e8f564aea4e9d05b319b075224e33b3739ecce5db74216c23" exitCode=0 Nov 21 11:18:36 crc kubenswrapper[4972]: I1121 11:18:36.879625 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jssts" event={"ID":"7f83b259-4d3d-4498-9447-494bb7901f75","Type":"ContainerDied","Data":"94dd8c71be2f221e8f564aea4e9d05b319b075224e33b3739ecce5db74216c23"} Nov 21 11:18:36 crc kubenswrapper[4972]: I1121 11:18:36.879669 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jssts" event={"ID":"7f83b259-4d3d-4498-9447-494bb7901f75","Type":"ContainerDied","Data":"5a7d58dbdd6f4841d32b60858c705d366d9a4f35d32ead7cbdbc3f2326f5b211"} Nov 21 11:18:36 crc kubenswrapper[4972]: I1121 11:18:36.879705 4972 scope.go:117] "RemoveContainer" containerID="94dd8c71be2f221e8f564aea4e9d05b319b075224e33b3739ecce5db74216c23" Nov 21 11:18:36 crc kubenswrapper[4972]: I1121 11:18:36.879972 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jssts" Nov 21 11:18:36 crc kubenswrapper[4972]: I1121 11:18:36.919999 4972 scope.go:117] "RemoveContainer" containerID="a7f4b262ded7912427f328b3e7ee6b839fef90a220d6d23d7a94bf7ffba5d720" Nov 21 11:18:36 crc kubenswrapper[4972]: I1121 11:18:36.943540 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jssts"] Nov 21 11:18:36 crc kubenswrapper[4972]: I1121 11:18:36.952258 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jssts"] Nov 21 11:18:36 crc kubenswrapper[4972]: I1121 11:18:36.972214 4972 scope.go:117] "RemoveContainer" containerID="935a6866f8869943f6dfbd990e8764a722b1a85fa580a1bc134c012883d8ed27" Nov 21 11:18:37 crc kubenswrapper[4972]: I1121 11:18:37.005695 4972 scope.go:117] "RemoveContainer" containerID="94dd8c71be2f221e8f564aea4e9d05b319b075224e33b3739ecce5db74216c23" Nov 21 11:18:37 crc kubenswrapper[4972]: E1121 11:18:37.006574 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94dd8c71be2f221e8f564aea4e9d05b319b075224e33b3739ecce5db74216c23\": container with ID starting with 94dd8c71be2f221e8f564aea4e9d05b319b075224e33b3739ecce5db74216c23 not found: ID does not exist" containerID="94dd8c71be2f221e8f564aea4e9d05b319b075224e33b3739ecce5db74216c23" Nov 21 11:18:37 crc kubenswrapper[4972]: I1121 11:18:37.006707 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94dd8c71be2f221e8f564aea4e9d05b319b075224e33b3739ecce5db74216c23"} err="failed to get container status \"94dd8c71be2f221e8f564aea4e9d05b319b075224e33b3739ecce5db74216c23\": rpc error: code = NotFound desc = could not find container \"94dd8c71be2f221e8f564aea4e9d05b319b075224e33b3739ecce5db74216c23\": container with ID starting with 94dd8c71be2f221e8f564aea4e9d05b319b075224e33b3739ecce5db74216c23 not found: ID does not exist" Nov 21 11:18:37 crc kubenswrapper[4972]: I1121 11:18:37.006808 4972 scope.go:117] "RemoveContainer" containerID="a7f4b262ded7912427f328b3e7ee6b839fef90a220d6d23d7a94bf7ffba5d720" Nov 21 11:18:37 crc kubenswrapper[4972]: E1121 11:18:37.007263 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7f4b262ded7912427f328b3e7ee6b839fef90a220d6d23d7a94bf7ffba5d720\": container with ID starting with a7f4b262ded7912427f328b3e7ee6b839fef90a220d6d23d7a94bf7ffba5d720 not found: ID does not exist" containerID="a7f4b262ded7912427f328b3e7ee6b839fef90a220d6d23d7a94bf7ffba5d720" Nov 21 11:18:37 crc kubenswrapper[4972]: I1121 11:18:37.007647 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7f4b262ded7912427f328b3e7ee6b839fef90a220d6d23d7a94bf7ffba5d720"} err="failed to get container status \"a7f4b262ded7912427f328b3e7ee6b839fef90a220d6d23d7a94bf7ffba5d720\": rpc error: code = NotFound desc = could not find container \"a7f4b262ded7912427f328b3e7ee6b839fef90a220d6d23d7a94bf7ffba5d720\": container with ID starting with a7f4b262ded7912427f328b3e7ee6b839fef90a220d6d23d7a94bf7ffba5d720 not found: ID does not exist" Nov 21 11:18:37 crc kubenswrapper[4972]: I1121 11:18:37.007768 4972 scope.go:117] "RemoveContainer" containerID="935a6866f8869943f6dfbd990e8764a722b1a85fa580a1bc134c012883d8ed27" Nov 21 11:18:37 crc kubenswrapper[4972]: E1121 11:18:37.009112 4972 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"935a6866f8869943f6dfbd990e8764a722b1a85fa580a1bc134c012883d8ed27\": container with ID starting with 935a6866f8869943f6dfbd990e8764a722b1a85fa580a1bc134c012883d8ed27 not found: ID does not exist" containerID="935a6866f8869943f6dfbd990e8764a722b1a85fa580a1bc134c012883d8ed27" Nov 21 11:18:37 crc kubenswrapper[4972]: I1121 11:18:37.009170 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"935a6866f8869943f6dfbd990e8764a722b1a85fa580a1bc134c012883d8ed27"} err="failed to get container status \"935a6866f8869943f6dfbd990e8764a722b1a85fa580a1bc134c012883d8ed27\": rpc error: code = NotFound desc = could not find container \"935a6866f8869943f6dfbd990e8764a722b1a85fa580a1bc134c012883d8ed27\": container with ID starting with 935a6866f8869943f6dfbd990e8764a722b1a85fa580a1bc134c012883d8ed27 not found: ID does not exist" Nov 21 11:18:37 crc kubenswrapper[4972]: I1121 11:18:37.772499 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f83b259-4d3d-4498-9447-494bb7901f75" path="/var/lib/kubelet/pods/7f83b259-4d3d-4498-9447-494bb7901f75/volumes" Nov 21 11:18:50 crc kubenswrapper[4972]: I1121 11:18:50.760098 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:18:50 crc kubenswrapper[4972]: E1121 11:18:50.763445 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:19:01 crc kubenswrapper[4972]: I1121 11:19:01.761559 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:19:01 crc kubenswrapper[4972]: E1121 11:19:01.762613 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:19:09 crc kubenswrapper[4972]: I1121 11:19:09.522391 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wnb4d"] Nov 21 11:19:09 crc kubenswrapper[4972]: E1121 11:19:09.524653 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baecb28e-acb1-4d0a-bc4a-2bf5073ac04a" containerName="extract-content" Nov 21 11:19:09 crc kubenswrapper[4972]: I1121 11:19:09.524957 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="baecb28e-acb1-4d0a-bc4a-2bf5073ac04a" containerName="extract-content" Nov 21 11:19:09 crc kubenswrapper[4972]: E1121 11:19:09.525067 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baecb28e-acb1-4d0a-bc4a-2bf5073ac04a" containerName="extract-utilities" Nov 21 11:19:09 crc kubenswrapper[4972]: I1121 11:19:09.525154 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="baecb28e-acb1-4d0a-bc4a-2bf5073ac04a" containerName="extract-utilities" Nov 21 11:19:09 crc kubenswrapper[4972]: E1121 
11:19:09.525278 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baecb28e-acb1-4d0a-bc4a-2bf5073ac04a" containerName="registry-server" Nov 21 11:19:09 crc kubenswrapper[4972]: I1121 11:19:09.525361 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="baecb28e-acb1-4d0a-bc4a-2bf5073ac04a" containerName="registry-server" Nov 21 11:19:09 crc kubenswrapper[4972]: E1121 11:19:09.525446 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f83b259-4d3d-4498-9447-494bb7901f75" containerName="extract-content" Nov 21 11:19:09 crc kubenswrapper[4972]: I1121 11:19:09.525523 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f83b259-4d3d-4498-9447-494bb7901f75" containerName="extract-content" Nov 21 11:19:09 crc kubenswrapper[4972]: E1121 11:19:09.525611 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f83b259-4d3d-4498-9447-494bb7901f75" containerName="extract-utilities" Nov 21 11:19:09 crc kubenswrapper[4972]: I1121 11:19:09.525692 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f83b259-4d3d-4498-9447-494bb7901f75" containerName="extract-utilities" Nov 21 11:19:09 crc kubenswrapper[4972]: E1121 11:19:09.525783 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f83b259-4d3d-4498-9447-494bb7901f75" containerName="registry-server" Nov 21 11:19:09 crc kubenswrapper[4972]: I1121 11:19:09.525883 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f83b259-4d3d-4498-9447-494bb7901f75" containerName="registry-server" Nov 21 11:19:09 crc kubenswrapper[4972]: I1121 11:19:09.526184 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="baecb28e-acb1-4d0a-bc4a-2bf5073ac04a" containerName="registry-server" Nov 21 11:19:09 crc kubenswrapper[4972]: I1121 11:19:09.526293 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f83b259-4d3d-4498-9447-494bb7901f75" containerName="registry-server" Nov 21 11:19:09 crc kubenswrapper[4972]: I1121 11:19:09.528084 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wnb4d" Nov 21 11:19:09 crc kubenswrapper[4972]: I1121 11:19:09.538102 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnb4d"] Nov 21 11:19:09 crc kubenswrapper[4972]: I1121 11:19:09.650313 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f584b350-0eec-4040-9b24-7f6999cc8c39-utilities\") pod \"community-operators-wnb4d\" (UID: \"f584b350-0eec-4040-9b24-7f6999cc8c39\") " pod="openshift-marketplace/community-operators-wnb4d" Nov 21 11:19:09 crc kubenswrapper[4972]: I1121 11:19:09.650447 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f584b350-0eec-4040-9b24-7f6999cc8c39-catalog-content\") pod \"community-operators-wnb4d\" (UID: \"f584b350-0eec-4040-9b24-7f6999cc8c39\") " pod="openshift-marketplace/community-operators-wnb4d" Nov 21 11:19:09 crc kubenswrapper[4972]: I1121 11:19:09.650597 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg5wd\" (UniqueName: \"kubernetes.io/projected/f584b350-0eec-4040-9b24-7f6999cc8c39-kube-api-access-lg5wd\") pod \"community-operators-wnb4d\" (UID: \"f584b350-0eec-4040-9b24-7f6999cc8c39\") " pod="openshift-marketplace/community-operators-wnb4d" Nov 21 11:19:09 crc kubenswrapper[4972]: I1121 11:19:09.752261 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg5wd\" (UniqueName: \"kubernetes.io/projected/f584b350-0eec-4040-9b24-7f6999cc8c39-kube-api-access-lg5wd\") pod \"community-operators-wnb4d\" (UID: \"f584b350-0eec-4040-9b24-7f6999cc8c39\") " pod="openshift-marketplace/community-operators-wnb4d" Nov 21 11:19:09 crc kubenswrapper[4972]: I1121 11:19:09.752442 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f584b350-0eec-4040-9b24-7f6999cc8c39-utilities\") pod \"community-operators-wnb4d\" (UID: \"f584b350-0eec-4040-9b24-7f6999cc8c39\") " pod="openshift-marketplace/community-operators-wnb4d" Nov 21 11:19:09 crc kubenswrapper[4972]: I1121 11:19:09.752532 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f584b350-0eec-4040-9b24-7f6999cc8c39-catalog-content\") pod \"community-operators-wnb4d\" (UID: \"f584b350-0eec-4040-9b24-7f6999cc8c39\") " pod="openshift-marketplace/community-operators-wnb4d" Nov 21 11:19:09 crc kubenswrapper[4972]: I1121 11:19:09.753009 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f584b350-0eec-4040-9b24-7f6999cc8c39-utilities\") pod \"community-operators-wnb4d\" (UID: \"f584b350-0eec-4040-9b24-7f6999cc8c39\") " pod="openshift-marketplace/community-operators-wnb4d" Nov 21 11:19:09 crc kubenswrapper[4972]: I1121 11:19:09.753079 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f584b350-0eec-4040-9b24-7f6999cc8c39-catalog-content\") pod \"community-operators-wnb4d\" (UID: \"f584b350-0eec-4040-9b24-7f6999cc8c39\") " pod="openshift-marketplace/community-operators-wnb4d" Nov 21 11:19:09 crc kubenswrapper[4972]: I1121 11:19:09.778802 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lg5wd\" (UniqueName: \"kubernetes.io/projected/f584b350-0eec-4040-9b24-7f6999cc8c39-kube-api-access-lg5wd\") pod \"community-operators-wnb4d\" (UID: \"f584b350-0eec-4040-9b24-7f6999cc8c39\") " pod="openshift-marketplace/community-operators-wnb4d" Nov 21 11:19:09 crc kubenswrapper[4972]: I1121 11:19:09.868301 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnb4d" Nov 21 11:19:10 crc kubenswrapper[4972]: I1121 11:19:10.389191 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnb4d"] Nov 21 11:19:11 crc kubenswrapper[4972]: I1121 11:19:11.309907 4972 generic.go:334] "Generic (PLEG): container finished" podID="f584b350-0eec-4040-9b24-7f6999cc8c39" containerID="4b2c6fe139adb9cb447f7c7b8cb97888d58df4c7cdce9b04e96dc95e961eb4bd" exitCode=0 Nov 21 11:19:11 crc kubenswrapper[4972]: I1121 11:19:11.310160 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnb4d" event={"ID":"f584b350-0eec-4040-9b24-7f6999cc8c39","Type":"ContainerDied","Data":"4b2c6fe139adb9cb447f7c7b8cb97888d58df4c7cdce9b04e96dc95e961eb4bd"} Nov 21 11:19:11 crc kubenswrapper[4972]: I1121 11:19:11.310321 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnb4d" event={"ID":"f584b350-0eec-4040-9b24-7f6999cc8c39","Type":"ContainerStarted","Data":"b96aee645589d656b39dac4d4d8fb08e773ac82ec8ed6b2a8ac158ae7a393958"} Nov 21 11:19:13 crc kubenswrapper[4972]: I1121 11:19:13.336066 4972 generic.go:334] "Generic (PLEG): container finished" podID="f584b350-0eec-4040-9b24-7f6999cc8c39" containerID="71735afb9fc4aa2ad425a9067eab502ca6de39d119bbe4df9dc363218d4528e4" exitCode=0 Nov 21 11:19:13 crc kubenswrapper[4972]: I1121 11:19:13.336136 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnb4d" event={"ID":"f584b350-0eec-4040-9b24-7f6999cc8c39","Type":"ContainerDied","Data":"71735afb9fc4aa2ad425a9067eab502ca6de39d119bbe4df9dc363218d4528e4"} Nov 21 11:19:14 crc kubenswrapper[4972]: I1121 11:19:14.348890 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnb4d" event={"ID":"f584b350-0eec-4040-9b24-7f6999cc8c39","Type":"ContainerStarted","Data":"dcd9818ba6e60823ee5fc2054004481d237cc1142efab6cf54c9e61f3d04887c"} Nov 21 11:19:14 crc kubenswrapper[4972]: I1121 11:19:14.759498 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:19:14 crc kubenswrapper[4972]: E1121 11:19:14.760071 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:19:19 crc kubenswrapper[4972]: I1121 11:19:19.869008 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wnb4d" Nov 21 11:19:19 crc kubenswrapper[4972]: I1121 11:19:19.869726 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wnb4d" Nov 21 11:19:19 crc 
kubenswrapper[4972]: I1121 11:19:19.943502 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wnb4d" Nov 21 11:19:19 crc kubenswrapper[4972]: I1121 11:19:19.973268 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wnb4d" podStartSLOduration=8.525074436 podStartE2EDuration="10.973237075s" podCreationTimestamp="2025-11-21 11:19:09 +0000 UTC" firstStartedPulling="2025-11-21 11:19:11.314092825 +0000 UTC m=+5896.423235363" lastFinishedPulling="2025-11-21 11:19:13.762255514 +0000 UTC m=+5898.871398002" observedRunningTime="2025-11-21 11:19:14.38031872 +0000 UTC m=+5899.489461228" watchObservedRunningTime="2025-11-21 11:19:19.973237075 +0000 UTC m=+5905.082379613" Nov 21 11:19:20 crc kubenswrapper[4972]: I1121 11:19:20.507960 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wnb4d" Nov 21 11:19:20 crc kubenswrapper[4972]: I1121 11:19:20.580534 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wnb4d"] Nov 21 11:19:22 crc kubenswrapper[4972]: I1121 11:19:22.461675 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wnb4d" podUID="f584b350-0eec-4040-9b24-7f6999cc8c39" containerName="registry-server" containerID="cri-o://dcd9818ba6e60823ee5fc2054004481d237cc1142efab6cf54c9e61f3d04887c" gracePeriod=2 Nov 21 11:19:22 crc kubenswrapper[4972]: I1121 11:19:22.927898 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnb4d" Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.046017 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg5wd\" (UniqueName: \"kubernetes.io/projected/f584b350-0eec-4040-9b24-7f6999cc8c39-kube-api-access-lg5wd\") pod \"f584b350-0eec-4040-9b24-7f6999cc8c39\" (UID: \"f584b350-0eec-4040-9b24-7f6999cc8c39\") " Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.046086 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f584b350-0eec-4040-9b24-7f6999cc8c39-catalog-content\") pod \"f584b350-0eec-4040-9b24-7f6999cc8c39\" (UID: \"f584b350-0eec-4040-9b24-7f6999cc8c39\") " Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.046142 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f584b350-0eec-4040-9b24-7f6999cc8c39-utilities\") pod \"f584b350-0eec-4040-9b24-7f6999cc8c39\" (UID: \"f584b350-0eec-4040-9b24-7f6999cc8c39\") " Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.047616 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f584b350-0eec-4040-9b24-7f6999cc8c39-utilities" (OuterVolumeSpecName: "utilities") pod "f584b350-0eec-4040-9b24-7f6999cc8c39" (UID: "f584b350-0eec-4040-9b24-7f6999cc8c39"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.053537 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f584b350-0eec-4040-9b24-7f6999cc8c39-kube-api-access-lg5wd" (OuterVolumeSpecName: "kube-api-access-lg5wd") pod "f584b350-0eec-4040-9b24-7f6999cc8c39" (UID: "f584b350-0eec-4040-9b24-7f6999cc8c39"). InnerVolumeSpecName "kube-api-access-lg5wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.121606 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f584b350-0eec-4040-9b24-7f6999cc8c39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f584b350-0eec-4040-9b24-7f6999cc8c39" (UID: "f584b350-0eec-4040-9b24-7f6999cc8c39"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.148713 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lg5wd\" (UniqueName: \"kubernetes.io/projected/f584b350-0eec-4040-9b24-7f6999cc8c39-kube-api-access-lg5wd\") on node \"crc\" DevicePath \"\"" Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.148774 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f584b350-0eec-4040-9b24-7f6999cc8c39-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.148789 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f584b350-0eec-4040-9b24-7f6999cc8c39-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.477327 4972 generic.go:334] "Generic (PLEG): container finished" podID="f584b350-0eec-4040-9b24-7f6999cc8c39" containerID="dcd9818ba6e60823ee5fc2054004481d237cc1142efab6cf54c9e61f3d04887c" exitCode=0 Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.477416 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wnb4d" Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.477410 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnb4d" event={"ID":"f584b350-0eec-4040-9b24-7f6999cc8c39","Type":"ContainerDied","Data":"dcd9818ba6e60823ee5fc2054004481d237cc1142efab6cf54c9e61f3d04887c"} Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.477660 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnb4d" event={"ID":"f584b350-0eec-4040-9b24-7f6999cc8c39","Type":"ContainerDied","Data":"b96aee645589d656b39dac4d4d8fb08e773ac82ec8ed6b2a8ac158ae7a393958"} Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.477729 4972 scope.go:117] "RemoveContainer" containerID="dcd9818ba6e60823ee5fc2054004481d237cc1142efab6cf54c9e61f3d04887c" Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.514570 4972 scope.go:117] "RemoveContainer" containerID="71735afb9fc4aa2ad425a9067eab502ca6de39d119bbe4df9dc363218d4528e4" Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.551709 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wnb4d"] Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.564138 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wnb4d"] Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.566576 4972 scope.go:117] "RemoveContainer" containerID="4b2c6fe139adb9cb447f7c7b8cb97888d58df4c7cdce9b04e96dc95e961eb4bd" Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.619060 4972 scope.go:117] "RemoveContainer" containerID="dcd9818ba6e60823ee5fc2054004481d237cc1142efab6cf54c9e61f3d04887c" Nov 21 11:19:23 crc kubenswrapper[4972]: E1121 11:19:23.621816 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcd9818ba6e60823ee5fc2054004481d237cc1142efab6cf54c9e61f3d04887c\": container with ID starting with dcd9818ba6e60823ee5fc2054004481d237cc1142efab6cf54c9e61f3d04887c not found: ID does not exist" containerID="dcd9818ba6e60823ee5fc2054004481d237cc1142efab6cf54c9e61f3d04887c" Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.621907 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcd9818ba6e60823ee5fc2054004481d237cc1142efab6cf54c9e61f3d04887c"} err="failed to get container status \"dcd9818ba6e60823ee5fc2054004481d237cc1142efab6cf54c9e61f3d04887c\": rpc error: code = NotFound desc = could not find container \"dcd9818ba6e60823ee5fc2054004481d237cc1142efab6cf54c9e61f3d04887c\": container with ID starting with dcd9818ba6e60823ee5fc2054004481d237cc1142efab6cf54c9e61f3d04887c not found: ID does not exist" Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.621950 4972 scope.go:117] "RemoveContainer" containerID="71735afb9fc4aa2ad425a9067eab502ca6de39d119bbe4df9dc363218d4528e4" Nov 21 11:19:23 crc kubenswrapper[4972]: E1121 11:19:23.623127 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71735afb9fc4aa2ad425a9067eab502ca6de39d119bbe4df9dc363218d4528e4\": container with ID starting with 71735afb9fc4aa2ad425a9067eab502ca6de39d119bbe4df9dc363218d4528e4 not found: ID does not exist" containerID="71735afb9fc4aa2ad425a9067eab502ca6de39d119bbe4df9dc363218d4528e4" Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.623169 4972 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71735afb9fc4aa2ad425a9067eab502ca6de39d119bbe4df9dc363218d4528e4"} err="failed to get container status \"71735afb9fc4aa2ad425a9067eab502ca6de39d119bbe4df9dc363218d4528e4\": rpc error: code = NotFound desc = could not find container \"71735afb9fc4aa2ad425a9067eab502ca6de39d119bbe4df9dc363218d4528e4\": container with ID starting with 71735afb9fc4aa2ad425a9067eab502ca6de39d119bbe4df9dc363218d4528e4 not found: ID does not exist" Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.623203 4972 scope.go:117] "RemoveContainer" containerID="4b2c6fe139adb9cb447f7c7b8cb97888d58df4c7cdce9b04e96dc95e961eb4bd" Nov 21 11:19:23 crc kubenswrapper[4972]: E1121 11:19:23.623655 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b2c6fe139adb9cb447f7c7b8cb97888d58df4c7cdce9b04e96dc95e961eb4bd\": container with ID starting with 4b2c6fe139adb9cb447f7c7b8cb97888d58df4c7cdce9b04e96dc95e961eb4bd not found: ID does not exist" containerID="4b2c6fe139adb9cb447f7c7b8cb97888d58df4c7cdce9b04e96dc95e961eb4bd" Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.623717 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b2c6fe139adb9cb447f7c7b8cb97888d58df4c7cdce9b04e96dc95e961eb4bd"} err="failed to get container status \"4b2c6fe139adb9cb447f7c7b8cb97888d58df4c7cdce9b04e96dc95e961eb4bd\": rpc error: code = NotFound desc = could not find container \"4b2c6fe139adb9cb447f7c7b8cb97888d58df4c7cdce9b04e96dc95e961eb4bd\": container with ID starting with 4b2c6fe139adb9cb447f7c7b8cb97888d58df4c7cdce9b04e96dc95e961eb4bd not found: ID does not exist" Nov 21 11:19:23 crc kubenswrapper[4972]: I1121 11:19:23.773652 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f584b350-0eec-4040-9b24-7f6999cc8c39" path="/var/lib/kubelet/pods/f584b350-0eec-4040-9b24-7f6999cc8c39/volumes" Nov 21 11:19:26 crc kubenswrapper[4972]: I1121 11:19:26.760358 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:19:26 crc kubenswrapper[4972]: E1121 11:19:26.761662 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:19:38 crc kubenswrapper[4972]: I1121 11:19:38.759940 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:19:38 crc kubenswrapper[4972]: E1121 11:19:38.761019 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:19:51 crc kubenswrapper[4972]: I1121 11:19:51.761139 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:19:51 crc 
kubenswrapper[4972]: E1121 11:19:51.762175 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.172785 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-29k84"] Nov 21 11:20:02 crc kubenswrapper[4972]: E1121 11:20:02.174888 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f584b350-0eec-4040-9b24-7f6999cc8c39" containerName="registry-server" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.174979 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f584b350-0eec-4040-9b24-7f6999cc8c39" containerName="registry-server" Nov 21 11:20:02 crc kubenswrapper[4972]: E1121 11:20:02.175046 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f584b350-0eec-4040-9b24-7f6999cc8c39" containerName="extract-content" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.175102 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f584b350-0eec-4040-9b24-7f6999cc8c39" containerName="extract-content" Nov 21 11:20:02 crc kubenswrapper[4972]: E1121 11:20:02.175170 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f584b350-0eec-4040-9b24-7f6999cc8c39" containerName="extract-utilities" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.175226 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f584b350-0eec-4040-9b24-7f6999cc8c39" containerName="extract-utilities" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.175463 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f584b350-0eec-4040-9b24-7f6999cc8c39" containerName="registry-server" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.176944 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.180717 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-w5z2s" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.181159 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.198471 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-pxnhj"] Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.200073 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pxnhj" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.210381 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-298fw\" (UniqueName: \"kubernetes.io/projected/19a45089-70ad-464f-88c1-2ede6d0e1265-kube-api-access-298fw\") pod \"ovn-controller-ovs-29k84\" (UID: \"19a45089-70ad-464f-88c1-2ede6d0e1265\") " pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.210418 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/19a45089-70ad-464f-88c1-2ede6d0e1265-var-lib\") pod \"ovn-controller-ovs-29k84\" (UID: \"19a45089-70ad-464f-88c1-2ede6d0e1265\") " pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.210435 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7e864747-789f-4b61-88e7-24523b70fe33-var-log-ovn\") pod \"ovn-controller-pxnhj\" (UID: \"7e864747-789f-4b61-88e7-24523b70fe33\") " pod="openstack/ovn-controller-pxnhj" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.210479 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/19a45089-70ad-464f-88c1-2ede6d0e1265-etc-ovs\") pod \"ovn-controller-ovs-29k84\" (UID: \"19a45089-70ad-464f-88c1-2ede6d0e1265\") " pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.210507 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/19a45089-70ad-464f-88c1-2ede6d0e1265-var-log\") pod \"ovn-controller-ovs-29k84\" (UID: \"19a45089-70ad-464f-88c1-2ede6d0e1265\") " pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.210552 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7e864747-789f-4b61-88e7-24523b70fe33-var-run-ovn\") pod \"ovn-controller-pxnhj\" (UID: \"7e864747-789f-4b61-88e7-24523b70fe33\") " pod="openstack/ovn-controller-pxnhj" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.210582 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7e864747-789f-4b61-88e7-24523b70fe33-var-run\") pod \"ovn-controller-pxnhj\" (UID: \"7e864747-789f-4b61-88e7-24523b70fe33\") " pod="openstack/ovn-controller-pxnhj" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.210594 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19a45089-70ad-464f-88c1-2ede6d0e1265-scripts\") pod \"ovn-controller-ovs-29k84\" (UID: \"19a45089-70ad-464f-88c1-2ede6d0e1265\") " pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.210627 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19a45089-70ad-464f-88c1-2ede6d0e1265-var-run\") pod \"ovn-controller-ovs-29k84\" (UID: \"19a45089-70ad-464f-88c1-2ede6d0e1265\") " 
pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.210647 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e864747-789f-4b61-88e7-24523b70fe33-scripts\") pod \"ovn-controller-pxnhj\" (UID: \"7e864747-789f-4b61-88e7-24523b70fe33\") " pod="openstack/ovn-controller-pxnhj" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.210670 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58555\" (UniqueName: \"kubernetes.io/projected/7e864747-789f-4b61-88e7-24523b70fe33-kube-api-access-58555\") pod \"ovn-controller-pxnhj\" (UID: \"7e864747-789f-4b61-88e7-24523b70fe33\") " pod="openstack/ovn-controller-pxnhj" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.218802 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pxnhj"] Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.232151 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-29k84"] Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.312399 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-298fw\" (UniqueName: \"kubernetes.io/projected/19a45089-70ad-464f-88c1-2ede6d0e1265-kube-api-access-298fw\") pod \"ovn-controller-ovs-29k84\" (UID: \"19a45089-70ad-464f-88c1-2ede6d0e1265\") " pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.312456 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/19a45089-70ad-464f-88c1-2ede6d0e1265-var-lib\") pod \"ovn-controller-ovs-29k84\" (UID: \"19a45089-70ad-464f-88c1-2ede6d0e1265\") " pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.312490 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7e864747-789f-4b61-88e7-24523b70fe33-var-log-ovn\") pod \"ovn-controller-pxnhj\" (UID: \"7e864747-789f-4b61-88e7-24523b70fe33\") " pod="openstack/ovn-controller-pxnhj" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.312531 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/19a45089-70ad-464f-88c1-2ede6d0e1265-etc-ovs\") pod \"ovn-controller-ovs-29k84\" (UID: \"19a45089-70ad-464f-88c1-2ede6d0e1265\") " pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.312573 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/19a45089-70ad-464f-88c1-2ede6d0e1265-var-log\") pod \"ovn-controller-ovs-29k84\" (UID: \"19a45089-70ad-464f-88c1-2ede6d0e1265\") " pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.312652 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7e864747-789f-4b61-88e7-24523b70fe33-var-run-ovn\") pod \"ovn-controller-pxnhj\" (UID: \"7e864747-789f-4b61-88e7-24523b70fe33\") " pod="openstack/ovn-controller-pxnhj" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.312700 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/7e864747-789f-4b61-88e7-24523b70fe33-var-run\") pod \"ovn-controller-pxnhj\" (UID: \"7e864747-789f-4b61-88e7-24523b70fe33\") " pod="openstack/ovn-controller-pxnhj" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.312720 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19a45089-70ad-464f-88c1-2ede6d0e1265-scripts\") pod \"ovn-controller-ovs-29k84\" (UID: \"19a45089-70ad-464f-88c1-2ede6d0e1265\") " pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.312765 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19a45089-70ad-464f-88c1-2ede6d0e1265-var-run\") pod \"ovn-controller-ovs-29k84\" (UID: \"19a45089-70ad-464f-88c1-2ede6d0e1265\") " pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.312793 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e864747-789f-4b61-88e7-24523b70fe33-scripts\") pod \"ovn-controller-pxnhj\" (UID: \"7e864747-789f-4b61-88e7-24523b70fe33\") " pod="openstack/ovn-controller-pxnhj" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.312823 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58555\" (UniqueName: \"kubernetes.io/projected/7e864747-789f-4b61-88e7-24523b70fe33-kube-api-access-58555\") pod \"ovn-controller-pxnhj\" (UID: \"7e864747-789f-4b61-88e7-24523b70fe33\") " pod="openstack/ovn-controller-pxnhj" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.313170 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/19a45089-70ad-464f-88c1-2ede6d0e1265-etc-ovs\") pod \"ovn-controller-ovs-29k84\" (UID: \"19a45089-70ad-464f-88c1-2ede6d0e1265\") " pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.313171 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/19a45089-70ad-464f-88c1-2ede6d0e1265-var-lib\") pod \"ovn-controller-ovs-29k84\" (UID: \"19a45089-70ad-464f-88c1-2ede6d0e1265\") " pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.313218 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7e864747-789f-4b61-88e7-24523b70fe33-var-run-ovn\") pod \"ovn-controller-pxnhj\" (UID: \"7e864747-789f-4b61-88e7-24523b70fe33\") " pod="openstack/ovn-controller-pxnhj" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.313228 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7e864747-789f-4b61-88e7-24523b70fe33-var-run\") pod \"ovn-controller-pxnhj\" (UID: \"7e864747-789f-4b61-88e7-24523b70fe33\") " pod="openstack/ovn-controller-pxnhj" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.313176 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7e864747-789f-4b61-88e7-24523b70fe33-var-log-ovn\") pod \"ovn-controller-pxnhj\" (UID: \"7e864747-789f-4b61-88e7-24523b70fe33\") " pod="openstack/ovn-controller-pxnhj" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.313231 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19a45089-70ad-464f-88c1-2ede6d0e1265-var-run\") pod \"ovn-controller-ovs-29k84\" (UID: \"19a45089-70ad-464f-88c1-2ede6d0e1265\") " pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.313178 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/19a45089-70ad-464f-88c1-2ede6d0e1265-var-log\") pod \"ovn-controller-ovs-29k84\" (UID: \"19a45089-70ad-464f-88c1-2ede6d0e1265\") " pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.315709 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e864747-789f-4b61-88e7-24523b70fe33-scripts\") pod \"ovn-controller-pxnhj\" (UID: \"7e864747-789f-4b61-88e7-24523b70fe33\") " pod="openstack/ovn-controller-pxnhj" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.316079 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19a45089-70ad-464f-88c1-2ede6d0e1265-scripts\") pod \"ovn-controller-ovs-29k84\" (UID: \"19a45089-70ad-464f-88c1-2ede6d0e1265\") " pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.331745 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58555\" (UniqueName: \"kubernetes.io/projected/7e864747-789f-4b61-88e7-24523b70fe33-kube-api-access-58555\") pod \"ovn-controller-pxnhj\" (UID: \"7e864747-789f-4b61-88e7-24523b70fe33\") " pod="openstack/ovn-controller-pxnhj" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.336578 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-298fw\" (UniqueName: \"kubernetes.io/projected/19a45089-70ad-464f-88c1-2ede6d0e1265-kube-api-access-298fw\") pod \"ovn-controller-ovs-29k84\" (UID: \"19a45089-70ad-464f-88c1-2ede6d0e1265\") " pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.504748 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:02 crc kubenswrapper[4972]: I1121 11:20:02.517424 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pxnhj" Nov 21 11:20:03 crc kubenswrapper[4972]: I1121 11:20:03.018935 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pxnhj"] Nov 21 11:20:03 crc kubenswrapper[4972]: I1121 11:20:03.393971 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-29k84"] Nov 21 11:20:03 crc kubenswrapper[4972]: W1121 11:20:03.409145 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19a45089_70ad_464f_88c1_2ede6d0e1265.slice/crio-abf7f9972039e5c7d1cefb8cd47ba103cc897709b2206c6e392b8a2a42b4a825 WatchSource:0}: Error finding container abf7f9972039e5c7d1cefb8cd47ba103cc897709b2206c6e392b8a2a42b4a825: Status 404 returned error can't find the container with id abf7f9972039e5c7d1cefb8cd47ba103cc897709b2206c6e392b8a2a42b4a825 Nov 21 11:20:03 crc kubenswrapper[4972]: I1121 11:20:03.625272 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-bvbth"] Nov 21 11:20:03 crc kubenswrapper[4972]: I1121 11:20:03.626935 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-bvbth" Nov 21 11:20:03 crc kubenswrapper[4972]: I1121 11:20:03.630675 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 21 11:20:03 crc kubenswrapper[4972]: I1121 11:20:03.644348 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-bvbth"] Nov 21 11:20:03 crc kubenswrapper[4972]: I1121 11:20:03.737729 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6rhj\" (UniqueName: \"kubernetes.io/projected/e1cf5e01-48da-4c9e-a4e5-937bea851491-kube-api-access-z6rhj\") pod \"ovn-controller-metrics-bvbth\" (UID: \"e1cf5e01-48da-4c9e-a4e5-937bea851491\") " pod="openstack/ovn-controller-metrics-bvbth" Nov 21 11:20:03 crc kubenswrapper[4972]: I1121 11:20:03.737908 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e1cf5e01-48da-4c9e-a4e5-937bea851491-ovs-rundir\") pod \"ovn-controller-metrics-bvbth\" (UID: \"e1cf5e01-48da-4c9e-a4e5-937bea851491\") " pod="openstack/ovn-controller-metrics-bvbth" Nov 21 11:20:03 crc kubenswrapper[4972]: I1121 11:20:03.737992 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1cf5e01-48da-4c9e-a4e5-937bea851491-config\") pod \"ovn-controller-metrics-bvbth\" (UID: \"e1cf5e01-48da-4c9e-a4e5-937bea851491\") " pod="openstack/ovn-controller-metrics-bvbth" Nov 21 11:20:03 crc kubenswrapper[4972]: I1121 11:20:03.738111 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e1cf5e01-48da-4c9e-a4e5-937bea851491-ovn-rundir\") pod \"ovn-controller-metrics-bvbth\" (UID: \"e1cf5e01-48da-4c9e-a4e5-937bea851491\") " pod="openstack/ovn-controller-metrics-bvbth" Nov 21 11:20:03 crc kubenswrapper[4972]: I1121 11:20:03.839548 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6rhj\" (UniqueName: \"kubernetes.io/projected/e1cf5e01-48da-4c9e-a4e5-937bea851491-kube-api-access-z6rhj\") pod \"ovn-controller-metrics-bvbth\" (UID: 
\"e1cf5e01-48da-4c9e-a4e5-937bea851491\") " pod="openstack/ovn-controller-metrics-bvbth" Nov 21 11:20:03 crc kubenswrapper[4972]: I1121 11:20:03.845699 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e1cf5e01-48da-4c9e-a4e5-937bea851491-ovs-rundir\") pod \"ovn-controller-metrics-bvbth\" (UID: \"e1cf5e01-48da-4c9e-a4e5-937bea851491\") " pod="openstack/ovn-controller-metrics-bvbth" Nov 21 11:20:03 crc kubenswrapper[4972]: I1121 11:20:03.846011 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1cf5e01-48da-4c9e-a4e5-937bea851491-config\") pod \"ovn-controller-metrics-bvbth\" (UID: \"e1cf5e01-48da-4c9e-a4e5-937bea851491\") " pod="openstack/ovn-controller-metrics-bvbth" Nov 21 11:20:03 crc kubenswrapper[4972]: I1121 11:20:03.846215 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e1cf5e01-48da-4c9e-a4e5-937bea851491-ovn-rundir\") pod \"ovn-controller-metrics-bvbth\" (UID: \"e1cf5e01-48da-4c9e-a4e5-937bea851491\") " pod="openstack/ovn-controller-metrics-bvbth" Nov 21 11:20:03 crc kubenswrapper[4972]: I1121 11:20:03.846643 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e1cf5e01-48da-4c9e-a4e5-937bea851491-ovn-rundir\") pod \"ovn-controller-metrics-bvbth\" (UID: \"e1cf5e01-48da-4c9e-a4e5-937bea851491\") " pod="openstack/ovn-controller-metrics-bvbth" Nov 21 11:20:03 crc kubenswrapper[4972]: I1121 11:20:03.847229 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e1cf5e01-48da-4c9e-a4e5-937bea851491-ovs-rundir\") pod \"ovn-controller-metrics-bvbth\" (UID: \"e1cf5e01-48da-4c9e-a4e5-937bea851491\") " pod="openstack/ovn-controller-metrics-bvbth" Nov 21 11:20:03 crc kubenswrapper[4972]: I1121 11:20:03.847979 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1cf5e01-48da-4c9e-a4e5-937bea851491-config\") pod \"ovn-controller-metrics-bvbth\" (UID: \"e1cf5e01-48da-4c9e-a4e5-937bea851491\") " pod="openstack/ovn-controller-metrics-bvbth" Nov 21 11:20:03 crc kubenswrapper[4972]: I1121 11:20:03.861662 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6rhj\" (UniqueName: \"kubernetes.io/projected/e1cf5e01-48da-4c9e-a4e5-937bea851491-kube-api-access-z6rhj\") pod \"ovn-controller-metrics-bvbth\" (UID: \"e1cf5e01-48da-4c9e-a4e5-937bea851491\") " pod="openstack/ovn-controller-metrics-bvbth" Nov 21 11:20:04 crc kubenswrapper[4972]: I1121 11:20:04.003096 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pxnhj" event={"ID":"7e864747-789f-4b61-88e7-24523b70fe33","Type":"ContainerStarted","Data":"88fb477cd0aceab1932c420738c79d0eae0f730868b94d595ee060fed1a4ab19"} Nov 21 11:20:04 crc kubenswrapper[4972]: I1121 11:20:04.003141 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pxnhj" event={"ID":"7e864747-789f-4b61-88e7-24523b70fe33","Type":"ContainerStarted","Data":"648a42f66463a11d555639e2778d1cad693a11a24dbc76949c6b76a38241eda1"} Nov 21 11:20:04 crc kubenswrapper[4972]: I1121 11:20:04.004244 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-pxnhj" Nov 21 11:20:04 crc kubenswrapper[4972]: 
I1121 11:20:04.005889 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-29k84" event={"ID":"19a45089-70ad-464f-88c1-2ede6d0e1265","Type":"ContainerStarted","Data":"4ad39597bbe569bf05550de0e076c316b18dac6a413eb18bf03f14d84ce13f67"} Nov 21 11:20:04 crc kubenswrapper[4972]: I1121 11:20:04.005946 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-29k84" event={"ID":"19a45089-70ad-464f-88c1-2ede6d0e1265","Type":"ContainerStarted","Data":"abf7f9972039e5c7d1cefb8cd47ba103cc897709b2206c6e392b8a2a42b4a825"} Nov 21 11:20:04 crc kubenswrapper[4972]: I1121 11:20:04.012315 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-bvbth" Nov 21 11:20:04 crc kubenswrapper[4972]: I1121 11:20:04.025804 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-pxnhj" podStartSLOduration=2.025781543 podStartE2EDuration="2.025781543s" podCreationTimestamp="2025-11-21 11:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:20:04.017403311 +0000 UTC m=+5949.126545809" watchObservedRunningTime="2025-11-21 11:20:04.025781543 +0000 UTC m=+5949.134924041" Nov 21 11:20:04 crc kubenswrapper[4972]: I1121 11:20:04.363603 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-b5hrg"] Nov 21 11:20:04 crc kubenswrapper[4972]: I1121 11:20:04.365807 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-b5hrg" Nov 21 11:20:04 crc kubenswrapper[4972]: I1121 11:20:04.384995 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-b5hrg"] Nov 21 11:20:04 crc kubenswrapper[4972]: I1121 11:20:04.457991 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf5vj\" (UniqueName: \"kubernetes.io/projected/2f7ab426-aeb2-4659-8bd9-d9322fa3e63a-kube-api-access-mf5vj\") pod \"octavia-db-create-b5hrg\" (UID: \"2f7ab426-aeb2-4659-8bd9-d9322fa3e63a\") " pod="openstack/octavia-db-create-b5hrg" Nov 21 11:20:04 crc kubenswrapper[4972]: I1121 11:20:04.458357 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f7ab426-aeb2-4659-8bd9-d9322fa3e63a-operator-scripts\") pod \"octavia-db-create-b5hrg\" (UID: \"2f7ab426-aeb2-4659-8bd9-d9322fa3e63a\") " pod="openstack/octavia-db-create-b5hrg" Nov 21 11:20:04 crc kubenswrapper[4972]: I1121 11:20:04.539648 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-bvbth"] Nov 21 11:20:04 crc kubenswrapper[4972]: I1121 11:20:04.560110 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f7ab426-aeb2-4659-8bd9-d9322fa3e63a-operator-scripts\") pod \"octavia-db-create-b5hrg\" (UID: \"2f7ab426-aeb2-4659-8bd9-d9322fa3e63a\") " pod="openstack/octavia-db-create-b5hrg" Nov 21 11:20:04 crc kubenswrapper[4972]: I1121 11:20:04.560224 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf5vj\" (UniqueName: \"kubernetes.io/projected/2f7ab426-aeb2-4659-8bd9-d9322fa3e63a-kube-api-access-mf5vj\") pod \"octavia-db-create-b5hrg\" (UID: \"2f7ab426-aeb2-4659-8bd9-d9322fa3e63a\") " 
pod="openstack/octavia-db-create-b5hrg" Nov 21 11:20:04 crc kubenswrapper[4972]: I1121 11:20:04.560977 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f7ab426-aeb2-4659-8bd9-d9322fa3e63a-operator-scripts\") pod \"octavia-db-create-b5hrg\" (UID: \"2f7ab426-aeb2-4659-8bd9-d9322fa3e63a\") " pod="openstack/octavia-db-create-b5hrg" Nov 21 11:20:04 crc kubenswrapper[4972]: I1121 11:20:04.581963 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf5vj\" (UniqueName: \"kubernetes.io/projected/2f7ab426-aeb2-4659-8bd9-d9322fa3e63a-kube-api-access-mf5vj\") pod \"octavia-db-create-b5hrg\" (UID: \"2f7ab426-aeb2-4659-8bd9-d9322fa3e63a\") " pod="openstack/octavia-db-create-b5hrg" Nov 21 11:20:04 crc kubenswrapper[4972]: I1121 11:20:04.699406 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-b5hrg" Nov 21 11:20:04 crc kubenswrapper[4972]: I1121 11:20:04.760305 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:20:04 crc kubenswrapper[4972]: E1121 11:20:04.760656 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:20:05 crc kubenswrapper[4972]: I1121 11:20:05.016699 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-bvbth" event={"ID":"e1cf5e01-48da-4c9e-a4e5-937bea851491","Type":"ContainerStarted","Data":"f232bf405e2f275a0dc1bbe0f45dc8e765316e432cd5aa92db3c5c3700a3baa7"} Nov 21 11:20:05 crc kubenswrapper[4972]: I1121 11:20:05.017103 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-bvbth" event={"ID":"e1cf5e01-48da-4c9e-a4e5-937bea851491","Type":"ContainerStarted","Data":"db64942522ea4fa55c3ea570c7a42d355545d9fdce9bbd883352b8e4722ed2d2"} Nov 21 11:20:05 crc kubenswrapper[4972]: I1121 11:20:05.018600 4972 generic.go:334] "Generic (PLEG): container finished" podID="19a45089-70ad-464f-88c1-2ede6d0e1265" containerID="4ad39597bbe569bf05550de0e076c316b18dac6a413eb18bf03f14d84ce13f67" exitCode=0 Nov 21 11:20:05 crc kubenswrapper[4972]: I1121 11:20:05.018672 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-29k84" event={"ID":"19a45089-70ad-464f-88c1-2ede6d0e1265","Type":"ContainerDied","Data":"4ad39597bbe569bf05550de0e076c316b18dac6a413eb18bf03f14d84ce13f67"} Nov 21 11:20:05 crc kubenswrapper[4972]: I1121 11:20:05.040991 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-bvbth" podStartSLOduration=2.040970447 podStartE2EDuration="2.040970447s" podCreationTimestamp="2025-11-21 11:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:20:05.030274843 +0000 UTC m=+5950.139417361" watchObservedRunningTime="2025-11-21 11:20:05.040970447 +0000 UTC m=+5950.150112945" Nov 21 11:20:05 crc kubenswrapper[4972]: I1121 11:20:05.152983 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/octavia-db-create-b5hrg"] Nov 21 11:20:05 crc kubenswrapper[4972]: I1121 11:20:05.607849 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-9cd6-account-create-8fgxq"] Nov 21 11:20:05 crc kubenswrapper[4972]: I1121 11:20:05.609404 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-9cd6-account-create-8fgxq" Nov 21 11:20:05 crc kubenswrapper[4972]: I1121 11:20:05.617036 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Nov 21 11:20:05 crc kubenswrapper[4972]: I1121 11:20:05.632164 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-9cd6-account-create-8fgxq"] Nov 21 11:20:05 crc kubenswrapper[4972]: I1121 11:20:05.683990 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efe47932-69ba-4784-9884-0938691eeef0-operator-scripts\") pod \"octavia-9cd6-account-create-8fgxq\" (UID: \"efe47932-69ba-4784-9884-0938691eeef0\") " pod="openstack/octavia-9cd6-account-create-8fgxq" Nov 21 11:20:05 crc kubenswrapper[4972]: I1121 11:20:05.684096 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pq55\" (UniqueName: \"kubernetes.io/projected/efe47932-69ba-4784-9884-0938691eeef0-kube-api-access-6pq55\") pod \"octavia-9cd6-account-create-8fgxq\" (UID: \"efe47932-69ba-4784-9884-0938691eeef0\") " pod="openstack/octavia-9cd6-account-create-8fgxq" Nov 21 11:20:05 crc kubenswrapper[4972]: I1121 11:20:05.786546 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efe47932-69ba-4784-9884-0938691eeef0-operator-scripts\") pod \"octavia-9cd6-account-create-8fgxq\" (UID: \"efe47932-69ba-4784-9884-0938691eeef0\") " pod="openstack/octavia-9cd6-account-create-8fgxq" Nov 21 11:20:05 crc kubenswrapper[4972]: I1121 11:20:05.786625 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pq55\" (UniqueName: \"kubernetes.io/projected/efe47932-69ba-4784-9884-0938691eeef0-kube-api-access-6pq55\") pod \"octavia-9cd6-account-create-8fgxq\" (UID: \"efe47932-69ba-4784-9884-0938691eeef0\") " pod="openstack/octavia-9cd6-account-create-8fgxq" Nov 21 11:20:05 crc kubenswrapper[4972]: I1121 11:20:05.787587 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efe47932-69ba-4784-9884-0938691eeef0-operator-scripts\") pod \"octavia-9cd6-account-create-8fgxq\" (UID: \"efe47932-69ba-4784-9884-0938691eeef0\") " pod="openstack/octavia-9cd6-account-create-8fgxq" Nov 21 11:20:05 crc kubenswrapper[4972]: I1121 11:20:05.804755 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pq55\" (UniqueName: \"kubernetes.io/projected/efe47932-69ba-4784-9884-0938691eeef0-kube-api-access-6pq55\") pod \"octavia-9cd6-account-create-8fgxq\" (UID: \"efe47932-69ba-4784-9884-0938691eeef0\") " pod="openstack/octavia-9cd6-account-create-8fgxq" Nov 21 11:20:05 crc kubenswrapper[4972]: I1121 11:20:05.968404 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-9cd6-account-create-8fgxq" Nov 21 11:20:06 crc kubenswrapper[4972]: I1121 11:20:06.035677 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-29k84" event={"ID":"19a45089-70ad-464f-88c1-2ede6d0e1265","Type":"ContainerStarted","Data":"89efd8f6f5cc0f431bc5e2656e420bbc31df092bc85fab2e6bfa5e056fd08bbf"} Nov 21 11:20:06 crc kubenswrapper[4972]: I1121 11:20:06.035986 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-29k84" event={"ID":"19a45089-70ad-464f-88c1-2ede6d0e1265","Type":"ContainerStarted","Data":"90fc585cf3128951ec5257f935021b3196c7e11abfee4eacc273c41d2e8ea980"} Nov 21 11:20:06 crc kubenswrapper[4972]: I1121 11:20:06.036357 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:06 crc kubenswrapper[4972]: I1121 11:20:06.036389 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:06 crc kubenswrapper[4972]: I1121 11:20:06.040479 4972 generic.go:334] "Generic (PLEG): container finished" podID="2f7ab426-aeb2-4659-8bd9-d9322fa3e63a" containerID="fa006d2ceb4a459f0e2726f6cb495056ad7c8ed0e2f3baf5e28478a885d27508" exitCode=0 Nov 21 11:20:06 crc kubenswrapper[4972]: I1121 11:20:06.041449 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-b5hrg" event={"ID":"2f7ab426-aeb2-4659-8bd9-d9322fa3e63a","Type":"ContainerDied","Data":"fa006d2ceb4a459f0e2726f6cb495056ad7c8ed0e2f3baf5e28478a885d27508"} Nov 21 11:20:06 crc kubenswrapper[4972]: I1121 11:20:06.041479 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-b5hrg" event={"ID":"2f7ab426-aeb2-4659-8bd9-d9322fa3e63a","Type":"ContainerStarted","Data":"798b623545c7a54a476c7bacdc247b263d2dfec3d25be14614b1f2ac9d476da5"} Nov 21 11:20:06 crc kubenswrapper[4972]: I1121 11:20:06.065555 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-29k84" podStartSLOduration=4.065534408 podStartE2EDuration="4.065534408s" podCreationTimestamp="2025-11-21 11:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:20:06.052369489 +0000 UTC m=+5951.161511987" watchObservedRunningTime="2025-11-21 11:20:06.065534408 +0000 UTC m=+5951.174676926" Nov 21 11:20:06 crc kubenswrapper[4972]: W1121 11:20:06.470984 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefe47932_69ba_4784_9884_0938691eeef0.slice/crio-7c4f10aab6f07358fad904d161bd045c5ec6d9d21fad2ff2271378d18ab1cb6f WatchSource:0}: Error finding container 7c4f10aab6f07358fad904d161bd045c5ec6d9d21fad2ff2271378d18ab1cb6f: Status 404 returned error can't find the container with id 7c4f10aab6f07358fad904d161bd045c5ec6d9d21fad2ff2271378d18ab1cb6f Nov 21 11:20:06 crc kubenswrapper[4972]: I1121 11:20:06.473126 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-9cd6-account-create-8fgxq"] Nov 21 11:20:07 crc kubenswrapper[4972]: I1121 11:20:07.054444 4972 generic.go:334] "Generic (PLEG): container finished" podID="efe47932-69ba-4784-9884-0938691eeef0" containerID="e2ecbdfbb326d59324bb1e229a997b9bab7a7cd85bfe5a820fa0449f8774d782" exitCode=0 Nov 21 11:20:07 crc kubenswrapper[4972]: I1121 11:20:07.054520 4972 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/octavia-9cd6-account-create-8fgxq" event={"ID":"efe47932-69ba-4784-9884-0938691eeef0","Type":"ContainerDied","Data":"e2ecbdfbb326d59324bb1e229a997b9bab7a7cd85bfe5a820fa0449f8774d782"} Nov 21 11:20:07 crc kubenswrapper[4972]: I1121 11:20:07.054970 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-9cd6-account-create-8fgxq" event={"ID":"efe47932-69ba-4784-9884-0938691eeef0","Type":"ContainerStarted","Data":"7c4f10aab6f07358fad904d161bd045c5ec6d9d21fad2ff2271378d18ab1cb6f"} Nov 21 11:20:07 crc kubenswrapper[4972]: I1121 11:20:07.469160 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-b5hrg" Nov 21 11:20:07 crc kubenswrapper[4972]: I1121 11:20:07.625605 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf5vj\" (UniqueName: \"kubernetes.io/projected/2f7ab426-aeb2-4659-8bd9-d9322fa3e63a-kube-api-access-mf5vj\") pod \"2f7ab426-aeb2-4659-8bd9-d9322fa3e63a\" (UID: \"2f7ab426-aeb2-4659-8bd9-d9322fa3e63a\") " Nov 21 11:20:07 crc kubenswrapper[4972]: I1121 11:20:07.627158 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f7ab426-aeb2-4659-8bd9-d9322fa3e63a-operator-scripts\") pod \"2f7ab426-aeb2-4659-8bd9-d9322fa3e63a\" (UID: \"2f7ab426-aeb2-4659-8bd9-d9322fa3e63a\") " Nov 21 11:20:07 crc kubenswrapper[4972]: I1121 11:20:07.630486 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f7ab426-aeb2-4659-8bd9-d9322fa3e63a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f7ab426-aeb2-4659-8bd9-d9322fa3e63a" (UID: "2f7ab426-aeb2-4659-8bd9-d9322fa3e63a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:20:07 crc kubenswrapper[4972]: I1121 11:20:07.649921 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f7ab426-aeb2-4659-8bd9-d9322fa3e63a-kube-api-access-mf5vj" (OuterVolumeSpecName: "kube-api-access-mf5vj") pod "2f7ab426-aeb2-4659-8bd9-d9322fa3e63a" (UID: "2f7ab426-aeb2-4659-8bd9-d9322fa3e63a"). InnerVolumeSpecName "kube-api-access-mf5vj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:20:07 crc kubenswrapper[4972]: I1121 11:20:07.731289 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf5vj\" (UniqueName: \"kubernetes.io/projected/2f7ab426-aeb2-4659-8bd9-d9322fa3e63a-kube-api-access-mf5vj\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:07 crc kubenswrapper[4972]: I1121 11:20:07.731664 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f7ab426-aeb2-4659-8bd9-d9322fa3e63a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:08 crc kubenswrapper[4972]: I1121 11:20:08.066308 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-b5hrg" event={"ID":"2f7ab426-aeb2-4659-8bd9-d9322fa3e63a","Type":"ContainerDied","Data":"798b623545c7a54a476c7bacdc247b263d2dfec3d25be14614b1f2ac9d476da5"} Nov 21 11:20:08 crc kubenswrapper[4972]: I1121 11:20:08.066369 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="798b623545c7a54a476c7bacdc247b263d2dfec3d25be14614b1f2ac9d476da5" Nov 21 11:20:08 crc kubenswrapper[4972]: I1121 11:20:08.066320 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-b5hrg" Nov 21 11:20:08 crc kubenswrapper[4972]: I1121 11:20:08.593318 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-9cd6-account-create-8fgxq" Nov 21 11:20:08 crc kubenswrapper[4972]: I1121 11:20:08.751858 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efe47932-69ba-4784-9884-0938691eeef0-operator-scripts\") pod \"efe47932-69ba-4784-9884-0938691eeef0\" (UID: \"efe47932-69ba-4784-9884-0938691eeef0\") " Nov 21 11:20:08 crc kubenswrapper[4972]: I1121 11:20:08.751946 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pq55\" (UniqueName: \"kubernetes.io/projected/efe47932-69ba-4784-9884-0938691eeef0-kube-api-access-6pq55\") pod \"efe47932-69ba-4784-9884-0938691eeef0\" (UID: \"efe47932-69ba-4784-9884-0938691eeef0\") " Nov 21 11:20:08 crc kubenswrapper[4972]: I1121 11:20:08.752816 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efe47932-69ba-4784-9884-0938691eeef0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "efe47932-69ba-4784-9884-0938691eeef0" (UID: "efe47932-69ba-4784-9884-0938691eeef0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:20:08 crc kubenswrapper[4972]: I1121 11:20:08.758018 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efe47932-69ba-4784-9884-0938691eeef0-kube-api-access-6pq55" (OuterVolumeSpecName: "kube-api-access-6pq55") pod "efe47932-69ba-4784-9884-0938691eeef0" (UID: "efe47932-69ba-4784-9884-0938691eeef0"). InnerVolumeSpecName "kube-api-access-6pq55". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:20:08 crc kubenswrapper[4972]: I1121 11:20:08.855085 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efe47932-69ba-4784-9884-0938691eeef0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:08 crc kubenswrapper[4972]: I1121 11:20:08.855159 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pq55\" (UniqueName: \"kubernetes.io/projected/efe47932-69ba-4784-9884-0938691eeef0-kube-api-access-6pq55\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:09 crc kubenswrapper[4972]: I1121 11:20:09.076734 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-9cd6-account-create-8fgxq" event={"ID":"efe47932-69ba-4784-9884-0938691eeef0","Type":"ContainerDied","Data":"7c4f10aab6f07358fad904d161bd045c5ec6d9d21fad2ff2271378d18ab1cb6f"} Nov 21 11:20:09 crc kubenswrapper[4972]: I1121 11:20:09.076776 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c4f10aab6f07358fad904d161bd045c5ec6d9d21fad2ff2271378d18ab1cb6f" Nov 21 11:20:09 crc kubenswrapper[4972]: I1121 11:20:09.076846 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-9cd6-account-create-8fgxq" Nov 21 11:20:10 crc kubenswrapper[4972]: I1121 11:20:10.826681 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-5cgr2"] Nov 21 11:20:10 crc kubenswrapper[4972]: E1121 11:20:10.827718 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe47932-69ba-4784-9884-0938691eeef0" containerName="mariadb-account-create" Nov 21 11:20:10 crc kubenswrapper[4972]: I1121 11:20:10.827738 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe47932-69ba-4784-9884-0938691eeef0" containerName="mariadb-account-create" Nov 21 11:20:10 crc kubenswrapper[4972]: E1121 11:20:10.827752 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f7ab426-aeb2-4659-8bd9-d9322fa3e63a" containerName="mariadb-database-create" Nov 21 11:20:10 crc kubenswrapper[4972]: I1121 11:20:10.827760 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f7ab426-aeb2-4659-8bd9-d9322fa3e63a" containerName="mariadb-database-create" Nov 21 11:20:10 crc kubenswrapper[4972]: I1121 11:20:10.828103 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe47932-69ba-4784-9884-0938691eeef0" containerName="mariadb-account-create" Nov 21 11:20:10 crc kubenswrapper[4972]: I1121 11:20:10.828132 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f7ab426-aeb2-4659-8bd9-d9322fa3e63a" containerName="mariadb-database-create" Nov 21 11:20:10 crc kubenswrapper[4972]: I1121 11:20:10.829139 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-5cgr2" Nov 21 11:20:10 crc kubenswrapper[4972]: I1121 11:20:10.844313 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-5cgr2"] Nov 21 11:20:10 crc kubenswrapper[4972]: I1121 11:20:10.895179 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e29f549f-1f3b-43ad-973a-0adf21de58f0-operator-scripts\") pod \"octavia-persistence-db-create-5cgr2\" (UID: \"e29f549f-1f3b-43ad-973a-0adf21de58f0\") " pod="openstack/octavia-persistence-db-create-5cgr2" Nov 21 11:20:10 crc kubenswrapper[4972]: I1121 11:20:10.895255 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgmr4\" (UniqueName: \"kubernetes.io/projected/e29f549f-1f3b-43ad-973a-0adf21de58f0-kube-api-access-kgmr4\") pod \"octavia-persistence-db-create-5cgr2\" (UID: \"e29f549f-1f3b-43ad-973a-0adf21de58f0\") " pod="openstack/octavia-persistence-db-create-5cgr2" Nov 21 11:20:10 crc kubenswrapper[4972]: I1121 11:20:10.997770 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e29f549f-1f3b-43ad-973a-0adf21de58f0-operator-scripts\") pod \"octavia-persistence-db-create-5cgr2\" (UID: \"e29f549f-1f3b-43ad-973a-0adf21de58f0\") " pod="openstack/octavia-persistence-db-create-5cgr2" Nov 21 11:20:10 crc kubenswrapper[4972]: I1121 11:20:10.997898 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgmr4\" (UniqueName: \"kubernetes.io/projected/e29f549f-1f3b-43ad-973a-0adf21de58f0-kube-api-access-kgmr4\") pod \"octavia-persistence-db-create-5cgr2\" (UID: \"e29f549f-1f3b-43ad-973a-0adf21de58f0\") " pod="openstack/octavia-persistence-db-create-5cgr2" Nov 21 11:20:10 
crc kubenswrapper[4972]: I1121 11:20:10.998560 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e29f549f-1f3b-43ad-973a-0adf21de58f0-operator-scripts\") pod \"octavia-persistence-db-create-5cgr2\" (UID: \"e29f549f-1f3b-43ad-973a-0adf21de58f0\") " pod="openstack/octavia-persistence-db-create-5cgr2" Nov 21 11:20:11 crc kubenswrapper[4972]: I1121 11:20:11.021700 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgmr4\" (UniqueName: \"kubernetes.io/projected/e29f549f-1f3b-43ad-973a-0adf21de58f0-kube-api-access-kgmr4\") pod \"octavia-persistence-db-create-5cgr2\" (UID: \"e29f549f-1f3b-43ad-973a-0adf21de58f0\") " pod="openstack/octavia-persistence-db-create-5cgr2" Nov 21 11:20:11 crc kubenswrapper[4972]: I1121 11:20:11.154881 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-5cgr2" Nov 21 11:20:11 crc kubenswrapper[4972]: I1121 11:20:11.362958 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-c9cc-account-create-wpbmf"] Nov 21 11:20:11 crc kubenswrapper[4972]: I1121 11:20:11.364729 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-c9cc-account-create-wpbmf" Nov 21 11:20:11 crc kubenswrapper[4972]: I1121 11:20:11.367404 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Nov 21 11:20:11 crc kubenswrapper[4972]: I1121 11:20:11.386495 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-c9cc-account-create-wpbmf"] Nov 21 11:20:11 crc kubenswrapper[4972]: I1121 11:20:11.516194 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b26e1de3-52fc-4a82-abcb-e54ffa0b95a3-operator-scripts\") pod \"octavia-c9cc-account-create-wpbmf\" (UID: \"b26e1de3-52fc-4a82-abcb-e54ffa0b95a3\") " pod="openstack/octavia-c9cc-account-create-wpbmf" Nov 21 11:20:11 crc kubenswrapper[4972]: I1121 11:20:11.516280 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxq7g\" (UniqueName: \"kubernetes.io/projected/b26e1de3-52fc-4a82-abcb-e54ffa0b95a3-kube-api-access-lxq7g\") pod \"octavia-c9cc-account-create-wpbmf\" (UID: \"b26e1de3-52fc-4a82-abcb-e54ffa0b95a3\") " pod="openstack/octavia-c9cc-account-create-wpbmf" Nov 21 11:20:11 crc kubenswrapper[4972]: W1121 11:20:11.548125 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode29f549f_1f3b_43ad_973a_0adf21de58f0.slice/crio-f17b6a679b85a031a994a20f597c5428ba90038ca7e15d2aa905a83f8df0d68c WatchSource:0}: Error finding container f17b6a679b85a031a994a20f597c5428ba90038ca7e15d2aa905a83f8df0d68c: Status 404 returned error can't find the container with id f17b6a679b85a031a994a20f597c5428ba90038ca7e15d2aa905a83f8df0d68c Nov 21 11:20:11 crc kubenswrapper[4972]: I1121 11:20:11.552989 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-5cgr2"] Nov 21 11:20:11 crc kubenswrapper[4972]: I1121 11:20:11.617729 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b26e1de3-52fc-4a82-abcb-e54ffa0b95a3-operator-scripts\") pod \"octavia-c9cc-account-create-wpbmf\" 
(UID: \"b26e1de3-52fc-4a82-abcb-e54ffa0b95a3\") " pod="openstack/octavia-c9cc-account-create-wpbmf" Nov 21 11:20:11 crc kubenswrapper[4972]: I1121 11:20:11.618594 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxq7g\" (UniqueName: \"kubernetes.io/projected/b26e1de3-52fc-4a82-abcb-e54ffa0b95a3-kube-api-access-lxq7g\") pod \"octavia-c9cc-account-create-wpbmf\" (UID: \"b26e1de3-52fc-4a82-abcb-e54ffa0b95a3\") " pod="openstack/octavia-c9cc-account-create-wpbmf" Nov 21 11:20:11 crc kubenswrapper[4972]: I1121 11:20:11.618796 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b26e1de3-52fc-4a82-abcb-e54ffa0b95a3-operator-scripts\") pod \"octavia-c9cc-account-create-wpbmf\" (UID: \"b26e1de3-52fc-4a82-abcb-e54ffa0b95a3\") " pod="openstack/octavia-c9cc-account-create-wpbmf" Nov 21 11:20:11 crc kubenswrapper[4972]: I1121 11:20:11.647112 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxq7g\" (UniqueName: \"kubernetes.io/projected/b26e1de3-52fc-4a82-abcb-e54ffa0b95a3-kube-api-access-lxq7g\") pod \"octavia-c9cc-account-create-wpbmf\" (UID: \"b26e1de3-52fc-4a82-abcb-e54ffa0b95a3\") " pod="openstack/octavia-c9cc-account-create-wpbmf" Nov 21 11:20:11 crc kubenswrapper[4972]: I1121 11:20:11.693879 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-c9cc-account-create-wpbmf" Nov 21 11:20:12 crc kubenswrapper[4972]: I1121 11:20:12.047558 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-aa0c-account-create-lwx74"] Nov 21 11:20:12 crc kubenswrapper[4972]: I1121 11:20:12.056881 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-mjvkp"] Nov 21 11:20:12 crc kubenswrapper[4972]: I1121 11:20:12.066070 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-aa0c-account-create-lwx74"] Nov 21 11:20:12 crc kubenswrapper[4972]: I1121 11:20:12.074014 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-mjvkp"] Nov 21 11:20:12 crc kubenswrapper[4972]: I1121 11:20:12.110819 4972 generic.go:334] "Generic (PLEG): container finished" podID="e29f549f-1f3b-43ad-973a-0adf21de58f0" containerID="b221eac010938bdd2f0353f243100715b651459b45199fb2d4f37b43b4f8fc87" exitCode=0 Nov 21 11:20:12 crc kubenswrapper[4972]: I1121 11:20:12.110867 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-5cgr2" event={"ID":"e29f549f-1f3b-43ad-973a-0adf21de58f0","Type":"ContainerDied","Data":"b221eac010938bdd2f0353f243100715b651459b45199fb2d4f37b43b4f8fc87"} Nov 21 11:20:12 crc kubenswrapper[4972]: I1121 11:20:12.110901 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-5cgr2" event={"ID":"e29f549f-1f3b-43ad-973a-0adf21de58f0","Type":"ContainerStarted","Data":"f17b6a679b85a031a994a20f597c5428ba90038ca7e15d2aa905a83f8df0d68c"} Nov 21 11:20:12 crc kubenswrapper[4972]: W1121 11:20:12.199227 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb26e1de3_52fc_4a82_abcb_e54ffa0b95a3.slice/crio-2b32ff2a0d6b3a169b9fadc2cfb862297a0da5fc71e3eb1ab07445e8c9c308dc WatchSource:0}: Error finding container 2b32ff2a0d6b3a169b9fadc2cfb862297a0da5fc71e3eb1ab07445e8c9c308dc: Status 404 returned error can't find the container with id 
2b32ff2a0d6b3a169b9fadc2cfb862297a0da5fc71e3eb1ab07445e8c9c308dc Nov 21 11:20:12 crc kubenswrapper[4972]: I1121 11:20:12.203194 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-c9cc-account-create-wpbmf"] Nov 21 11:20:13 crc kubenswrapper[4972]: I1121 11:20:13.132298 4972 generic.go:334] "Generic (PLEG): container finished" podID="b26e1de3-52fc-4a82-abcb-e54ffa0b95a3" containerID="76f8dfd8146b6a2e3133564a3c025b73600b86a4f1a0d5c2b606907dfc5fc15f" exitCode=0 Nov 21 11:20:13 crc kubenswrapper[4972]: I1121 11:20:13.132474 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-c9cc-account-create-wpbmf" event={"ID":"b26e1de3-52fc-4a82-abcb-e54ffa0b95a3","Type":"ContainerDied","Data":"76f8dfd8146b6a2e3133564a3c025b73600b86a4f1a0d5c2b606907dfc5fc15f"} Nov 21 11:20:13 crc kubenswrapper[4972]: I1121 11:20:13.132804 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-c9cc-account-create-wpbmf" event={"ID":"b26e1de3-52fc-4a82-abcb-e54ffa0b95a3","Type":"ContainerStarted","Data":"2b32ff2a0d6b3a169b9fadc2cfb862297a0da5fc71e3eb1ab07445e8c9c308dc"} Nov 21 11:20:13 crc kubenswrapper[4972]: I1121 11:20:13.566489 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-5cgr2" Nov 21 11:20:13 crc kubenswrapper[4972]: I1121 11:20:13.659290 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e29f549f-1f3b-43ad-973a-0adf21de58f0-operator-scripts\") pod \"e29f549f-1f3b-43ad-973a-0adf21de58f0\" (UID: \"e29f549f-1f3b-43ad-973a-0adf21de58f0\") " Nov 21 11:20:13 crc kubenswrapper[4972]: I1121 11:20:13.659426 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgmr4\" (UniqueName: \"kubernetes.io/projected/e29f549f-1f3b-43ad-973a-0adf21de58f0-kube-api-access-kgmr4\") pod \"e29f549f-1f3b-43ad-973a-0adf21de58f0\" (UID: \"e29f549f-1f3b-43ad-973a-0adf21de58f0\") " Nov 21 11:20:13 crc kubenswrapper[4972]: I1121 11:20:13.667983 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e29f549f-1f3b-43ad-973a-0adf21de58f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e29f549f-1f3b-43ad-973a-0adf21de58f0" (UID: "e29f549f-1f3b-43ad-973a-0adf21de58f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:20:13 crc kubenswrapper[4972]: I1121 11:20:13.672007 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e29f549f-1f3b-43ad-973a-0adf21de58f0-kube-api-access-kgmr4" (OuterVolumeSpecName: "kube-api-access-kgmr4") pod "e29f549f-1f3b-43ad-973a-0adf21de58f0" (UID: "e29f549f-1f3b-43ad-973a-0adf21de58f0"). InnerVolumeSpecName "kube-api-access-kgmr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:20:13 crc kubenswrapper[4972]: I1121 11:20:13.767371 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e29f549f-1f3b-43ad-973a-0adf21de58f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:13 crc kubenswrapper[4972]: I1121 11:20:13.767427 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgmr4\" (UniqueName: \"kubernetes.io/projected/e29f549f-1f3b-43ad-973a-0adf21de58f0-kube-api-access-kgmr4\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:13 crc kubenswrapper[4972]: I1121 11:20:13.776340 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4126bb96-8016-415d-8613-db7e0d8f5777" path="/var/lib/kubelet/pods/4126bb96-8016-415d-8613-db7e0d8f5777/volumes" Nov 21 11:20:13 crc kubenswrapper[4972]: I1121 11:20:13.777561 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9fe8c69-9432-4abe-8517-d25773129b6b" path="/var/lib/kubelet/pods/f9fe8c69-9432-4abe-8517-d25773129b6b/volumes" Nov 21 11:20:14 crc kubenswrapper[4972]: I1121 11:20:14.147762 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-5cgr2" Nov 21 11:20:14 crc kubenswrapper[4972]: I1121 11:20:14.147750 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-5cgr2" event={"ID":"e29f549f-1f3b-43ad-973a-0adf21de58f0","Type":"ContainerDied","Data":"f17b6a679b85a031a994a20f597c5428ba90038ca7e15d2aa905a83f8df0d68c"} Nov 21 11:20:14 crc kubenswrapper[4972]: I1121 11:20:14.147924 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f17b6a679b85a031a994a20f597c5428ba90038ca7e15d2aa905a83f8df0d68c" Nov 21 11:20:14 crc kubenswrapper[4972]: I1121 11:20:14.609594 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-c9cc-account-create-wpbmf" Nov 21 11:20:14 crc kubenswrapper[4972]: I1121 11:20:14.791336 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxq7g\" (UniqueName: \"kubernetes.io/projected/b26e1de3-52fc-4a82-abcb-e54ffa0b95a3-kube-api-access-lxq7g\") pod \"b26e1de3-52fc-4a82-abcb-e54ffa0b95a3\" (UID: \"b26e1de3-52fc-4a82-abcb-e54ffa0b95a3\") " Nov 21 11:20:14 crc kubenswrapper[4972]: I1121 11:20:14.792524 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b26e1de3-52fc-4a82-abcb-e54ffa0b95a3-operator-scripts\") pod \"b26e1de3-52fc-4a82-abcb-e54ffa0b95a3\" (UID: \"b26e1de3-52fc-4a82-abcb-e54ffa0b95a3\") " Nov 21 11:20:14 crc kubenswrapper[4972]: I1121 11:20:14.794051 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b26e1de3-52fc-4a82-abcb-e54ffa0b95a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b26e1de3-52fc-4a82-abcb-e54ffa0b95a3" (UID: "b26e1de3-52fc-4a82-abcb-e54ffa0b95a3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:20:14 crc kubenswrapper[4972]: I1121 11:20:14.801388 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b26e1de3-52fc-4a82-abcb-e54ffa0b95a3-kube-api-access-lxq7g" (OuterVolumeSpecName: "kube-api-access-lxq7g") pod "b26e1de3-52fc-4a82-abcb-e54ffa0b95a3" (UID: "b26e1de3-52fc-4a82-abcb-e54ffa0b95a3"). InnerVolumeSpecName "kube-api-access-lxq7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:20:14 crc kubenswrapper[4972]: I1121 11:20:14.896428 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b26e1de3-52fc-4a82-abcb-e54ffa0b95a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:14 crc kubenswrapper[4972]: I1121 11:20:14.896483 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxq7g\" (UniqueName: \"kubernetes.io/projected/b26e1de3-52fc-4a82-abcb-e54ffa0b95a3-kube-api-access-lxq7g\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:15 crc kubenswrapper[4972]: I1121 11:20:15.170179 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-c9cc-account-create-wpbmf" event={"ID":"b26e1de3-52fc-4a82-abcb-e54ffa0b95a3","Type":"ContainerDied","Data":"2b32ff2a0d6b3a169b9fadc2cfb862297a0da5fc71e3eb1ab07445e8c9c308dc"} Nov 21 11:20:15 crc kubenswrapper[4972]: I1121 11:20:15.170229 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b32ff2a0d6b3a169b9fadc2cfb862297a0da5fc71e3eb1ab07445e8c9c308dc" Nov 21 11:20:15 crc kubenswrapper[4972]: I1121 11:20:15.170358 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-c9cc-account-create-wpbmf" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.071105 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-db9c5b5bd-gdkr4"] Nov 21 11:20:17 crc kubenswrapper[4972]: E1121 11:20:17.071964 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29f549f-1f3b-43ad-973a-0adf21de58f0" containerName="mariadb-database-create" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.071981 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29f549f-1f3b-43ad-973a-0adf21de58f0" containerName="mariadb-database-create" Nov 21 11:20:17 crc kubenswrapper[4972]: E1121 11:20:17.072023 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26e1de3-52fc-4a82-abcb-e54ffa0b95a3" containerName="mariadb-account-create" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.072032 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26e1de3-52fc-4a82-abcb-e54ffa0b95a3" containerName="mariadb-account-create" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.072297 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="b26e1de3-52fc-4a82-abcb-e54ffa0b95a3" containerName="mariadb-account-create" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.072317 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="e29f549f-1f3b-43ad-973a-0adf21de58f0" containerName="mariadb-database-create" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.074227 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-db9c5b5bd-gdkr4" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.077322 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.077603 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-9g6rz" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.077824 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.085290 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-db9c5b5bd-gdkr4"] Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.150740 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80e0cc1-c14a-45ec-97bf-5c7401ce569f-combined-ca-bundle\") pod \"octavia-api-db9c5b5bd-gdkr4\" (UID: \"a80e0cc1-c14a-45ec-97bf-5c7401ce569f\") " pod="openstack/octavia-api-db9c5b5bd-gdkr4" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.150807 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a80e0cc1-c14a-45ec-97bf-5c7401ce569f-config-data-merged\") pod \"octavia-api-db9c5b5bd-gdkr4\" (UID: \"a80e0cc1-c14a-45ec-97bf-5c7401ce569f\") " pod="openstack/octavia-api-db9c5b5bd-gdkr4" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.150862 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/a80e0cc1-c14a-45ec-97bf-5c7401ce569f-octavia-run\") pod \"octavia-api-db9c5b5bd-gdkr4\" (UID: \"a80e0cc1-c14a-45ec-97bf-5c7401ce569f\") " pod="openstack/octavia-api-db9c5b5bd-gdkr4" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.150928 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a80e0cc1-c14a-45ec-97bf-5c7401ce569f-scripts\") pod \"octavia-api-db9c5b5bd-gdkr4\" (UID: \"a80e0cc1-c14a-45ec-97bf-5c7401ce569f\") " pod="openstack/octavia-api-db9c5b5bd-gdkr4" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.150963 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a80e0cc1-c14a-45ec-97bf-5c7401ce569f-config-data\") pod \"octavia-api-db9c5b5bd-gdkr4\" (UID: \"a80e0cc1-c14a-45ec-97bf-5c7401ce569f\") " pod="openstack/octavia-api-db9c5b5bd-gdkr4" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.252309 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a80e0cc1-c14a-45ec-97bf-5c7401ce569f-scripts\") pod \"octavia-api-db9c5b5bd-gdkr4\" (UID: \"a80e0cc1-c14a-45ec-97bf-5c7401ce569f\") " pod="openstack/octavia-api-db9c5b5bd-gdkr4" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.252364 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a80e0cc1-c14a-45ec-97bf-5c7401ce569f-config-data\") pod \"octavia-api-db9c5b5bd-gdkr4\" (UID: \"a80e0cc1-c14a-45ec-97bf-5c7401ce569f\") " pod="openstack/octavia-api-db9c5b5bd-gdkr4" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 
11:20:17.252462 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80e0cc1-c14a-45ec-97bf-5c7401ce569f-combined-ca-bundle\") pod \"octavia-api-db9c5b5bd-gdkr4\" (UID: \"a80e0cc1-c14a-45ec-97bf-5c7401ce569f\") " pod="openstack/octavia-api-db9c5b5bd-gdkr4" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.252491 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a80e0cc1-c14a-45ec-97bf-5c7401ce569f-config-data-merged\") pod \"octavia-api-db9c5b5bd-gdkr4\" (UID: \"a80e0cc1-c14a-45ec-97bf-5c7401ce569f\") " pod="openstack/octavia-api-db9c5b5bd-gdkr4" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.252512 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/a80e0cc1-c14a-45ec-97bf-5c7401ce569f-octavia-run\") pod \"octavia-api-db9c5b5bd-gdkr4\" (UID: \"a80e0cc1-c14a-45ec-97bf-5c7401ce569f\") " pod="openstack/octavia-api-db9c5b5bd-gdkr4" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.253541 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/a80e0cc1-c14a-45ec-97bf-5c7401ce569f-octavia-run\") pod \"octavia-api-db9c5b5bd-gdkr4\" (UID: \"a80e0cc1-c14a-45ec-97bf-5c7401ce569f\") " pod="openstack/octavia-api-db9c5b5bd-gdkr4" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.253984 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a80e0cc1-c14a-45ec-97bf-5c7401ce569f-config-data-merged\") pod \"octavia-api-db9c5b5bd-gdkr4\" (UID: \"a80e0cc1-c14a-45ec-97bf-5c7401ce569f\") " pod="openstack/octavia-api-db9c5b5bd-gdkr4" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.259859 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a80e0cc1-c14a-45ec-97bf-5c7401ce569f-scripts\") pod \"octavia-api-db9c5b5bd-gdkr4\" (UID: \"a80e0cc1-c14a-45ec-97bf-5c7401ce569f\") " pod="openstack/octavia-api-db9c5b5bd-gdkr4" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.262494 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80e0cc1-c14a-45ec-97bf-5c7401ce569f-combined-ca-bundle\") pod \"octavia-api-db9c5b5bd-gdkr4\" (UID: \"a80e0cc1-c14a-45ec-97bf-5c7401ce569f\") " pod="openstack/octavia-api-db9c5b5bd-gdkr4" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.265952 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a80e0cc1-c14a-45ec-97bf-5c7401ce569f-config-data\") pod \"octavia-api-db9c5b5bd-gdkr4\" (UID: \"a80e0cc1-c14a-45ec-97bf-5c7401ce569f\") " pod="openstack/octavia-api-db9c5b5bd-gdkr4" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.393318 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-db9c5b5bd-gdkr4" Nov 21 11:20:17 crc kubenswrapper[4972]: I1121 11:20:17.858156 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-db9c5b5bd-gdkr4"] Nov 21 11:20:18 crc kubenswrapper[4972]: I1121 11:20:18.206444 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-db9c5b5bd-gdkr4" event={"ID":"a80e0cc1-c14a-45ec-97bf-5c7401ce569f","Type":"ContainerStarted","Data":"a7bd6279866113b5858942f2b718d0958ee340b615a72bb61d73b53d0fa14fec"} Nov 21 11:20:18 crc kubenswrapper[4972]: I1121 11:20:18.759358 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:20:18 crc kubenswrapper[4972]: E1121 11:20:18.759974 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:20:19 crc kubenswrapper[4972]: I1121 11:20:19.062319 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-w6q6g"] Nov 21 11:20:19 crc kubenswrapper[4972]: I1121 11:20:19.075873 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-w6q6g"] Nov 21 11:20:19 crc kubenswrapper[4972]: I1121 11:20:19.773625 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03e08dff-7d3e-41a5-a992-6c9c7567fe97" path="/var/lib/kubelet/pods/03e08dff-7d3e-41a5-a992-6c9c7567fe97/volumes" Nov 21 11:20:28 crc kubenswrapper[4972]: I1121 11:20:28.544762 4972 scope.go:117] "RemoveContainer" containerID="d436046ca3e17d6c7a535773e01b32751a938dfc4ea28ae8856bfbf683fb6e98" Nov 21 11:20:32 crc kubenswrapper[4972]: I1121 11:20:32.043118 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-l6gbg"] Nov 21 11:20:32 crc kubenswrapper[4972]: I1121 11:20:32.053777 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-l6gbg"] Nov 21 11:20:32 crc kubenswrapper[4972]: I1121 11:20:32.520744 4972 scope.go:117] "RemoveContainer" containerID="84a126c3e179384b8ee1a05703614dc7f9fa32922073e77e021268e702f42199" Nov 21 11:20:32 crc kubenswrapper[4972]: E1121 11:20:32.525198 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-octavia-api@sha256:ab7f58d75373e26c020d12fc845c30b64ba0f236a163efd1880c523a3321054c" Nov 21 11:20:32 crc kubenswrapper[4972]: E1121 11:20:32.525497 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-octavia-api@sha256:ab7f58d75373e26c020d12fc845c30b64ba0f236a163efd1880c523a3321054c,Command:[/bin/bash],Args:[-c 
/usr/local/bin/container-scripts/init.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-merged,ReadOnly:false,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42437,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42437,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-api-db9c5b5bd-gdkr4_openstack(a80e0cc1-c14a-45ec-97bf-5c7401ce569f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 11:20:32 crc kubenswrapper[4972]: E1121 11:20:32.526951 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/octavia-api-db9c5b5bd-gdkr4" podUID="a80e0cc1-c14a-45ec-97bf-5c7401ce569f" Nov 21 11:20:32 crc kubenswrapper[4972]: I1121 11:20:32.559128 4972 scope.go:117] "RemoveContainer" containerID="f9cf2407cd33691be30fb4bd180103f0b3aa9092e6760e47e59d70659e33fb27" Nov 21 11:20:32 crc kubenswrapper[4972]: I1121 11:20:32.601582 4972 scope.go:117] "RemoveContainer" containerID="85dc7234f489b4b336c664aa2ea5c958b8bbf886c5a6cb9b85a63988228c6fe6" Nov 21 11:20:32 crc kubenswrapper[4972]: I1121 11:20:32.631226 4972 scope.go:117] "RemoveContainer" containerID="866206511a501bbc6f00c590944e8b6dc2b04a25ec1b7360b5a9b7087c40667a" Nov 21 11:20:33 crc kubenswrapper[4972]: E1121 11:20:33.383861 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-octavia-api@sha256:ab7f58d75373e26c020d12fc845c30b64ba0f236a163efd1880c523a3321054c\\\"\"" pod="openstack/octavia-api-db9c5b5bd-gdkr4" podUID="a80e0cc1-c14a-45ec-97bf-5c7401ce569f" Nov 21 11:20:33 crc kubenswrapper[4972]: I1121 11:20:33.760281 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:20:33 crc kubenswrapper[4972]: E1121 11:20:33.760998 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:20:33 crc kubenswrapper[4972]: I1121 
11:20:33.778739 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291" path="/var/lib/kubelet/pods/3c15a3b4-b9fa-4e8f-ba73-ac9c4732c291/volumes" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.569681 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.577133 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-29k84" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.589740 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-pxnhj" podUID="7e864747-789f-4b61-88e7-24523b70fe33" containerName="ovn-controller" probeResult="failure" output=< Nov 21 11:20:37 crc kubenswrapper[4972]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 21 11:20:37 crc kubenswrapper[4972]: > Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.722124 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-pxnhj-config-bgpwz"] Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.723664 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.725516 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.731319 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pxnhj-config-bgpwz"] Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.870587 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-additional-scripts\") pod \"ovn-controller-pxnhj-config-bgpwz\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.870666 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-var-run-ovn\") pod \"ovn-controller-pxnhj-config-bgpwz\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.870894 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-var-log-ovn\") pod \"ovn-controller-pxnhj-config-bgpwz\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.871029 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcr8c\" (UniqueName: \"kubernetes.io/projected/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-kube-api-access-fcr8c\") pod \"ovn-controller-pxnhj-config-bgpwz\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.871077 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-scripts\") pod \"ovn-controller-pxnhj-config-bgpwz\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.871248 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-var-run\") pod \"ovn-controller-pxnhj-config-bgpwz\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.973353 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-var-log-ovn\") pod \"ovn-controller-pxnhj-config-bgpwz\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.973434 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcr8c\" (UniqueName: \"kubernetes.io/projected/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-kube-api-access-fcr8c\") pod \"ovn-controller-pxnhj-config-bgpwz\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.973469 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-scripts\") pod \"ovn-controller-pxnhj-config-bgpwz\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.973527 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-var-run\") pod \"ovn-controller-pxnhj-config-bgpwz\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.973764 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-var-log-ovn\") pod \"ovn-controller-pxnhj-config-bgpwz\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.975112 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-var-run\") pod \"ovn-controller-pxnhj-config-bgpwz\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.975154 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-additional-scripts\") pod \"ovn-controller-pxnhj-config-bgpwz\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.975220 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-var-run-ovn\") pod \"ovn-controller-pxnhj-config-bgpwz\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.975394 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-var-run-ovn\") pod \"ovn-controller-pxnhj-config-bgpwz\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.975691 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-scripts\") pod \"ovn-controller-pxnhj-config-bgpwz\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.975892 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-additional-scripts\") pod \"ovn-controller-pxnhj-config-bgpwz\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:37 crc kubenswrapper[4972]: I1121 11:20:37.997904 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcr8c\" (UniqueName: \"kubernetes.io/projected/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-kube-api-access-fcr8c\") pod \"ovn-controller-pxnhj-config-bgpwz\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:38 crc kubenswrapper[4972]: I1121 11:20:38.052598 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:38 crc kubenswrapper[4972]: I1121 11:20:38.508532 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pxnhj-config-bgpwz"] Nov 21 11:20:38 crc kubenswrapper[4972]: W1121 11:20:38.512465 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d9f0a8a_8c46_46d6_a407_21e560e0b0e9.slice/crio-e1546effdbd6120cbcbe503c203de7574dc497c1d592b981c0de133dd7029621 WatchSource:0}: Error finding container e1546effdbd6120cbcbe503c203de7574dc497c1d592b981c0de133dd7029621: Status 404 returned error can't find the container with id e1546effdbd6120cbcbe503c203de7574dc497c1d592b981c0de133dd7029621 Nov 21 11:20:39 crc kubenswrapper[4972]: I1121 11:20:39.455747 4972 generic.go:334] "Generic (PLEG): container finished" podID="9d9f0a8a-8c46-46d6-a407-21e560e0b0e9" containerID="5c5a9fda0192ac8ca92b60a6bd9fdaa53ac09a9688b258629692c5593061b971" exitCode=0 Nov 21 11:20:39 crc kubenswrapper[4972]: I1121 11:20:39.455845 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pxnhj-config-bgpwz" event={"ID":"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9","Type":"ContainerDied","Data":"5c5a9fda0192ac8ca92b60a6bd9fdaa53ac09a9688b258629692c5593061b971"} Nov 21 11:20:39 crc kubenswrapper[4972]: I1121 11:20:39.456107 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pxnhj-config-bgpwz" event={"ID":"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9","Type":"ContainerStarted","Data":"e1546effdbd6120cbcbe503c203de7574dc497c1d592b981c0de133dd7029621"} Nov 21 11:20:40 crc kubenswrapper[4972]: I1121 11:20:40.915129 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:41 crc kubenswrapper[4972]: I1121 11:20:41.041701 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-var-run-ovn\") pod \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " Nov 21 11:20:41 crc kubenswrapper[4972]: I1121 11:20:41.041800 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcr8c\" (UniqueName: \"kubernetes.io/projected/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-kube-api-access-fcr8c\") pod \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " Nov 21 11:20:41 crc kubenswrapper[4972]: I1121 11:20:41.041879 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-scripts\") pod \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " Nov 21 11:20:41 crc kubenswrapper[4972]: I1121 11:20:41.041927 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-var-run\") pod \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " Nov 21 11:20:41 crc kubenswrapper[4972]: I1121 11:20:41.041952 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9d9f0a8a-8c46-46d6-a407-21e560e0b0e9" (UID: "9d9f0a8a-8c46-46d6-a407-21e560e0b0e9"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 11:20:41 crc kubenswrapper[4972]: I1121 11:20:41.041986 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-additional-scripts\") pod \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " Nov 21 11:20:41 crc kubenswrapper[4972]: I1121 11:20:41.042011 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-var-run" (OuterVolumeSpecName: "var-run") pod "9d9f0a8a-8c46-46d6-a407-21e560e0b0e9" (UID: "9d9f0a8a-8c46-46d6-a407-21e560e0b0e9"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 11:20:41 crc kubenswrapper[4972]: I1121 11:20:41.042021 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-var-log-ovn\") pod \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\" (UID: \"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9\") " Nov 21 11:20:41 crc kubenswrapper[4972]: I1121 11:20:41.042136 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9d9f0a8a-8c46-46d6-a407-21e560e0b0e9" (UID: "9d9f0a8a-8c46-46d6-a407-21e560e0b0e9"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 11:20:41 crc kubenswrapper[4972]: I1121 11:20:41.042682 4972 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:41 crc kubenswrapper[4972]: I1121 11:20:41.042699 4972 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-var-run\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:41 crc kubenswrapper[4972]: I1121 11:20:41.042709 4972 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:41 crc kubenswrapper[4972]: I1121 11:20:41.042803 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9d9f0a8a-8c46-46d6-a407-21e560e0b0e9" (UID: "9d9f0a8a-8c46-46d6-a407-21e560e0b0e9"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:20:41 crc kubenswrapper[4972]: I1121 11:20:41.043364 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-scripts" (OuterVolumeSpecName: "scripts") pod "9d9f0a8a-8c46-46d6-a407-21e560e0b0e9" (UID: "9d9f0a8a-8c46-46d6-a407-21e560e0b0e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:20:41 crc kubenswrapper[4972]: I1121 11:20:41.058017 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-kube-api-access-fcr8c" (OuterVolumeSpecName: "kube-api-access-fcr8c") pod "9d9f0a8a-8c46-46d6-a407-21e560e0b0e9" (UID: "9d9f0a8a-8c46-46d6-a407-21e560e0b0e9"). InnerVolumeSpecName "kube-api-access-fcr8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:20:41 crc kubenswrapper[4972]: I1121 11:20:41.144893 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcr8c\" (UniqueName: \"kubernetes.io/projected/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-kube-api-access-fcr8c\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:41 crc kubenswrapper[4972]: I1121 11:20:41.144937 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:41 crc kubenswrapper[4972]: I1121 11:20:41.144952 4972 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:41 crc kubenswrapper[4972]: I1121 11:20:41.478977 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pxnhj-config-bgpwz" event={"ID":"9d9f0a8a-8c46-46d6-a407-21e560e0b0e9","Type":"ContainerDied","Data":"e1546effdbd6120cbcbe503c203de7574dc497c1d592b981c0de133dd7029621"} Nov 21 11:20:41 crc kubenswrapper[4972]: I1121 11:20:41.479400 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1546effdbd6120cbcbe503c203de7574dc497c1d592b981c0de133dd7029621" Nov 21 11:20:41 crc kubenswrapper[4972]: I1121 11:20:41.479100 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pxnhj-config-bgpwz" Nov 21 11:20:41 crc kubenswrapper[4972]: E1121 11:20:41.736926 4972 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d9f0a8a_8c46_46d6_a407_21e560e0b0e9.slice\": RecentStats: unable to find data in memory cache]" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.002282 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-pxnhj-config-bgpwz"] Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.014391 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-pxnhj-config-bgpwz"] Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.037680 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-pxnhj-config-bgcf2"] Nov 21 11:20:42 crc kubenswrapper[4972]: E1121 11:20:42.038390 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d9f0a8a-8c46-46d6-a407-21e560e0b0e9" containerName="ovn-config" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.038427 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d9f0a8a-8c46-46d6-a407-21e560e0b0e9" containerName="ovn-config" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.038800 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d9f0a8a-8c46-46d6-a407-21e560e0b0e9" containerName="ovn-config" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.039975 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.043009 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.057850 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pxnhj-config-bgcf2"] Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.064296 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c747676-25ff-4583-9a92-bdd27508e618-scripts\") pod \"ovn-controller-pxnhj-config-bgcf2\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.064506 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1c747676-25ff-4583-9a92-bdd27508e618-var-log-ovn\") pod \"ovn-controller-pxnhj-config-bgcf2\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.064562 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1c747676-25ff-4583-9a92-bdd27508e618-var-run\") pod \"ovn-controller-pxnhj-config-bgcf2\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.064592 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1c747676-25ff-4583-9a92-bdd27508e618-var-run-ovn\") pod \"ovn-controller-pxnhj-config-bgcf2\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.064941 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1c747676-25ff-4583-9a92-bdd27508e618-additional-scripts\") pod \"ovn-controller-pxnhj-config-bgcf2\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.065003 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bllkf\" (UniqueName: \"kubernetes.io/projected/1c747676-25ff-4583-9a92-bdd27508e618-kube-api-access-bllkf\") pod \"ovn-controller-pxnhj-config-bgcf2\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.166887 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1c747676-25ff-4583-9a92-bdd27508e618-additional-scripts\") pod \"ovn-controller-pxnhj-config-bgcf2\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.166933 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bllkf\" (UniqueName: 
\"kubernetes.io/projected/1c747676-25ff-4583-9a92-bdd27508e618-kube-api-access-bllkf\") pod \"ovn-controller-pxnhj-config-bgcf2\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.166989 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c747676-25ff-4583-9a92-bdd27508e618-scripts\") pod \"ovn-controller-pxnhj-config-bgcf2\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.167096 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1c747676-25ff-4583-9a92-bdd27508e618-var-log-ovn\") pod \"ovn-controller-pxnhj-config-bgcf2\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.167126 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1c747676-25ff-4583-9a92-bdd27508e618-var-run\") pod \"ovn-controller-pxnhj-config-bgcf2\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.167147 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1c747676-25ff-4583-9a92-bdd27508e618-var-run-ovn\") pod \"ovn-controller-pxnhj-config-bgcf2\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.167411 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1c747676-25ff-4583-9a92-bdd27508e618-var-run-ovn\") pod \"ovn-controller-pxnhj-config-bgcf2\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.167415 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1c747676-25ff-4583-9a92-bdd27508e618-var-log-ovn\") pod \"ovn-controller-pxnhj-config-bgcf2\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.167492 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1c747676-25ff-4583-9a92-bdd27508e618-var-run\") pod \"ovn-controller-pxnhj-config-bgcf2\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.167727 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1c747676-25ff-4583-9a92-bdd27508e618-additional-scripts\") pod \"ovn-controller-pxnhj-config-bgcf2\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.169112 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/1c747676-25ff-4583-9a92-bdd27508e618-scripts\") pod \"ovn-controller-pxnhj-config-bgcf2\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.184801 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bllkf\" (UniqueName: \"kubernetes.io/projected/1c747676-25ff-4583-9a92-bdd27508e618-kube-api-access-bllkf\") pod \"ovn-controller-pxnhj-config-bgcf2\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.375618 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.566410 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-pxnhj" Nov 21 11:20:42 crc kubenswrapper[4972]: I1121 11:20:42.863046 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pxnhj-config-bgcf2"] Nov 21 11:20:42 crc kubenswrapper[4972]: W1121 11:20:42.874040 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c747676_25ff_4583_9a92_bdd27508e618.slice/crio-d030d57705d9024be45206a431ef98da58c649abbae8ac7948825a3753e0f6f6 WatchSource:0}: Error finding container d030d57705d9024be45206a431ef98da58c649abbae8ac7948825a3753e0f6f6: Status 404 returned error can't find the container with id d030d57705d9024be45206a431ef98da58c649abbae8ac7948825a3753e0f6f6 Nov 21 11:20:43 crc kubenswrapper[4972]: I1121 11:20:43.501096 4972 generic.go:334] "Generic (PLEG): container finished" podID="1c747676-25ff-4583-9a92-bdd27508e618" containerID="bd8effcb34abbd2f6b94ff04c9e4cecadf6c3227fa51066ab0fb01cce4a89066" exitCode=0 Nov 21 11:20:43 crc kubenswrapper[4972]: I1121 11:20:43.501157 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pxnhj-config-bgcf2" event={"ID":"1c747676-25ff-4583-9a92-bdd27508e618","Type":"ContainerDied","Data":"bd8effcb34abbd2f6b94ff04c9e4cecadf6c3227fa51066ab0fb01cce4a89066"} Nov 21 11:20:43 crc kubenswrapper[4972]: I1121 11:20:43.501359 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pxnhj-config-bgcf2" event={"ID":"1c747676-25ff-4583-9a92-bdd27508e618","Type":"ContainerStarted","Data":"d030d57705d9024be45206a431ef98da58c649abbae8ac7948825a3753e0f6f6"} Nov 21 11:20:43 crc kubenswrapper[4972]: I1121 11:20:43.801427 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d9f0a8a-8c46-46d6-a407-21e560e0b0e9" path="/var/lib/kubelet/pods/9d9f0a8a-8c46-46d6-a407-21e560e0b0e9/volumes" Nov 21 11:20:44 crc kubenswrapper[4972]: I1121 11:20:44.885620 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.052199 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c747676-25ff-4583-9a92-bdd27508e618-scripts\") pod \"1c747676-25ff-4583-9a92-bdd27508e618\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.052319 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1c747676-25ff-4583-9a92-bdd27508e618-var-run\") pod \"1c747676-25ff-4583-9a92-bdd27508e618\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.052353 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bllkf\" (UniqueName: \"kubernetes.io/projected/1c747676-25ff-4583-9a92-bdd27508e618-kube-api-access-bllkf\") pod \"1c747676-25ff-4583-9a92-bdd27508e618\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.052394 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1c747676-25ff-4583-9a92-bdd27508e618-var-log-ovn\") pod \"1c747676-25ff-4583-9a92-bdd27508e618\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.052435 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1c747676-25ff-4583-9a92-bdd27508e618-var-run-ovn\") pod \"1c747676-25ff-4583-9a92-bdd27508e618\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.052510 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1c747676-25ff-4583-9a92-bdd27508e618-additional-scripts\") pod \"1c747676-25ff-4583-9a92-bdd27508e618\" (UID: \"1c747676-25ff-4583-9a92-bdd27508e618\") " Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.053021 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c747676-25ff-4583-9a92-bdd27508e618-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "1c747676-25ff-4583-9a92-bdd27508e618" (UID: "1c747676-25ff-4583-9a92-bdd27508e618"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.053059 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c747676-25ff-4583-9a92-bdd27508e618-var-run" (OuterVolumeSpecName: "var-run") pod "1c747676-25ff-4583-9a92-bdd27508e618" (UID: "1c747676-25ff-4583-9a92-bdd27508e618"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.053090 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c747676-25ff-4583-9a92-bdd27508e618-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "1c747676-25ff-4583-9a92-bdd27508e618" (UID: "1c747676-25ff-4583-9a92-bdd27508e618"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.053941 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c747676-25ff-4583-9a92-bdd27508e618-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "1c747676-25ff-4583-9a92-bdd27508e618" (UID: "1c747676-25ff-4583-9a92-bdd27508e618"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.054111 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c747676-25ff-4583-9a92-bdd27508e618-scripts" (OuterVolumeSpecName: "scripts") pod "1c747676-25ff-4583-9a92-bdd27508e618" (UID: "1c747676-25ff-4583-9a92-bdd27508e618"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.060394 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c747676-25ff-4583-9a92-bdd27508e618-kube-api-access-bllkf" (OuterVolumeSpecName: "kube-api-access-bllkf") pod "1c747676-25ff-4583-9a92-bdd27508e618" (UID: "1c747676-25ff-4583-9a92-bdd27508e618"). InnerVolumeSpecName "kube-api-access-bllkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.155685 4972 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1c747676-25ff-4583-9a92-bdd27508e618-var-run\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.155770 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bllkf\" (UniqueName: \"kubernetes.io/projected/1c747676-25ff-4583-9a92-bdd27508e618-kube-api-access-bllkf\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.155818 4972 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1c747676-25ff-4583-9a92-bdd27508e618-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.155914 4972 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1c747676-25ff-4583-9a92-bdd27508e618-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.155933 4972 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1c747676-25ff-4583-9a92-bdd27508e618-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.155950 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c747676-25ff-4583-9a92-bdd27508e618-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.522228 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pxnhj-config-bgcf2" event={"ID":"1c747676-25ff-4583-9a92-bdd27508e618","Type":"ContainerDied","Data":"d030d57705d9024be45206a431ef98da58c649abbae8ac7948825a3753e0f6f6"} Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.523047 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d030d57705d9024be45206a431ef98da58c649abbae8ac7948825a3753e0f6f6" Nov 21 11:20:45 crc 
kubenswrapper[4972]: I1121 11:20:45.522286 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pxnhj-config-bgcf2" Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.980391 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-pxnhj-config-bgcf2"] Nov 21 11:20:45 crc kubenswrapper[4972]: I1121 11:20:45.999337 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-pxnhj-config-bgcf2"] Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.109419 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-pxnhj-config-qwswz"] Nov 21 11:20:46 crc kubenswrapper[4972]: E1121 11:20:46.110114 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c747676-25ff-4583-9a92-bdd27508e618" containerName="ovn-config" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.110136 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c747676-25ff-4583-9a92-bdd27508e618" containerName="ovn-config" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.110428 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c747676-25ff-4583-9a92-bdd27508e618" containerName="ovn-config" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.111485 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.113793 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.118920 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pxnhj-config-qwswz"] Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.173244 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-var-run-ovn\") pod \"ovn-controller-pxnhj-config-qwswz\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.173290 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-scripts\") pod \"ovn-controller-pxnhj-config-qwswz\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.173480 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-var-log-ovn\") pod \"ovn-controller-pxnhj-config-qwswz\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.173605 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-additional-scripts\") pod \"ovn-controller-pxnhj-config-qwswz\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.173739 4972 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-var-run\") pod \"ovn-controller-pxnhj-config-qwswz\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.173860 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvnt9\" (UniqueName: \"kubernetes.io/projected/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-kube-api-access-nvnt9\") pod \"ovn-controller-pxnhj-config-qwswz\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.277213 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-var-run\") pod \"ovn-controller-pxnhj-config-qwswz\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.277383 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvnt9\" (UniqueName: \"kubernetes.io/projected/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-kube-api-access-nvnt9\") pod \"ovn-controller-pxnhj-config-qwswz\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.277514 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-var-run-ovn\") pod \"ovn-controller-pxnhj-config-qwswz\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.277547 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-scripts\") pod \"ovn-controller-pxnhj-config-qwswz\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.277610 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-var-log-ovn\") pod \"ovn-controller-pxnhj-config-qwswz\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.277653 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-additional-scripts\") pod \"ovn-controller-pxnhj-config-qwswz\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.277763 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-var-run\") pod \"ovn-controller-pxnhj-config-qwswz\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 
11:20:46.277821 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-var-run-ovn\") pod \"ovn-controller-pxnhj-config-qwswz\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.277914 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-var-log-ovn\") pod \"ovn-controller-pxnhj-config-qwswz\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.280860 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-additional-scripts\") pod \"ovn-controller-pxnhj-config-qwswz\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.281116 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-scripts\") pod \"ovn-controller-pxnhj-config-qwswz\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.305207 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvnt9\" (UniqueName: \"kubernetes.io/projected/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-kube-api-access-nvnt9\") pod \"ovn-controller-pxnhj-config-qwswz\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.435587 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.535593 4972 generic.go:334] "Generic (PLEG): container finished" podID="a80e0cc1-c14a-45ec-97bf-5c7401ce569f" containerID="6307a7aa23d2c4c27f811691ff0edb8a12716a31837dd9919bf1b5fcd54ac315" exitCode=0 Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.535630 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-db9c5b5bd-gdkr4" event={"ID":"a80e0cc1-c14a-45ec-97bf-5c7401ce569f","Type":"ContainerDied","Data":"6307a7aa23d2c4c27f811691ff0edb8a12716a31837dd9919bf1b5fcd54ac315"} Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.760157 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:20:46 crc kubenswrapper[4972]: E1121 11:20:46.760891 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:20:46 crc kubenswrapper[4972]: I1121 11:20:46.909029 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pxnhj-config-qwswz"] Nov 21 11:20:46 crc kubenswrapper[4972]: W1121 11:20:46.940053 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c903761_a47d_44bd_a6ad_fe59a6cd62fc.slice/crio-db31602746195ac4ef72dd1c35848e24f64dc28bfdcc159ba4e61320e32b0d00 WatchSource:0}: Error finding container db31602746195ac4ef72dd1c35848e24f64dc28bfdcc159ba4e61320e32b0d00: Status 404 returned error can't find the container with id db31602746195ac4ef72dd1c35848e24f64dc28bfdcc159ba4e61320e32b0d00 Nov 21 11:20:47 crc kubenswrapper[4972]: I1121 11:20:47.549400 4972 generic.go:334] "Generic (PLEG): container finished" podID="9c903761-a47d-44bd-a6ad-fe59a6cd62fc" containerID="83ea61c56c6c92ec69b8409c787dec915511d5e725471676d6cd840c1572fbad" exitCode=0 Nov 21 11:20:47 crc kubenswrapper[4972]: I1121 11:20:47.549451 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pxnhj-config-qwswz" event={"ID":"9c903761-a47d-44bd-a6ad-fe59a6cd62fc","Type":"ContainerDied","Data":"83ea61c56c6c92ec69b8409c787dec915511d5e725471676d6cd840c1572fbad"} Nov 21 11:20:47 crc kubenswrapper[4972]: I1121 11:20:47.550066 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pxnhj-config-qwswz" event={"ID":"9c903761-a47d-44bd-a6ad-fe59a6cd62fc","Type":"ContainerStarted","Data":"db31602746195ac4ef72dd1c35848e24f64dc28bfdcc159ba4e61320e32b0d00"} Nov 21 11:20:47 crc kubenswrapper[4972]: I1121 11:20:47.558409 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-db9c5b5bd-gdkr4" event={"ID":"a80e0cc1-c14a-45ec-97bf-5c7401ce569f","Type":"ContainerStarted","Data":"ffeca426f041e8859bfaf4f00b98d036607cddcc2adbfcac224b99cfafe8302a"} Nov 21 11:20:47 crc kubenswrapper[4972]: I1121 11:20:47.558478 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-db9c5b5bd-gdkr4" event={"ID":"a80e0cc1-c14a-45ec-97bf-5c7401ce569f","Type":"ContainerStarted","Data":"e662808c978d8fa2624abe4a5bbd720971cd045a522ff6975276d940dac9e081"} Nov 21 
11:20:47 crc kubenswrapper[4972]: I1121 11:20:47.558721 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-db9c5b5bd-gdkr4" Nov 21 11:20:47 crc kubenswrapper[4972]: I1121 11:20:47.558805 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-db9c5b5bd-gdkr4" Nov 21 11:20:47 crc kubenswrapper[4972]: I1121 11:20:47.588107 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-db9c5b5bd-gdkr4" podStartSLOduration=2.999448006 podStartE2EDuration="30.58808815s" podCreationTimestamp="2025-11-21 11:20:17 +0000 UTC" firstStartedPulling="2025-11-21 11:20:17.876364676 +0000 UTC m=+5962.985507174" lastFinishedPulling="2025-11-21 11:20:45.46500482 +0000 UTC m=+5990.574147318" observedRunningTime="2025-11-21 11:20:47.586170019 +0000 UTC m=+5992.695312517" watchObservedRunningTime="2025-11-21 11:20:47.58808815 +0000 UTC m=+5992.697230648" Nov 21 11:20:47 crc kubenswrapper[4972]: I1121 11:20:47.778225 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c747676-25ff-4583-9a92-bdd27508e618" path="/var/lib/kubelet/pods/1c747676-25ff-4583-9a92-bdd27508e618/volumes" Nov 21 11:20:48 crc kubenswrapper[4972]: I1121 11:20:48.957254 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:49 crc kubenswrapper[4972]: I1121 11:20:49.144243 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-additional-scripts\") pod \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " Nov 21 11:20:49 crc kubenswrapper[4972]: I1121 11:20:49.144632 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvnt9\" (UniqueName: \"kubernetes.io/projected/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-kube-api-access-nvnt9\") pod \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " Nov 21 11:20:49 crc kubenswrapper[4972]: I1121 11:20:49.144748 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-var-run\") pod \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " Nov 21 11:20:49 crc kubenswrapper[4972]: I1121 11:20:49.144852 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-var-run" (OuterVolumeSpecName: "var-run") pod "9c903761-a47d-44bd-a6ad-fe59a6cd62fc" (UID: "9c903761-a47d-44bd-a6ad-fe59a6cd62fc"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 11:20:49 crc kubenswrapper[4972]: I1121 11:20:49.145005 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-var-log-ovn\") pod \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " Nov 21 11:20:49 crc kubenswrapper[4972]: I1121 11:20:49.145176 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-scripts\") pod \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " Nov 21 11:20:49 crc kubenswrapper[4972]: I1121 11:20:49.145066 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9c903761-a47d-44bd-a6ad-fe59a6cd62fc" (UID: "9c903761-a47d-44bd-a6ad-fe59a6cd62fc"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 11:20:49 crc kubenswrapper[4972]: I1121 11:20:49.145363 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-var-run-ovn\") pod \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\" (UID: \"9c903761-a47d-44bd-a6ad-fe59a6cd62fc\") " Nov 21 11:20:49 crc kubenswrapper[4972]: I1121 11:20:49.145397 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9c903761-a47d-44bd-a6ad-fe59a6cd62fc" (UID: "9c903761-a47d-44bd-a6ad-fe59a6cd62fc"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 11:20:49 crc kubenswrapper[4972]: I1121 11:20:49.145702 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9c903761-a47d-44bd-a6ad-fe59a6cd62fc" (UID: "9c903761-a47d-44bd-a6ad-fe59a6cd62fc"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:20:49 crc kubenswrapper[4972]: I1121 11:20:49.146088 4972 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:49 crc kubenswrapper[4972]: I1121 11:20:49.146180 4972 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-var-run\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:49 crc kubenswrapper[4972]: I1121 11:20:49.146245 4972 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:49 crc kubenswrapper[4972]: I1121 11:20:49.146304 4972 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:49 crc kubenswrapper[4972]: I1121 11:20:49.146467 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-scripts" (OuterVolumeSpecName: "scripts") pod "9c903761-a47d-44bd-a6ad-fe59a6cd62fc" (UID: "9c903761-a47d-44bd-a6ad-fe59a6cd62fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:20:49 crc kubenswrapper[4972]: I1121 11:20:49.165094 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-kube-api-access-nvnt9" (OuterVolumeSpecName: "kube-api-access-nvnt9") pod "9c903761-a47d-44bd-a6ad-fe59a6cd62fc" (UID: "9c903761-a47d-44bd-a6ad-fe59a6cd62fc"). InnerVolumeSpecName "kube-api-access-nvnt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:20:49 crc kubenswrapper[4972]: I1121 11:20:49.248794 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvnt9\" (UniqueName: \"kubernetes.io/projected/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-kube-api-access-nvnt9\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:49 crc kubenswrapper[4972]: I1121 11:20:49.248857 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c903761-a47d-44bd-a6ad-fe59a6cd62fc-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:20:49 crc kubenswrapper[4972]: I1121 11:20:49.578730 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pxnhj-config-qwswz" event={"ID":"9c903761-a47d-44bd-a6ad-fe59a6cd62fc","Type":"ContainerDied","Data":"db31602746195ac4ef72dd1c35848e24f64dc28bfdcc159ba4e61320e32b0d00"} Nov 21 11:20:49 crc kubenswrapper[4972]: I1121 11:20:49.579147 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db31602746195ac4ef72dd1c35848e24f64dc28bfdcc159ba4e61320e32b0d00" Nov 21 11:20:49 crc kubenswrapper[4972]: I1121 11:20:49.578818 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pxnhj-config-qwswz" Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.073432 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-pxnhj-config-qwswz"] Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.087781 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-pxnhj-config-qwswz"] Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.519780 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-bkc5d"] Nov 21 11:20:50 crc kubenswrapper[4972]: E1121 11:20:50.520356 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c903761-a47d-44bd-a6ad-fe59a6cd62fc" containerName="ovn-config" Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.520420 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c903761-a47d-44bd-a6ad-fe59a6cd62fc" containerName="ovn-config" Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.520697 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c903761-a47d-44bd-a6ad-fe59a6cd62fc" containerName="ovn-config" Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.521715 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-bkc5d" Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.524111 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.528622 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.528867 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.542967 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-bkc5d"] Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.690203 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a8e270d1-6354-4941-831f-d5e3e2605206-config-data-merged\") pod \"octavia-rsyslog-bkc5d\" (UID: \"a8e270d1-6354-4941-831f-d5e3e2605206\") " pod="openstack/octavia-rsyslog-bkc5d" Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.690262 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e270d1-6354-4941-831f-d5e3e2605206-config-data\") pod \"octavia-rsyslog-bkc5d\" (UID: \"a8e270d1-6354-4941-831f-d5e3e2605206\") " pod="openstack/octavia-rsyslog-bkc5d" Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.690299 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8e270d1-6354-4941-831f-d5e3e2605206-scripts\") pod \"octavia-rsyslog-bkc5d\" (UID: \"a8e270d1-6354-4941-831f-d5e3e2605206\") " pod="openstack/octavia-rsyslog-bkc5d" Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.690412 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/a8e270d1-6354-4941-831f-d5e3e2605206-hm-ports\") pod \"octavia-rsyslog-bkc5d\" (UID: \"a8e270d1-6354-4941-831f-d5e3e2605206\") " pod="openstack/octavia-rsyslog-bkc5d" Nov 21 11:20:50 
crc kubenswrapper[4972]: I1121 11:20:50.792320 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e270d1-6354-4941-831f-d5e3e2605206-config-data\") pod \"octavia-rsyslog-bkc5d\" (UID: \"a8e270d1-6354-4941-831f-d5e3e2605206\") " pod="openstack/octavia-rsyslog-bkc5d" Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.792387 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8e270d1-6354-4941-831f-d5e3e2605206-scripts\") pod \"octavia-rsyslog-bkc5d\" (UID: \"a8e270d1-6354-4941-831f-d5e3e2605206\") " pod="openstack/octavia-rsyslog-bkc5d" Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.792552 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/a8e270d1-6354-4941-831f-d5e3e2605206-hm-ports\") pod \"octavia-rsyslog-bkc5d\" (UID: \"a8e270d1-6354-4941-831f-d5e3e2605206\") " pod="openstack/octavia-rsyslog-bkc5d" Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.792719 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a8e270d1-6354-4941-831f-d5e3e2605206-config-data-merged\") pod \"octavia-rsyslog-bkc5d\" (UID: \"a8e270d1-6354-4941-831f-d5e3e2605206\") " pod="openstack/octavia-rsyslog-bkc5d" Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.793355 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a8e270d1-6354-4941-831f-d5e3e2605206-config-data-merged\") pod \"octavia-rsyslog-bkc5d\" (UID: \"a8e270d1-6354-4941-831f-d5e3e2605206\") " pod="openstack/octavia-rsyslog-bkc5d" Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.793940 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/a8e270d1-6354-4941-831f-d5e3e2605206-hm-ports\") pod \"octavia-rsyslog-bkc5d\" (UID: \"a8e270d1-6354-4941-831f-d5e3e2605206\") " pod="openstack/octavia-rsyslog-bkc5d" Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.799190 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e270d1-6354-4941-831f-d5e3e2605206-config-data\") pod \"octavia-rsyslog-bkc5d\" (UID: \"a8e270d1-6354-4941-831f-d5e3e2605206\") " pod="openstack/octavia-rsyslog-bkc5d" Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.799850 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8e270d1-6354-4941-831f-d5e3e2605206-scripts\") pod \"octavia-rsyslog-bkc5d\" (UID: \"a8e270d1-6354-4941-831f-d5e3e2605206\") " pod="openstack/octavia-rsyslog-bkc5d" Nov 21 11:20:50 crc kubenswrapper[4972]: I1121 11:20:50.889373 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-bkc5d" Nov 21 11:20:51 crc kubenswrapper[4972]: I1121 11:20:51.285583 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-5955f5554b-97jm5"] Nov 21 11:20:51 crc kubenswrapper[4972]: I1121 11:20:51.289778 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-5955f5554b-97jm5" Nov 21 11:20:51 crc kubenswrapper[4972]: I1121 11:20:51.296067 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Nov 21 11:20:51 crc kubenswrapper[4972]: I1121 11:20:51.307310 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-5955f5554b-97jm5"] Nov 21 11:20:51 crc kubenswrapper[4972]: I1121 11:20:51.405312 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/2571e5d7-8e15-4ec7-8105-248845410a4d-amphora-image\") pod \"octavia-image-upload-5955f5554b-97jm5\" (UID: \"2571e5d7-8e15-4ec7-8105-248845410a4d\") " pod="openstack/octavia-image-upload-5955f5554b-97jm5" Nov 21 11:20:51 crc kubenswrapper[4972]: I1121 11:20:51.409387 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2571e5d7-8e15-4ec7-8105-248845410a4d-httpd-config\") pod \"octavia-image-upload-5955f5554b-97jm5\" (UID: \"2571e5d7-8e15-4ec7-8105-248845410a4d\") " pod="openstack/octavia-image-upload-5955f5554b-97jm5" Nov 21 11:20:51 crc kubenswrapper[4972]: I1121 11:20:51.461067 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-bkc5d"] Nov 21 11:20:51 crc kubenswrapper[4972]: I1121 11:20:51.514874 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/2571e5d7-8e15-4ec7-8105-248845410a4d-amphora-image\") pod \"octavia-image-upload-5955f5554b-97jm5\" (UID: \"2571e5d7-8e15-4ec7-8105-248845410a4d\") " pod="openstack/octavia-image-upload-5955f5554b-97jm5" Nov 21 11:20:51 crc kubenswrapper[4972]: I1121 11:20:51.514991 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2571e5d7-8e15-4ec7-8105-248845410a4d-httpd-config\") pod \"octavia-image-upload-5955f5554b-97jm5\" (UID: \"2571e5d7-8e15-4ec7-8105-248845410a4d\") " pod="openstack/octavia-image-upload-5955f5554b-97jm5" Nov 21 11:20:51 crc kubenswrapper[4972]: I1121 11:20:51.516005 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/2571e5d7-8e15-4ec7-8105-248845410a4d-amphora-image\") pod \"octavia-image-upload-5955f5554b-97jm5\" (UID: \"2571e5d7-8e15-4ec7-8105-248845410a4d\") " pod="openstack/octavia-image-upload-5955f5554b-97jm5" Nov 21 11:20:51 crc kubenswrapper[4972]: I1121 11:20:51.523271 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2571e5d7-8e15-4ec7-8105-248845410a4d-httpd-config\") pod \"octavia-image-upload-5955f5554b-97jm5\" (UID: \"2571e5d7-8e15-4ec7-8105-248845410a4d\") " pod="openstack/octavia-image-upload-5955f5554b-97jm5" Nov 21 11:20:51 crc kubenswrapper[4972]: I1121 11:20:51.633898 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-bkc5d" event={"ID":"a8e270d1-6354-4941-831f-d5e3e2605206","Type":"ContainerStarted","Data":"920bd4d9312ee980557899e3f27dda92eedc187564a7f7dfde621e23fe7737ba"} Nov 21 11:20:51 crc kubenswrapper[4972]: I1121 11:20:51.644344 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-5955f5554b-97jm5" Nov 21 11:20:51 crc kubenswrapper[4972]: I1121 11:20:51.772217 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c903761-a47d-44bd-a6ad-fe59a6cd62fc" path="/var/lib/kubelet/pods/9c903761-a47d-44bd-a6ad-fe59a6cd62fc/volumes" Nov 21 11:20:52 crc kubenswrapper[4972]: I1121 11:20:52.158553 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-5955f5554b-97jm5"] Nov 21 11:20:52 crc kubenswrapper[4972]: W1121 11:20:52.176090 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2571e5d7_8e15_4ec7_8105_248845410a4d.slice/crio-2ff873b7bf56a29f145325352044e7e7acfc4f296424d805a722f2f17cf4ca94 WatchSource:0}: Error finding container 2ff873b7bf56a29f145325352044e7e7acfc4f296424d805a722f2f17cf4ca94: Status 404 returned error can't find the container with id 2ff873b7bf56a29f145325352044e7e7acfc4f296424d805a722f2f17cf4ca94 Nov 21 11:20:52 crc kubenswrapper[4972]: I1121 11:20:52.252729 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-4ql9t"] Nov 21 11:20:52 crc kubenswrapper[4972]: I1121 11:20:52.254918 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-4ql9t" Nov 21 11:20:52 crc kubenswrapper[4972]: I1121 11:20:52.257840 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Nov 21 11:20:52 crc kubenswrapper[4972]: I1121 11:20:52.261510 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-4ql9t"] Nov 21 11:20:52 crc kubenswrapper[4972]: I1121 11:20:52.456566 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50b27865-54df-4c42-9736-58c80e882e00-config-data\") pod \"octavia-db-sync-4ql9t\" (UID: \"50b27865-54df-4c42-9736-58c80e882e00\") " pod="openstack/octavia-db-sync-4ql9t" Nov 21 11:20:52 crc kubenswrapper[4972]: I1121 11:20:52.456623 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b27865-54df-4c42-9736-58c80e882e00-combined-ca-bundle\") pod \"octavia-db-sync-4ql9t\" (UID: \"50b27865-54df-4c42-9736-58c80e882e00\") " pod="openstack/octavia-db-sync-4ql9t" Nov 21 11:20:52 crc kubenswrapper[4972]: I1121 11:20:52.456805 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/50b27865-54df-4c42-9736-58c80e882e00-config-data-merged\") pod \"octavia-db-sync-4ql9t\" (UID: \"50b27865-54df-4c42-9736-58c80e882e00\") " pod="openstack/octavia-db-sync-4ql9t" Nov 21 11:20:52 crc kubenswrapper[4972]: I1121 11:20:52.456982 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50b27865-54df-4c42-9736-58c80e882e00-scripts\") pod \"octavia-db-sync-4ql9t\" (UID: \"50b27865-54df-4c42-9736-58c80e882e00\") " pod="openstack/octavia-db-sync-4ql9t" Nov 21 11:20:52 crc kubenswrapper[4972]: I1121 11:20:52.558290 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50b27865-54df-4c42-9736-58c80e882e00-config-data\") pod \"octavia-db-sync-4ql9t\" (UID: 
\"50b27865-54df-4c42-9736-58c80e882e00\") " pod="openstack/octavia-db-sync-4ql9t" Nov 21 11:20:52 crc kubenswrapper[4972]: I1121 11:20:52.558352 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b27865-54df-4c42-9736-58c80e882e00-combined-ca-bundle\") pod \"octavia-db-sync-4ql9t\" (UID: \"50b27865-54df-4c42-9736-58c80e882e00\") " pod="openstack/octavia-db-sync-4ql9t" Nov 21 11:20:52 crc kubenswrapper[4972]: I1121 11:20:52.558433 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/50b27865-54df-4c42-9736-58c80e882e00-config-data-merged\") pod \"octavia-db-sync-4ql9t\" (UID: \"50b27865-54df-4c42-9736-58c80e882e00\") " pod="openstack/octavia-db-sync-4ql9t" Nov 21 11:20:52 crc kubenswrapper[4972]: I1121 11:20:52.558506 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50b27865-54df-4c42-9736-58c80e882e00-scripts\") pod \"octavia-db-sync-4ql9t\" (UID: \"50b27865-54df-4c42-9736-58c80e882e00\") " pod="openstack/octavia-db-sync-4ql9t" Nov 21 11:20:52 crc kubenswrapper[4972]: I1121 11:20:52.559113 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/50b27865-54df-4c42-9736-58c80e882e00-config-data-merged\") pod \"octavia-db-sync-4ql9t\" (UID: \"50b27865-54df-4c42-9736-58c80e882e00\") " pod="openstack/octavia-db-sync-4ql9t" Nov 21 11:20:52 crc kubenswrapper[4972]: I1121 11:20:52.565092 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b27865-54df-4c42-9736-58c80e882e00-combined-ca-bundle\") pod \"octavia-db-sync-4ql9t\" (UID: \"50b27865-54df-4c42-9736-58c80e882e00\") " pod="openstack/octavia-db-sync-4ql9t" Nov 21 11:20:52 crc kubenswrapper[4972]: I1121 11:20:52.566413 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50b27865-54df-4c42-9736-58c80e882e00-scripts\") pod \"octavia-db-sync-4ql9t\" (UID: \"50b27865-54df-4c42-9736-58c80e882e00\") " pod="openstack/octavia-db-sync-4ql9t" Nov 21 11:20:52 crc kubenswrapper[4972]: I1121 11:20:52.569971 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50b27865-54df-4c42-9736-58c80e882e00-config-data\") pod \"octavia-db-sync-4ql9t\" (UID: \"50b27865-54df-4c42-9736-58c80e882e00\") " pod="openstack/octavia-db-sync-4ql9t" Nov 21 11:20:52 crc kubenswrapper[4972]: I1121 11:20:52.584696 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-4ql9t" Nov 21 11:20:52 crc kubenswrapper[4972]: I1121 11:20:52.655893 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-5955f5554b-97jm5" event={"ID":"2571e5d7-8e15-4ec7-8105-248845410a4d","Type":"ContainerStarted","Data":"2ff873b7bf56a29f145325352044e7e7acfc4f296424d805a722f2f17cf4ca94"} Nov 21 11:20:53 crc kubenswrapper[4972]: I1121 11:20:53.795279 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-4ql9t"] Nov 21 11:20:53 crc kubenswrapper[4972]: W1121 11:20:53.802646 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50b27865_54df_4c42_9736_58c80e882e00.slice/crio-bbc3117072ab6644e6302d9aa82becbdd78d016cde925ab95f70fdc2de2cf6df WatchSource:0}: Error finding container bbc3117072ab6644e6302d9aa82becbdd78d016cde925ab95f70fdc2de2cf6df: Status 404 returned error can't find the container with id bbc3117072ab6644e6302d9aa82becbdd78d016cde925ab95f70fdc2de2cf6df Nov 21 11:20:54 crc kubenswrapper[4972]: I1121 11:20:54.696999 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-4ql9t" event={"ID":"50b27865-54df-4c42-9736-58c80e882e00","Type":"ContainerStarted","Data":"bbc3117072ab6644e6302d9aa82becbdd78d016cde925ab95f70fdc2de2cf6df"} Nov 21 11:20:55 crc kubenswrapper[4972]: I1121 11:20:55.713575 4972 generic.go:334] "Generic (PLEG): container finished" podID="50b27865-54df-4c42-9736-58c80e882e00" containerID="958edea82e9cc878b3542efaf4effeaa08caad8c3519d9e081bab43e298cfe23" exitCode=0 Nov 21 11:20:55 crc kubenswrapper[4972]: I1121 11:20:55.713786 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-4ql9t" event={"ID":"50b27865-54df-4c42-9736-58c80e882e00","Type":"ContainerDied","Data":"958edea82e9cc878b3542efaf4effeaa08caad8c3519d9e081bab43e298cfe23"} Nov 21 11:20:55 crc kubenswrapper[4972]: I1121 11:20:55.717657 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-bkc5d" event={"ID":"a8e270d1-6354-4941-831f-d5e3e2605206","Type":"ContainerStarted","Data":"ac3a98747864c6ba3565d926d3f673b1872899e744196c216f57c97360d47ba7"} Nov 21 11:20:57 crc kubenswrapper[4972]: I1121 11:20:57.736248 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-4ql9t" event={"ID":"50b27865-54df-4c42-9736-58c80e882e00","Type":"ContainerStarted","Data":"52db02c247b74a1801e4ff5e4137738cdd4631dda9e0c67a7f255287cc78f74c"} Nov 21 11:20:57 crc kubenswrapper[4972]: I1121 11:20:57.760633 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-4ql9t" podStartSLOduration=5.760613595 podStartE2EDuration="5.760613595s" podCreationTimestamp="2025-11-21 11:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:20:57.752335476 +0000 UTC m=+6002.861477984" watchObservedRunningTime="2025-11-21 11:20:57.760613595 +0000 UTC m=+6002.869756103" Nov 21 11:20:59 crc kubenswrapper[4972]: I1121 11:20:59.773612 4972 generic.go:334] "Generic (PLEG): container finished" podID="a8e270d1-6354-4941-831f-d5e3e2605206" containerID="ac3a98747864c6ba3565d926d3f673b1872899e744196c216f57c97360d47ba7" exitCode=0 Nov 21 11:20:59 crc kubenswrapper[4972]: I1121 11:20:59.782082 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-bkc5d" 
event={"ID":"a8e270d1-6354-4941-831f-d5e3e2605206","Type":"ContainerDied","Data":"ac3a98747864c6ba3565d926d3f673b1872899e744196c216f57c97360d47ba7"} Nov 21 11:21:01 crc kubenswrapper[4972]: I1121 11:21:01.759537 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:21:05 crc kubenswrapper[4972]: I1121 11:21:05.873152 4972 generic.go:334] "Generic (PLEG): container finished" podID="50b27865-54df-4c42-9736-58c80e882e00" containerID="52db02c247b74a1801e4ff5e4137738cdd4631dda9e0c67a7f255287cc78f74c" exitCode=0 Nov 21 11:21:05 crc kubenswrapper[4972]: I1121 11:21:05.873208 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-4ql9t" event={"ID":"50b27865-54df-4c42-9736-58c80e882e00","Type":"ContainerDied","Data":"52db02c247b74a1801e4ff5e4137738cdd4631dda9e0c67a7f255287cc78f74c"} Nov 21 11:21:06 crc kubenswrapper[4972]: I1121 11:21:06.484082 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-db9c5b5bd-gdkr4" Nov 21 11:21:06 crc kubenswrapper[4972]: I1121 11:21:06.600693 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-db9c5b5bd-gdkr4" Nov 21 11:21:12 crc kubenswrapper[4972]: I1121 11:21:12.215734 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-4ql9t" Nov 21 11:21:12 crc kubenswrapper[4972]: I1121 11:21:12.344188 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b27865-54df-4c42-9736-58c80e882e00-combined-ca-bundle\") pod \"50b27865-54df-4c42-9736-58c80e882e00\" (UID: \"50b27865-54df-4c42-9736-58c80e882e00\") " Nov 21 11:21:12 crc kubenswrapper[4972]: I1121 11:21:12.344323 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/50b27865-54df-4c42-9736-58c80e882e00-config-data-merged\") pod \"50b27865-54df-4c42-9736-58c80e882e00\" (UID: \"50b27865-54df-4c42-9736-58c80e882e00\") " Nov 21 11:21:12 crc kubenswrapper[4972]: I1121 11:21:12.344490 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50b27865-54df-4c42-9736-58c80e882e00-config-data\") pod \"50b27865-54df-4c42-9736-58c80e882e00\" (UID: \"50b27865-54df-4c42-9736-58c80e882e00\") " Nov 21 11:21:12 crc kubenswrapper[4972]: I1121 11:21:12.344521 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50b27865-54df-4c42-9736-58c80e882e00-scripts\") pod \"50b27865-54df-4c42-9736-58c80e882e00\" (UID: \"50b27865-54df-4c42-9736-58c80e882e00\") " Nov 21 11:21:12 crc kubenswrapper[4972]: I1121 11:21:12.353141 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b27865-54df-4c42-9736-58c80e882e00-config-data" (OuterVolumeSpecName: "config-data") pod "50b27865-54df-4c42-9736-58c80e882e00" (UID: "50b27865-54df-4c42-9736-58c80e882e00"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:21:12 crc kubenswrapper[4972]: I1121 11:21:12.353900 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b27865-54df-4c42-9736-58c80e882e00-scripts" (OuterVolumeSpecName: "scripts") pod "50b27865-54df-4c42-9736-58c80e882e00" (UID: "50b27865-54df-4c42-9736-58c80e882e00"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:21:12 crc kubenswrapper[4972]: I1121 11:21:12.379977 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50b27865-54df-4c42-9736-58c80e882e00-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "50b27865-54df-4c42-9736-58c80e882e00" (UID: "50b27865-54df-4c42-9736-58c80e882e00"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:21:12 crc kubenswrapper[4972]: I1121 11:21:12.392185 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b27865-54df-4c42-9736-58c80e882e00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50b27865-54df-4c42-9736-58c80e882e00" (UID: "50b27865-54df-4c42-9736-58c80e882e00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:21:12 crc kubenswrapper[4972]: I1121 11:21:12.447330 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50b27865-54df-4c42-9736-58c80e882e00-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:21:12 crc kubenswrapper[4972]: I1121 11:21:12.447812 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50b27865-54df-4c42-9736-58c80e882e00-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:21:12 crc kubenswrapper[4972]: I1121 11:21:12.447854 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b27865-54df-4c42-9736-58c80e882e00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:21:12 crc kubenswrapper[4972]: I1121 11:21:12.447876 4972 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/50b27865-54df-4c42-9736-58c80e882e00-config-data-merged\") on node \"crc\" DevicePath \"\"" Nov 21 11:21:12 crc kubenswrapper[4972]: E1121 11:21:12.657895 4972 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/gthiemonge/octavia-amphora-image:latest" Nov 21 11:21:12 crc kubenswrapper[4972]: E1121 11:21:12.658129 4972 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/gthiemonge/octavia-amphora-image,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DEST_DIR,Value:/usr/local/apache2/htdocs,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:amphora-image,ReadOnly:false,MountPath:/usr/local/apache2/htdocs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-image-upload-5955f5554b-97jm5_openstack(2571e5d7-8e15-4ec7-8105-248845410a4d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 11:21:12 crc kubenswrapper[4972]: E1121 11:21:12.659564 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/octavia-image-upload-5955f5554b-97jm5" podUID="2571e5d7-8e15-4ec7-8105-248845410a4d" Nov 21 11:21:12 crc kubenswrapper[4972]: I1121 11:21:12.958907 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-4ql9t" event={"ID":"50b27865-54df-4c42-9736-58c80e882e00","Type":"ContainerDied","Data":"bbc3117072ab6644e6302d9aa82becbdd78d016cde925ab95f70fdc2de2cf6df"} Nov 21 11:21:12 crc kubenswrapper[4972]: I1121 11:21:12.959371 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbc3117072ab6644e6302d9aa82becbdd78d016cde925ab95f70fdc2de2cf6df" Nov 21 11:21:12 crc kubenswrapper[4972]: I1121 11:21:12.959745 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-4ql9t" Nov 21 11:21:12 crc kubenswrapper[4972]: I1121 11:21:12.964069 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"8ec025f5e23fdc9483086d949001e8977fb85a6d1c335c571eb4db2f58dafa45"} Nov 21 11:21:12 crc kubenswrapper[4972]: E1121 11:21:12.966531 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/gthiemonge/octavia-amphora-image\\\"\"" pod="openstack/octavia-image-upload-5955f5554b-97jm5" podUID="2571e5d7-8e15-4ec7-8105-248845410a4d" Nov 21 11:21:13 crc kubenswrapper[4972]: I1121 11:21:13.973276 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-bkc5d" event={"ID":"a8e270d1-6354-4941-831f-d5e3e2605206","Type":"ContainerStarted","Data":"3f08891f3a8b2f956224166ec9f61d1c8637e078c348e52380afa93e0247dd2f"} Nov 21 11:21:13 crc kubenswrapper[4972]: I1121 11:21:13.974104 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-bkc5d" Nov 21 11:21:13 crc kubenswrapper[4972]: I1121 11:21:13.996422 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-bkc5d" podStartSLOduration=2.6514125010000003 podStartE2EDuration="23.996399999s" podCreationTimestamp="2025-11-21 11:20:50 +0000 UTC" firstStartedPulling="2025-11-21 11:20:51.475487571 +0000 UTC m=+5996.584630079" lastFinishedPulling="2025-11-21 11:21:12.820475069 +0000 UTC m=+6017.929617577" observedRunningTime="2025-11-21 11:21:13.989999099 +0000 UTC m=+6019.099141597" watchObservedRunningTime="2025-11-21 11:21:13.996399999 +0000 UTC m=+6019.105542507" Nov 21 11:21:20 crc kubenswrapper[4972]: I1121 11:21:20.925170 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-bkc5d" Nov 21 11:21:29 crc kubenswrapper[4972]: I1121 11:21:29.140714 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-5955f5554b-97jm5" event={"ID":"2571e5d7-8e15-4ec7-8105-248845410a4d","Type":"ContainerStarted","Data":"40c29606850d93cf746170ab0f58c3ee880b5c52082e2194c9775478376476ed"} Nov 21 11:21:30 crc kubenswrapper[4972]: I1121 11:21:30.151484 4972 generic.go:334] "Generic (PLEG): container finished" podID="2571e5d7-8e15-4ec7-8105-248845410a4d" containerID="40c29606850d93cf746170ab0f58c3ee880b5c52082e2194c9775478376476ed" exitCode=0 Nov 21 11:21:30 crc kubenswrapper[4972]: I1121 11:21:30.151539 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-5955f5554b-97jm5" event={"ID":"2571e5d7-8e15-4ec7-8105-248845410a4d","Type":"ContainerDied","Data":"40c29606850d93cf746170ab0f58c3ee880b5c52082e2194c9775478376476ed"} Nov 21 11:21:31 crc kubenswrapper[4972]: I1121 11:21:31.167242 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-5955f5554b-97jm5" event={"ID":"2571e5d7-8e15-4ec7-8105-248845410a4d","Type":"ContainerStarted","Data":"1691e33b5df29bfa877e8a7933e92a8cc43b58696406ff1c0830369bcf3a9770"} Nov 21 11:21:31 crc kubenswrapper[4972]: I1121 11:21:31.183753 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-5955f5554b-97jm5" podStartSLOduration=4.150697194 podStartE2EDuration="40.183732321s" 
podCreationTimestamp="2025-11-21 11:20:51 +0000 UTC" firstStartedPulling="2025-11-21 11:20:52.178093607 +0000 UTC m=+5997.287236105" lastFinishedPulling="2025-11-21 11:21:28.211128734 +0000 UTC m=+6033.320271232" observedRunningTime="2025-11-21 11:21:31.181491771 +0000 UTC m=+6036.290634279" watchObservedRunningTime="2025-11-21 11:21:31.183732321 +0000 UTC m=+6036.292874839" Nov 21 11:21:32 crc kubenswrapper[4972]: I1121 11:21:32.751885 4972 scope.go:117] "RemoveContainer" containerID="4b19c77648130a930f2f596a4ec1e966bf8705b6f9d05e5c94a1c733eb4d3892" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.345719 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-r4jnw"] Nov 21 11:21:55 crc kubenswrapper[4972]: E1121 11:21:55.346922 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b27865-54df-4c42-9736-58c80e882e00" containerName="octavia-db-sync" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.346941 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b27865-54df-4c42-9736-58c80e882e00" containerName="octavia-db-sync" Nov 21 11:21:55 crc kubenswrapper[4972]: E1121 11:21:55.346966 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b27865-54df-4c42-9736-58c80e882e00" containerName="init" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.346975 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b27865-54df-4c42-9736-58c80e882e00" containerName="init" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.347269 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b27865-54df-4c42-9736-58c80e882e00" containerName="octavia-db-sync" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.349024 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-healthmanager-r4jnw" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.351025 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.352288 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.352713 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.355912 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-r4jnw"] Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.458605 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5bf7530-6fe3-4a93-b44a-0665818a4fd8-scripts\") pod \"octavia-healthmanager-r4jnw\" (UID: \"e5bf7530-6fe3-4a93-b44a-0665818a4fd8\") " pod="openstack/octavia-healthmanager-r4jnw" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.459070 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e5bf7530-6fe3-4a93-b44a-0665818a4fd8-config-data-merged\") pod \"octavia-healthmanager-r4jnw\" (UID: \"e5bf7530-6fe3-4a93-b44a-0665818a4fd8\") " pod="openstack/octavia-healthmanager-r4jnw" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.459133 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5bf7530-6fe3-4a93-b44a-0665818a4fd8-config-data\") pod \"octavia-healthmanager-r4jnw\" (UID: \"e5bf7530-6fe3-4a93-b44a-0665818a4fd8\") " pod="openstack/octavia-healthmanager-r4jnw" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.459180 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e5bf7530-6fe3-4a93-b44a-0665818a4fd8-hm-ports\") pod \"octavia-healthmanager-r4jnw\" (UID: \"e5bf7530-6fe3-4a93-b44a-0665818a4fd8\") " pod="openstack/octavia-healthmanager-r4jnw" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.459462 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5bf7530-6fe3-4a93-b44a-0665818a4fd8-combined-ca-bundle\") pod \"octavia-healthmanager-r4jnw\" (UID: \"e5bf7530-6fe3-4a93-b44a-0665818a4fd8\") " pod="openstack/octavia-healthmanager-r4jnw" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.459851 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e5bf7530-6fe3-4a93-b44a-0665818a4fd8-amphora-certs\") pod \"octavia-healthmanager-r4jnw\" (UID: \"e5bf7530-6fe3-4a93-b44a-0665818a4fd8\") " pod="openstack/octavia-healthmanager-r4jnw" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.561896 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5bf7530-6fe3-4a93-b44a-0665818a4fd8-combined-ca-bundle\") pod \"octavia-healthmanager-r4jnw\" (UID: \"e5bf7530-6fe3-4a93-b44a-0665818a4fd8\") " pod="openstack/octavia-healthmanager-r4jnw" Nov 21 
11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.562062 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e5bf7530-6fe3-4a93-b44a-0665818a4fd8-amphora-certs\") pod \"octavia-healthmanager-r4jnw\" (UID: \"e5bf7530-6fe3-4a93-b44a-0665818a4fd8\") " pod="openstack/octavia-healthmanager-r4jnw" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.562124 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5bf7530-6fe3-4a93-b44a-0665818a4fd8-scripts\") pod \"octavia-healthmanager-r4jnw\" (UID: \"e5bf7530-6fe3-4a93-b44a-0665818a4fd8\") " pod="openstack/octavia-healthmanager-r4jnw" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.562215 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e5bf7530-6fe3-4a93-b44a-0665818a4fd8-config-data-merged\") pod \"octavia-healthmanager-r4jnw\" (UID: \"e5bf7530-6fe3-4a93-b44a-0665818a4fd8\") " pod="openstack/octavia-healthmanager-r4jnw" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.562267 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5bf7530-6fe3-4a93-b44a-0665818a4fd8-config-data\") pod \"octavia-healthmanager-r4jnw\" (UID: \"e5bf7530-6fe3-4a93-b44a-0665818a4fd8\") " pod="openstack/octavia-healthmanager-r4jnw" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.562331 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e5bf7530-6fe3-4a93-b44a-0665818a4fd8-hm-ports\") pod \"octavia-healthmanager-r4jnw\" (UID: \"e5bf7530-6fe3-4a93-b44a-0665818a4fd8\") " pod="openstack/octavia-healthmanager-r4jnw" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.563161 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e5bf7530-6fe3-4a93-b44a-0665818a4fd8-config-data-merged\") pod \"octavia-healthmanager-r4jnw\" (UID: \"e5bf7530-6fe3-4a93-b44a-0665818a4fd8\") " pod="openstack/octavia-healthmanager-r4jnw" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.564020 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e5bf7530-6fe3-4a93-b44a-0665818a4fd8-hm-ports\") pod \"octavia-healthmanager-r4jnw\" (UID: \"e5bf7530-6fe3-4a93-b44a-0665818a4fd8\") " pod="openstack/octavia-healthmanager-r4jnw" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.570820 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e5bf7530-6fe3-4a93-b44a-0665818a4fd8-amphora-certs\") pod \"octavia-healthmanager-r4jnw\" (UID: \"e5bf7530-6fe3-4a93-b44a-0665818a4fd8\") " pod="openstack/octavia-healthmanager-r4jnw" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.570964 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5bf7530-6fe3-4a93-b44a-0665818a4fd8-combined-ca-bundle\") pod \"octavia-healthmanager-r4jnw\" (UID: \"e5bf7530-6fe3-4a93-b44a-0665818a4fd8\") " pod="openstack/octavia-healthmanager-r4jnw" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.571243 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/e5bf7530-6fe3-4a93-b44a-0665818a4fd8-scripts\") pod \"octavia-healthmanager-r4jnw\" (UID: \"e5bf7530-6fe3-4a93-b44a-0665818a4fd8\") " pod="openstack/octavia-healthmanager-r4jnw" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.583063 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5bf7530-6fe3-4a93-b44a-0665818a4fd8-config-data\") pod \"octavia-healthmanager-r4jnw\" (UID: \"e5bf7530-6fe3-4a93-b44a-0665818a4fd8\") " pod="openstack/octavia-healthmanager-r4jnw" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.676668 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-r4jnw" Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.943896 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-5955f5554b-97jm5"] Nov 21 11:21:55 crc kubenswrapper[4972]: I1121 11:21:55.944564 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-5955f5554b-97jm5" podUID="2571e5d7-8e15-4ec7-8105-248845410a4d" containerName="octavia-amphora-httpd" containerID="cri-o://1691e33b5df29bfa877e8a7933e92a8cc43b58696406ff1c0830369bcf3a9770" gracePeriod=30 Nov 21 11:21:56 crc kubenswrapper[4972]: I1121 11:21:56.360276 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-r4jnw"] Nov 21 11:21:56 crc kubenswrapper[4972]: I1121 11:21:56.468624 4972 generic.go:334] "Generic (PLEG): container finished" podID="2571e5d7-8e15-4ec7-8105-248845410a4d" containerID="1691e33b5df29bfa877e8a7933e92a8cc43b58696406ff1c0830369bcf3a9770" exitCode=0 Nov 21 11:21:56 crc kubenswrapper[4972]: I1121 11:21:56.468704 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-5955f5554b-97jm5" event={"ID":"2571e5d7-8e15-4ec7-8105-248845410a4d","Type":"ContainerDied","Data":"1691e33b5df29bfa877e8a7933e92a8cc43b58696406ff1c0830369bcf3a9770"} Nov 21 11:21:56 crc kubenswrapper[4972]: I1121 11:21:56.472997 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-r4jnw" event={"ID":"e5bf7530-6fe3-4a93-b44a-0665818a4fd8","Type":"ContainerStarted","Data":"0a4f38eb46f6623dc63e754ff450e8a0bbaa3704dd4df59592d46c914d5ba974"} Nov 21 11:21:56 crc kubenswrapper[4972]: I1121 11:21:56.487912 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-5955f5554b-97jm5" Nov 21 11:21:56 crc kubenswrapper[4972]: I1121 11:21:56.603763 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/2571e5d7-8e15-4ec7-8105-248845410a4d-amphora-image\") pod \"2571e5d7-8e15-4ec7-8105-248845410a4d\" (UID: \"2571e5d7-8e15-4ec7-8105-248845410a4d\") " Nov 21 11:21:56 crc kubenswrapper[4972]: I1121 11:21:56.603930 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2571e5d7-8e15-4ec7-8105-248845410a4d-httpd-config\") pod \"2571e5d7-8e15-4ec7-8105-248845410a4d\" (UID: \"2571e5d7-8e15-4ec7-8105-248845410a4d\") " Nov 21 11:21:56 crc kubenswrapper[4972]: I1121 11:21:56.633581 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2571e5d7-8e15-4ec7-8105-248845410a4d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2571e5d7-8e15-4ec7-8105-248845410a4d" (UID: "2571e5d7-8e15-4ec7-8105-248845410a4d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:21:56 crc kubenswrapper[4972]: I1121 11:21:56.687400 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2571e5d7-8e15-4ec7-8105-248845410a4d-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "2571e5d7-8e15-4ec7-8105-248845410a4d" (UID: "2571e5d7-8e15-4ec7-8105-248845410a4d"). InnerVolumeSpecName "amphora-image". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:21:56 crc kubenswrapper[4972]: I1121 11:21:56.706469 4972 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/2571e5d7-8e15-4ec7-8105-248845410a4d-amphora-image\") on node \"crc\" DevicePath \"\"" Nov 21 11:21:56 crc kubenswrapper[4972]: I1121 11:21:56.706512 4972 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2571e5d7-8e15-4ec7-8105-248845410a4d-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.121150 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-pk5n9"] Nov 21 11:21:57 crc kubenswrapper[4972]: E1121 11:21:57.121577 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2571e5d7-8e15-4ec7-8105-248845410a4d" containerName="octavia-amphora-httpd" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.121596 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="2571e5d7-8e15-4ec7-8105-248845410a4d" containerName="octavia-amphora-httpd" Nov 21 11:21:57 crc kubenswrapper[4972]: E1121 11:21:57.121612 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2571e5d7-8e15-4ec7-8105-248845410a4d" containerName="init" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.121618 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="2571e5d7-8e15-4ec7-8105-248845410a4d" containerName="init" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.121819 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="2571e5d7-8e15-4ec7-8105-248845410a4d" containerName="octavia-amphora-httpd" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.123176 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.125498 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.128167 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.139659 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-pk5n9"] Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.217393 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/26a73c9b-7334-4441-9eab-090821afdf46-hm-ports\") pod \"octavia-housekeeping-pk5n9\" (UID: \"26a73c9b-7334-4441-9eab-090821afdf46\") " pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.217441 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26a73c9b-7334-4441-9eab-090821afdf46-combined-ca-bundle\") pod \"octavia-housekeeping-pk5n9\" (UID: \"26a73c9b-7334-4441-9eab-090821afdf46\") " pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.217472 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/26a73c9b-7334-4441-9eab-090821afdf46-amphora-certs\") pod \"octavia-housekeeping-pk5n9\" (UID: \"26a73c9b-7334-4441-9eab-090821afdf46\") " pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.217772 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26a73c9b-7334-4441-9eab-090821afdf46-scripts\") pod \"octavia-housekeeping-pk5n9\" (UID: \"26a73c9b-7334-4441-9eab-090821afdf46\") " pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.217947 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/26a73c9b-7334-4441-9eab-090821afdf46-config-data-merged\") pod \"octavia-housekeeping-pk5n9\" (UID: \"26a73c9b-7334-4441-9eab-090821afdf46\") " pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.218065 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26a73c9b-7334-4441-9eab-090821afdf46-config-data\") pod \"octavia-housekeeping-pk5n9\" (UID: \"26a73c9b-7334-4441-9eab-090821afdf46\") " pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.320221 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26a73c9b-7334-4441-9eab-090821afdf46-scripts\") pod \"octavia-housekeeping-pk5n9\" (UID: \"26a73c9b-7334-4441-9eab-090821afdf46\") " pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.320702 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/26a73c9b-7334-4441-9eab-090821afdf46-config-data-merged\") pod \"octavia-housekeeping-pk5n9\" (UID: \"26a73c9b-7334-4441-9eab-090821afdf46\") " pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.320974 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26a73c9b-7334-4441-9eab-090821afdf46-config-data\") pod \"octavia-housekeeping-pk5n9\" (UID: \"26a73c9b-7334-4441-9eab-090821afdf46\") " pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.321153 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/26a73c9b-7334-4441-9eab-090821afdf46-hm-ports\") pod \"octavia-housekeeping-pk5n9\" (UID: \"26a73c9b-7334-4441-9eab-090821afdf46\") " pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.321201 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26a73c9b-7334-4441-9eab-090821afdf46-combined-ca-bundle\") pod \"octavia-housekeeping-pk5n9\" (UID: \"26a73c9b-7334-4441-9eab-090821afdf46\") " pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.321237 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/26a73c9b-7334-4441-9eab-090821afdf46-amphora-certs\") pod \"octavia-housekeeping-pk5n9\" (UID: \"26a73c9b-7334-4441-9eab-090821afdf46\") " pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.321344 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/26a73c9b-7334-4441-9eab-090821afdf46-config-data-merged\") pod \"octavia-housekeeping-pk5n9\" (UID: \"26a73c9b-7334-4441-9eab-090821afdf46\") " pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.322283 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/26a73c9b-7334-4441-9eab-090821afdf46-hm-ports\") pod \"octavia-housekeeping-pk5n9\" (UID: \"26a73c9b-7334-4441-9eab-090821afdf46\") " pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.328303 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26a73c9b-7334-4441-9eab-090821afdf46-config-data\") pod \"octavia-housekeeping-pk5n9\" (UID: \"26a73c9b-7334-4441-9eab-090821afdf46\") " pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.329066 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26a73c9b-7334-4441-9eab-090821afdf46-scripts\") pod \"octavia-housekeeping-pk5n9\" (UID: \"26a73c9b-7334-4441-9eab-090821afdf46\") " pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.329187 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26a73c9b-7334-4441-9eab-090821afdf46-combined-ca-bundle\") pod \"octavia-housekeeping-pk5n9\" (UID: \"26a73c9b-7334-4441-9eab-090821afdf46\") " 
pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.330472 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/26a73c9b-7334-4441-9eab-090821afdf46-amphora-certs\") pod \"octavia-housekeeping-pk5n9\" (UID: \"26a73c9b-7334-4441-9eab-090821afdf46\") " pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.447210 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.486086 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-r4jnw" event={"ID":"e5bf7530-6fe3-4a93-b44a-0665818a4fd8","Type":"ContainerStarted","Data":"dcc3435d219e520a5d1a14dcc7940fd0f9c086f4595f9ca6d73e2b44900216ca"} Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.490120 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-5955f5554b-97jm5" event={"ID":"2571e5d7-8e15-4ec7-8105-248845410a4d","Type":"ContainerDied","Data":"2ff873b7bf56a29f145325352044e7e7acfc4f296424d805a722f2f17cf4ca94"} Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.490158 4972 scope.go:117] "RemoveContainer" containerID="1691e33b5df29bfa877e8a7933e92a8cc43b58696406ff1c0830369bcf3a9770" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.490268 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-5955f5554b-97jm5" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.532374 4972 scope.go:117] "RemoveContainer" containerID="40c29606850d93cf746170ab0f58c3ee880b5c52082e2194c9775478376476ed" Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.534408 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-5955f5554b-97jm5"] Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.545075 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-5955f5554b-97jm5"] Nov 21 11:21:57 crc kubenswrapper[4972]: I1121 11:21:57.775713 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2571e5d7-8e15-4ec7-8105-248845410a4d" path="/var/lib/kubelet/pods/2571e5d7-8e15-4ec7-8105-248845410a4d/volumes" Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.048136 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-pk5n9"] Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.294870 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-g7k56"] Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.308393 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-g7k56" Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.316007 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.316345 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.318679 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-g7k56"] Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.447329 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/98c40da9-47c4-449c-82c0-f09f09f7006b-hm-ports\") pod \"octavia-worker-g7k56\" (UID: \"98c40da9-47c4-449c-82c0-f09f09f7006b\") " pod="openstack/octavia-worker-g7k56" Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.447913 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98c40da9-47c4-449c-82c0-f09f09f7006b-scripts\") pod \"octavia-worker-g7k56\" (UID: \"98c40da9-47c4-449c-82c0-f09f09f7006b\") " pod="openstack/octavia-worker-g7k56" Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.448060 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/98c40da9-47c4-449c-82c0-f09f09f7006b-config-data-merged\") pod \"octavia-worker-g7k56\" (UID: \"98c40da9-47c4-449c-82c0-f09f09f7006b\") " pod="openstack/octavia-worker-g7k56" Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.448179 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98c40da9-47c4-449c-82c0-f09f09f7006b-combined-ca-bundle\") pod \"octavia-worker-g7k56\" (UID: \"98c40da9-47c4-449c-82c0-f09f09f7006b\") " pod="openstack/octavia-worker-g7k56" Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.448278 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/98c40da9-47c4-449c-82c0-f09f09f7006b-amphora-certs\") pod \"octavia-worker-g7k56\" (UID: \"98c40da9-47c4-449c-82c0-f09f09f7006b\") " pod="openstack/octavia-worker-g7k56" Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.448487 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98c40da9-47c4-449c-82c0-f09f09f7006b-config-data\") pod \"octavia-worker-g7k56\" (UID: \"98c40da9-47c4-449c-82c0-f09f09f7006b\") " pod="openstack/octavia-worker-g7k56" Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.501934 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-pk5n9" event={"ID":"26a73c9b-7334-4441-9eab-090821afdf46","Type":"ContainerStarted","Data":"c15f1ba5d5999cfbc129e7aee16e79588a012a62ed2afa1429447f7a3782fad7"} Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.550662 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98c40da9-47c4-449c-82c0-f09f09f7006b-config-data\") pod \"octavia-worker-g7k56\" (UID: \"98c40da9-47c4-449c-82c0-f09f09f7006b\") " pod="openstack/octavia-worker-g7k56" 
Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.550861 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/98c40da9-47c4-449c-82c0-f09f09f7006b-hm-ports\") pod \"octavia-worker-g7k56\" (UID: \"98c40da9-47c4-449c-82c0-f09f09f7006b\") " pod="openstack/octavia-worker-g7k56" Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.550932 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98c40da9-47c4-449c-82c0-f09f09f7006b-scripts\") pod \"octavia-worker-g7k56\" (UID: \"98c40da9-47c4-449c-82c0-f09f09f7006b\") " pod="openstack/octavia-worker-g7k56" Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.551009 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/98c40da9-47c4-449c-82c0-f09f09f7006b-config-data-merged\") pod \"octavia-worker-g7k56\" (UID: \"98c40da9-47c4-449c-82c0-f09f09f7006b\") " pod="openstack/octavia-worker-g7k56" Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.551096 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98c40da9-47c4-449c-82c0-f09f09f7006b-combined-ca-bundle\") pod \"octavia-worker-g7k56\" (UID: \"98c40da9-47c4-449c-82c0-f09f09f7006b\") " pod="openstack/octavia-worker-g7k56" Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.551147 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/98c40da9-47c4-449c-82c0-f09f09f7006b-amphora-certs\") pod \"octavia-worker-g7k56\" (UID: \"98c40da9-47c4-449c-82c0-f09f09f7006b\") " pod="openstack/octavia-worker-g7k56" Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.552138 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/98c40da9-47c4-449c-82c0-f09f09f7006b-config-data-merged\") pod \"octavia-worker-g7k56\" (UID: \"98c40da9-47c4-449c-82c0-f09f09f7006b\") " pod="openstack/octavia-worker-g7k56" Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.552463 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/98c40da9-47c4-449c-82c0-f09f09f7006b-hm-ports\") pod \"octavia-worker-g7k56\" (UID: \"98c40da9-47c4-449c-82c0-f09f09f7006b\") " pod="openstack/octavia-worker-g7k56" Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.562648 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/98c40da9-47c4-449c-82c0-f09f09f7006b-amphora-certs\") pod \"octavia-worker-g7k56\" (UID: \"98c40da9-47c4-449c-82c0-f09f09f7006b\") " pod="openstack/octavia-worker-g7k56" Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.563570 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98c40da9-47c4-449c-82c0-f09f09f7006b-combined-ca-bundle\") pod \"octavia-worker-g7k56\" (UID: \"98c40da9-47c4-449c-82c0-f09f09f7006b\") " pod="openstack/octavia-worker-g7k56" Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.566527 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98c40da9-47c4-449c-82c0-f09f09f7006b-config-data\") pod 
\"octavia-worker-g7k56\" (UID: \"98c40da9-47c4-449c-82c0-f09f09f7006b\") " pod="openstack/octavia-worker-g7k56" Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.567440 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98c40da9-47c4-449c-82c0-f09f09f7006b-scripts\") pod \"octavia-worker-g7k56\" (UID: \"98c40da9-47c4-449c-82c0-f09f09f7006b\") " pod="openstack/octavia-worker-g7k56" Nov 21 11:21:58 crc kubenswrapper[4972]: I1121 11:21:58.635911 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-g7k56" Nov 21 11:21:59 crc kubenswrapper[4972]: I1121 11:21:59.219022 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-g7k56"] Nov 21 11:21:59 crc kubenswrapper[4972]: W1121 11:21:59.233406 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98c40da9_47c4_449c_82c0_f09f09f7006b.slice/crio-68912bbabca6c5d6631086d208c7d0f8259cf737fd369254789195123eac7a7a WatchSource:0}: Error finding container 68912bbabca6c5d6631086d208c7d0f8259cf737fd369254789195123eac7a7a: Status 404 returned error can't find the container with id 68912bbabca6c5d6631086d208c7d0f8259cf737fd369254789195123eac7a7a Nov 21 11:21:59 crc kubenswrapper[4972]: I1121 11:21:59.516600 4972 generic.go:334] "Generic (PLEG): container finished" podID="e5bf7530-6fe3-4a93-b44a-0665818a4fd8" containerID="dcc3435d219e520a5d1a14dcc7940fd0f9c086f4595f9ca6d73e2b44900216ca" exitCode=0 Nov 21 11:21:59 crc kubenswrapper[4972]: I1121 11:21:59.516682 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-r4jnw" event={"ID":"e5bf7530-6fe3-4a93-b44a-0665818a4fd8","Type":"ContainerDied","Data":"dcc3435d219e520a5d1a14dcc7940fd0f9c086f4595f9ca6d73e2b44900216ca"} Nov 21 11:21:59 crc kubenswrapper[4972]: I1121 11:21:59.519155 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-g7k56" event={"ID":"98c40da9-47c4-449c-82c0-f09f09f7006b","Type":"ContainerStarted","Data":"68912bbabca6c5d6631086d208c7d0f8259cf737fd369254789195123eac7a7a"} Nov 21 11:22:00 crc kubenswrapper[4972]: I1121 11:22:00.543688 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-r4jnw" event={"ID":"e5bf7530-6fe3-4a93-b44a-0665818a4fd8","Type":"ContainerStarted","Data":"0479ce1b9d61bb99a9f48ae98bc26596cef93d0370819235dd5cee61a157c593"} Nov 21 11:22:00 crc kubenswrapper[4972]: I1121 11:22:00.545449 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-r4jnw" Nov 21 11:22:00 crc kubenswrapper[4972]: I1121 11:22:00.549728 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-pk5n9" event={"ID":"26a73c9b-7334-4441-9eab-090821afdf46","Type":"ContainerStarted","Data":"83da1bf75c70d180e50525f27a41a2dcf349ef969402f769fb635cdc6b752e7c"} Nov 21 11:22:00 crc kubenswrapper[4972]: I1121 11:22:00.573689 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-r4jnw" podStartSLOduration=5.573667944 podStartE2EDuration="5.573667944s" podCreationTimestamp="2025-11-21 11:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:22:00.561196194 +0000 UTC m=+6065.670338712" watchObservedRunningTime="2025-11-21 11:22:00.573667944 
+0000 UTC m=+6065.682810442" Nov 21 11:22:01 crc kubenswrapper[4972]: I1121 11:22:01.562704 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-g7k56" event={"ID":"98c40da9-47c4-449c-82c0-f09f09f7006b","Type":"ContainerStarted","Data":"0a1fc2f5c0781948f7c35240ebb58f93ec6e46e28da480e4d753bf1a2d6f777f"} Nov 21 11:22:01 crc kubenswrapper[4972]: I1121 11:22:01.565163 4972 generic.go:334] "Generic (PLEG): container finished" podID="26a73c9b-7334-4441-9eab-090821afdf46" containerID="83da1bf75c70d180e50525f27a41a2dcf349ef969402f769fb635cdc6b752e7c" exitCode=0 Nov 21 11:22:01 crc kubenswrapper[4972]: I1121 11:22:01.566899 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-pk5n9" event={"ID":"26a73c9b-7334-4441-9eab-090821afdf46","Type":"ContainerDied","Data":"83da1bf75c70d180e50525f27a41a2dcf349ef969402f769fb635cdc6b752e7c"} Nov 21 11:22:02 crc kubenswrapper[4972]: I1121 11:22:02.584091 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-pk5n9" event={"ID":"26a73c9b-7334-4441-9eab-090821afdf46","Type":"ContainerStarted","Data":"9468dd948cadc70677a3a0797abbfb806f5738007935da030613877edcac4a9e"} Nov 21 11:22:03 crc kubenswrapper[4972]: I1121 11:22:03.594681 4972 generic.go:334] "Generic (PLEG): container finished" podID="98c40da9-47c4-449c-82c0-f09f09f7006b" containerID="0a1fc2f5c0781948f7c35240ebb58f93ec6e46e28da480e4d753bf1a2d6f777f" exitCode=0 Nov 21 11:22:03 crc kubenswrapper[4972]: I1121 11:22:03.594904 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-g7k56" event={"ID":"98c40da9-47c4-449c-82c0-f09f09f7006b","Type":"ContainerDied","Data":"0a1fc2f5c0781948f7c35240ebb58f93ec6e46e28da480e4d753bf1a2d6f777f"} Nov 21 11:22:03 crc kubenswrapper[4972]: I1121 11:22:03.595615 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:22:03 crc kubenswrapper[4972]: I1121 11:22:03.638277 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-pk5n9" podStartSLOduration=4.87145812 podStartE2EDuration="6.638249526s" podCreationTimestamp="2025-11-21 11:21:57 +0000 UTC" firstStartedPulling="2025-11-21 11:21:58.051719161 +0000 UTC m=+6063.160861699" lastFinishedPulling="2025-11-21 11:21:59.818510607 +0000 UTC m=+6064.927653105" observedRunningTime="2025-11-21 11:22:03.634821625 +0000 UTC m=+6068.743964123" watchObservedRunningTime="2025-11-21 11:22:03.638249526 +0000 UTC m=+6068.747392024" Nov 21 11:22:05 crc kubenswrapper[4972]: I1121 11:22:05.617133 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-g7k56" event={"ID":"98c40da9-47c4-449c-82c0-f09f09f7006b","Type":"ContainerStarted","Data":"89ff6ba72a6f346c445bc2b80dc82a4bab2676704f77e6f245c13cf0ffd7ac2a"} Nov 21 11:22:05 crc kubenswrapper[4972]: I1121 11:22:05.619065 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-g7k56" Nov 21 11:22:05 crc kubenswrapper[4972]: I1121 11:22:05.641120 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-g7k56" podStartSLOduration=6.22076445 podStartE2EDuration="7.641096312s" podCreationTimestamp="2025-11-21 11:21:58 +0000 UTC" firstStartedPulling="2025-11-21 11:21:59.235392255 +0000 UTC m=+6064.344534753" lastFinishedPulling="2025-11-21 11:22:00.655724107 +0000 UTC m=+6065.764866615" observedRunningTime="2025-11-21 11:22:05.639587002 
+0000 UTC m=+6070.748729540" watchObservedRunningTime="2025-11-21 11:22:05.641096312 +0000 UTC m=+6070.750238810" Nov 21 11:22:10 crc kubenswrapper[4972]: I1121 11:22:10.708682 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-r4jnw" Nov 21 11:22:12 crc kubenswrapper[4972]: I1121 11:22:12.501242 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-pk5n9" Nov 21 11:22:13 crc kubenswrapper[4972]: I1121 11:22:13.681952 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-g7k56" Nov 21 11:22:50 crc kubenswrapper[4972]: I1121 11:22:50.072271 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-z69mt"] Nov 21 11:22:50 crc kubenswrapper[4972]: I1121 11:22:50.083554 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8b15-account-create-qgzk8"] Nov 21 11:22:50 crc kubenswrapper[4972]: I1121 11:22:50.093288 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-z69mt"] Nov 21 11:22:50 crc kubenswrapper[4972]: I1121 11:22:50.102260 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8b15-account-create-qgzk8"] Nov 21 11:22:51 crc kubenswrapper[4972]: I1121 11:22:51.771158 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e03bc85-e0b6-45ad-86a0-27a49e0cfe17" path="/var/lib/kubelet/pods/1e03bc85-e0b6-45ad-86a0-27a49e0cfe17/volumes" Nov 21 11:22:51 crc kubenswrapper[4972]: I1121 11:22:51.772379 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed42ffd0-2593-4c1f-a442-4b0d0c607c93" path="/var/lib/kubelet/pods/ed42ffd0-2593-4c1f-a442-4b0d0c607c93/volumes" Nov 21 11:22:56 crc kubenswrapper[4972]: I1121 11:22:56.034106 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-xgq7v"] Nov 21 11:22:56 crc kubenswrapper[4972]: I1121 11:22:56.056353 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-xgq7v"] Nov 21 11:22:57 crc kubenswrapper[4972]: I1121 11:22:57.771567 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="192af3fd-3fa6-43b6-ac79-7e613ff1845d" path="/var/lib/kubelet/pods/192af3fd-3fa6-43b6-ac79-7e613ff1845d/volumes" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.206932 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-598b74d6b9-77pkc"] Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.208896 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-598b74d6b9-77pkc" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.215705 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.216049 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.216167 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-jphwm" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.216264 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.228268 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-598b74d6b9-77pkc"] Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.280440 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.280674 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d92abb74-0a12-4643-ab97-5239d575301f" containerName="glance-log" containerID="cri-o://7595a9c724ea3ee3e320c47c0f60a0e2082a6b01ceb11c8c9aa16ff1d1a9014b" gracePeriod=30 Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.280799 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d92abb74-0a12-4643-ab97-5239d575301f" containerName="glance-httpd" containerID="cri-o://b745ce440a14b8d70fb298d8ab2a5a3de281c6c1b8877fc0b2aa5190fbd56b96" gracePeriod=30 Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.302160 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-logs\") pod \"horizon-598b74d6b9-77pkc\" (UID: \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\") " pod="openstack/horizon-598b74d6b9-77pkc" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.302246 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpmdg\" (UniqueName: \"kubernetes.io/projected/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-kube-api-access-jpmdg\") pod \"horizon-598b74d6b9-77pkc\" (UID: \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\") " pod="openstack/horizon-598b74d6b9-77pkc" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.302306 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-horizon-secret-key\") pod \"horizon-598b74d6b9-77pkc\" (UID: \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\") " pod="openstack/horizon-598b74d6b9-77pkc" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.302343 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-scripts\") pod \"horizon-598b74d6b9-77pkc\" (UID: \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\") " pod="openstack/horizon-598b74d6b9-77pkc" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.302360 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-config-data\") pod \"horizon-598b74d6b9-77pkc\" (UID: \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\") " pod="openstack/horizon-598b74d6b9-77pkc" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.323156 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-794f64f8c9-96s66"] Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.326218 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.339283 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-794f64f8c9-96s66"] Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.378791 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.379069 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="65c73566-904d-4e66-a7c3-5ee16b691565" containerName="glance-log" containerID="cri-o://fbfd982a646d2f786bb4dcad7b0ec57f6ba7572f48df02ef170c278fc4b4c6e8" gracePeriod=30 Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.379215 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="65c73566-904d-4e66-a7c3-5ee16b691565" containerName="glance-httpd" containerID="cri-o://990c8ed0bfa18e664e8ba651aea29c2da75a272f2da09bb1ba15e140fb0bc6ec" gracePeriod=30 Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.404436 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-horizon-secret-key\") pod \"horizon-598b74d6b9-77pkc\" (UID: \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\") " pod="openstack/horizon-598b74d6b9-77pkc" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.404490 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6rf8\" (UniqueName: \"kubernetes.io/projected/2c7b1503-7053-4ebc-b7d8-e510d25ea939-kube-api-access-v6rf8\") pod \"horizon-794f64f8c9-96s66\" (UID: \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\") " pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.404526 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c7b1503-7053-4ebc-b7d8-e510d25ea939-scripts\") pod \"horizon-794f64f8c9-96s66\" (UID: \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\") " pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.404559 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-scripts\") pod \"horizon-598b74d6b9-77pkc\" (UID: \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\") " pod="openstack/horizon-598b74d6b9-77pkc" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.404576 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-config-data\") pod \"horizon-598b74d6b9-77pkc\" (UID: \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\") " pod="openstack/horizon-598b74d6b9-77pkc" Nov 21 11:22:59 crc kubenswrapper[4972]: 
I1121 11:22:59.405052 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2c7b1503-7053-4ebc-b7d8-e510d25ea939-config-data\") pod \"horizon-794f64f8c9-96s66\" (UID: \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\") " pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.405138 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-logs\") pod \"horizon-598b74d6b9-77pkc\" (UID: \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\") " pod="openstack/horizon-598b74d6b9-77pkc" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.405476 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c7b1503-7053-4ebc-b7d8-e510d25ea939-logs\") pod \"horizon-794f64f8c9-96s66\" (UID: \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\") " pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.405515 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpmdg\" (UniqueName: \"kubernetes.io/projected/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-kube-api-access-jpmdg\") pod \"horizon-598b74d6b9-77pkc\" (UID: \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\") " pod="openstack/horizon-598b74d6b9-77pkc" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.405604 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2c7b1503-7053-4ebc-b7d8-e510d25ea939-horizon-secret-key\") pod \"horizon-794f64f8c9-96s66\" (UID: \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\") " pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.405685 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-logs\") pod \"horizon-598b74d6b9-77pkc\" (UID: \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\") " pod="openstack/horizon-598b74d6b9-77pkc" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.406070 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-scripts\") pod \"horizon-598b74d6b9-77pkc\" (UID: \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\") " pod="openstack/horizon-598b74d6b9-77pkc" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.406112 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-config-data\") pod \"horizon-598b74d6b9-77pkc\" (UID: \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\") " pod="openstack/horizon-598b74d6b9-77pkc" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.411967 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-horizon-secret-key\") pod \"horizon-598b74d6b9-77pkc\" (UID: \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\") " pod="openstack/horizon-598b74d6b9-77pkc" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.422346 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpmdg\" (UniqueName: 
\"kubernetes.io/projected/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-kube-api-access-jpmdg\") pod \"horizon-598b74d6b9-77pkc\" (UID: \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\") " pod="openstack/horizon-598b74d6b9-77pkc" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.506582 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2c7b1503-7053-4ebc-b7d8-e510d25ea939-config-data\") pod \"horizon-794f64f8c9-96s66\" (UID: \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\") " pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.506709 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c7b1503-7053-4ebc-b7d8-e510d25ea939-logs\") pod \"horizon-794f64f8c9-96s66\" (UID: \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\") " pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.506745 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2c7b1503-7053-4ebc-b7d8-e510d25ea939-horizon-secret-key\") pod \"horizon-794f64f8c9-96s66\" (UID: \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\") " pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.506814 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6rf8\" (UniqueName: \"kubernetes.io/projected/2c7b1503-7053-4ebc-b7d8-e510d25ea939-kube-api-access-v6rf8\") pod \"horizon-794f64f8c9-96s66\" (UID: \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\") " pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.506860 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c7b1503-7053-4ebc-b7d8-e510d25ea939-scripts\") pod \"horizon-794f64f8c9-96s66\" (UID: \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\") " pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.507873 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2c7b1503-7053-4ebc-b7d8-e510d25ea939-config-data\") pod \"horizon-794f64f8c9-96s66\" (UID: \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\") " pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.507879 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c7b1503-7053-4ebc-b7d8-e510d25ea939-logs\") pod \"horizon-794f64f8c9-96s66\" (UID: \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\") " pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.508058 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c7b1503-7053-4ebc-b7d8-e510d25ea939-scripts\") pod \"horizon-794f64f8c9-96s66\" (UID: \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\") " pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.510853 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2c7b1503-7053-4ebc-b7d8-e510d25ea939-horizon-secret-key\") pod \"horizon-794f64f8c9-96s66\" (UID: \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\") " pod="openstack/horizon-794f64f8c9-96s66" 
Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.521516 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6rf8\" (UniqueName: \"kubernetes.io/projected/2c7b1503-7053-4ebc-b7d8-e510d25ea939-kube-api-access-v6rf8\") pod \"horizon-794f64f8c9-96s66\" (UID: \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\") " pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.542655 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-598b74d6b9-77pkc" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.647857 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.831199 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-598b74d6b9-77pkc"] Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.861332 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5866d78465-8kt6n"] Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.876050 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.891282 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5866d78465-8kt6n"] Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.932062 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/482ac932-3c0a-43e7-8878-9989d75c9e29-config-data\") pod \"horizon-5866d78465-8kt6n\" (UID: \"482ac932-3c0a-43e7-8878-9989d75c9e29\") " pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.932217 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5vxq\" (UniqueName: \"kubernetes.io/projected/482ac932-3c0a-43e7-8878-9989d75c9e29-kube-api-access-k5vxq\") pod \"horizon-5866d78465-8kt6n\" (UID: \"482ac932-3c0a-43e7-8878-9989d75c9e29\") " pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.932422 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/482ac932-3c0a-43e7-8878-9989d75c9e29-scripts\") pod \"horizon-5866d78465-8kt6n\" (UID: \"482ac932-3c0a-43e7-8878-9989d75c9e29\") " pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.932589 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/482ac932-3c0a-43e7-8878-9989d75c9e29-horizon-secret-key\") pod \"horizon-5866d78465-8kt6n\" (UID: \"482ac932-3c0a-43e7-8878-9989d75c9e29\") " pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:22:59 crc kubenswrapper[4972]: I1121 11:22:59.932660 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/482ac932-3c0a-43e7-8878-9989d75c9e29-logs\") pod \"horizon-5866d78465-8kt6n\" (UID: \"482ac932-3c0a-43e7-8878-9989d75c9e29\") " pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:23:00 crc kubenswrapper[4972]: I1121 11:23:00.035014 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/482ac932-3c0a-43e7-8878-9989d75c9e29-scripts\") pod \"horizon-5866d78465-8kt6n\" (UID: \"482ac932-3c0a-43e7-8878-9989d75c9e29\") " pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:23:00 crc kubenswrapper[4972]: I1121 11:23:00.035088 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/482ac932-3c0a-43e7-8878-9989d75c9e29-horizon-secret-key\") pod \"horizon-5866d78465-8kt6n\" (UID: \"482ac932-3c0a-43e7-8878-9989d75c9e29\") " pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:23:00 crc kubenswrapper[4972]: I1121 11:23:00.035124 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/482ac932-3c0a-43e7-8878-9989d75c9e29-logs\") pod \"horizon-5866d78465-8kt6n\" (UID: \"482ac932-3c0a-43e7-8878-9989d75c9e29\") " pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:23:00 crc kubenswrapper[4972]: I1121 11:23:00.035190 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/482ac932-3c0a-43e7-8878-9989d75c9e29-config-data\") pod \"horizon-5866d78465-8kt6n\" (UID: \"482ac932-3c0a-43e7-8878-9989d75c9e29\") " pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:23:00 crc kubenswrapper[4972]: I1121 11:23:00.035227 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5vxq\" (UniqueName: \"kubernetes.io/projected/482ac932-3c0a-43e7-8878-9989d75c9e29-kube-api-access-k5vxq\") pod \"horizon-5866d78465-8kt6n\" (UID: \"482ac932-3c0a-43e7-8878-9989d75c9e29\") " pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:23:00 crc kubenswrapper[4972]: I1121 11:23:00.035924 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/482ac932-3c0a-43e7-8878-9989d75c9e29-scripts\") pod \"horizon-5866d78465-8kt6n\" (UID: \"482ac932-3c0a-43e7-8878-9989d75c9e29\") " pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:23:00 crc kubenswrapper[4972]: I1121 11:23:00.037456 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/482ac932-3c0a-43e7-8878-9989d75c9e29-config-data\") pod \"horizon-5866d78465-8kt6n\" (UID: \"482ac932-3c0a-43e7-8878-9989d75c9e29\") " pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:23:00 crc kubenswrapper[4972]: I1121 11:23:00.037740 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/482ac932-3c0a-43e7-8878-9989d75c9e29-logs\") pod \"horizon-5866d78465-8kt6n\" (UID: \"482ac932-3c0a-43e7-8878-9989d75c9e29\") " pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:23:00 crc kubenswrapper[4972]: I1121 11:23:00.041812 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/482ac932-3c0a-43e7-8878-9989d75c9e29-horizon-secret-key\") pod \"horizon-5866d78465-8kt6n\" (UID: \"482ac932-3c0a-43e7-8878-9989d75c9e29\") " pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:23:00 crc kubenswrapper[4972]: I1121 11:23:00.057351 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5vxq\" (UniqueName: \"kubernetes.io/projected/482ac932-3c0a-43e7-8878-9989d75c9e29-kube-api-access-k5vxq\") pod \"horizon-5866d78465-8kt6n\" (UID: \"482ac932-3c0a-43e7-8878-9989d75c9e29\") " 
pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:23:00 crc kubenswrapper[4972]: W1121 11:23:00.071846 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f0ea51d_29ae_4532_b64b_67c7d26f2cd0.slice/crio-724a3a7d97202e66a368cf86f8254025d8877b696cc57389269aa429f703d261 WatchSource:0}: Error finding container 724a3a7d97202e66a368cf86f8254025d8877b696cc57389269aa429f703d261: Status 404 returned error can't find the container with id 724a3a7d97202e66a368cf86f8254025d8877b696cc57389269aa429f703d261 Nov 21 11:23:00 crc kubenswrapper[4972]: I1121 11:23:00.073895 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 11:23:00 crc kubenswrapper[4972]: I1121 11:23:00.074334 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-598b74d6b9-77pkc"] Nov 21 11:23:00 crc kubenswrapper[4972]: I1121 11:23:00.199528 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:23:00 crc kubenswrapper[4972]: I1121 11:23:00.230507 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-794f64f8c9-96s66"] Nov 21 11:23:00 crc kubenswrapper[4972]: I1121 11:23:00.239111 4972 generic.go:334] "Generic (PLEG): container finished" podID="d92abb74-0a12-4643-ab97-5239d575301f" containerID="7595a9c724ea3ee3e320c47c0f60a0e2082a6b01ceb11c8c9aa16ff1d1a9014b" exitCode=143 Nov 21 11:23:00 crc kubenswrapper[4972]: I1121 11:23:00.239177 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d92abb74-0a12-4643-ab97-5239d575301f","Type":"ContainerDied","Data":"7595a9c724ea3ee3e320c47c0f60a0e2082a6b01ceb11c8c9aa16ff1d1a9014b"} Nov 21 11:23:00 crc kubenswrapper[4972]: I1121 11:23:00.241337 4972 generic.go:334] "Generic (PLEG): container finished" podID="65c73566-904d-4e66-a7c3-5ee16b691565" containerID="fbfd982a646d2f786bb4dcad7b0ec57f6ba7572f48df02ef170c278fc4b4c6e8" exitCode=143 Nov 21 11:23:00 crc kubenswrapper[4972]: I1121 11:23:00.241367 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"65c73566-904d-4e66-a7c3-5ee16b691565","Type":"ContainerDied","Data":"fbfd982a646d2f786bb4dcad7b0ec57f6ba7572f48df02ef170c278fc4b4c6e8"} Nov 21 11:23:00 crc kubenswrapper[4972]: I1121 11:23:00.242474 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-598b74d6b9-77pkc" event={"ID":"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0","Type":"ContainerStarted","Data":"724a3a7d97202e66a368cf86f8254025d8877b696cc57389269aa429f703d261"} Nov 21 11:23:00 crc kubenswrapper[4972]: I1121 11:23:00.714102 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5866d78465-8kt6n"] Nov 21 11:23:00 crc kubenswrapper[4972]: W1121 11:23:00.723263 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod482ac932_3c0a_43e7_8878_9989d75c9e29.slice/crio-ce331d3beae4bff595d17a5100511659559d6933c7a0552d90b70a010f07bad3 WatchSource:0}: Error finding container ce331d3beae4bff595d17a5100511659559d6933c7a0552d90b70a010f07bad3: Status 404 returned error can't find the container with id ce331d3beae4bff595d17a5100511659559d6933c7a0552d90b70a010f07bad3 Nov 21 11:23:01 crc kubenswrapper[4972]: I1121 11:23:01.252318 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-794f64f8c9-96s66" event={"ID":"2c7b1503-7053-4ebc-b7d8-e510d25ea939","Type":"ContainerStarted","Data":"477c055fcb594fd72236783656829bb2e1793f69bb7914b721adb03358e199c1"} Nov 21 11:23:01 crc kubenswrapper[4972]: I1121 11:23:01.256033 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5866d78465-8kt6n" event={"ID":"482ac932-3c0a-43e7-8878-9989d75c9e29","Type":"ContainerStarted","Data":"ce331d3beae4bff595d17a5100511659559d6933c7a0552d90b70a010f07bad3"} Nov 21 11:23:02 crc kubenswrapper[4972]: I1121 11:23:02.539287 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="65c73566-904d-4e66-a7c3-5ee16b691565" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.1.41:9292/healthcheck\": read tcp 10.217.0.2:47704->10.217.1.41:9292: read: connection reset by peer" Nov 21 11:23:02 crc kubenswrapper[4972]: I1121 11:23:02.539288 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="65c73566-904d-4e66-a7c3-5ee16b691565" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.1.41:9292/healthcheck\": read tcp 10.217.0.2:47694->10.217.1.41:9292: read: connection reset by peer" Nov 21 11:23:03 crc kubenswrapper[4972]: I1121 11:23:03.282067 4972 generic.go:334] "Generic (PLEG): container finished" podID="d92abb74-0a12-4643-ab97-5239d575301f" containerID="b745ce440a14b8d70fb298d8ab2a5a3de281c6c1b8877fc0b2aa5190fbd56b96" exitCode=0 Nov 21 11:23:03 crc kubenswrapper[4972]: I1121 11:23:03.282116 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d92abb74-0a12-4643-ab97-5239d575301f","Type":"ContainerDied","Data":"b745ce440a14b8d70fb298d8ab2a5a3de281c6c1b8877fc0b2aa5190fbd56b96"} Nov 21 11:23:03 crc kubenswrapper[4972]: I1121 11:23:03.286053 4972 generic.go:334] "Generic (PLEG): container finished" podID="65c73566-904d-4e66-a7c3-5ee16b691565" containerID="990c8ed0bfa18e664e8ba651aea29c2da75a272f2da09bb1ba15e140fb0bc6ec" exitCode=0 Nov 21 11:23:03 crc kubenswrapper[4972]: I1121 11:23:03.286095 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"65c73566-904d-4e66-a7c3-5ee16b691565","Type":"ContainerDied","Data":"990c8ed0bfa18e664e8ba651aea29c2da75a272f2da09bb1ba15e140fb0bc6ec"} Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.386960 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.523216 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf6bw\" (UniqueName: \"kubernetes.io/projected/65c73566-904d-4e66-a7c3-5ee16b691565-kube-api-access-jf6bw\") pod \"65c73566-904d-4e66-a7c3-5ee16b691565\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.523656 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65c73566-904d-4e66-a7c3-5ee16b691565-logs\") pod \"65c73566-904d-4e66-a7c3-5ee16b691565\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.523745 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65c73566-904d-4e66-a7c3-5ee16b691565-scripts\") pod \"65c73566-904d-4e66-a7c3-5ee16b691565\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.523767 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/65c73566-904d-4e66-a7c3-5ee16b691565-ceph\") pod \"65c73566-904d-4e66-a7c3-5ee16b691565\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.523832 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65c73566-904d-4e66-a7c3-5ee16b691565-config-data\") pod \"65c73566-904d-4e66-a7c3-5ee16b691565\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.523884 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c73566-904d-4e66-a7c3-5ee16b691565-combined-ca-bundle\") pod \"65c73566-904d-4e66-a7c3-5ee16b691565\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.523975 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/65c73566-904d-4e66-a7c3-5ee16b691565-httpd-run\") pod \"65c73566-904d-4e66-a7c3-5ee16b691565\" (UID: \"65c73566-904d-4e66-a7c3-5ee16b691565\") " Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.524620 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65c73566-904d-4e66-a7c3-5ee16b691565-logs" (OuterVolumeSpecName: "logs") pod "65c73566-904d-4e66-a7c3-5ee16b691565" (UID: "65c73566-904d-4e66-a7c3-5ee16b691565"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.524639 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65c73566-904d-4e66-a7c3-5ee16b691565-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "65c73566-904d-4e66-a7c3-5ee16b691565" (UID: "65c73566-904d-4e66-a7c3-5ee16b691565"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.529386 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65c73566-904d-4e66-a7c3-5ee16b691565-ceph" (OuterVolumeSpecName: "ceph") pod "65c73566-904d-4e66-a7c3-5ee16b691565" (UID: "65c73566-904d-4e66-a7c3-5ee16b691565"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.532128 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c73566-904d-4e66-a7c3-5ee16b691565-scripts" (OuterVolumeSpecName: "scripts") pod "65c73566-904d-4e66-a7c3-5ee16b691565" (UID: "65c73566-904d-4e66-a7c3-5ee16b691565"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.532701 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65c73566-904d-4e66-a7c3-5ee16b691565-kube-api-access-jf6bw" (OuterVolumeSpecName: "kube-api-access-jf6bw") pod "65c73566-904d-4e66-a7c3-5ee16b691565" (UID: "65c73566-904d-4e66-a7c3-5ee16b691565"). InnerVolumeSpecName "kube-api-access-jf6bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.588090 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c73566-904d-4e66-a7c3-5ee16b691565-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65c73566-904d-4e66-a7c3-5ee16b691565" (UID: "65c73566-904d-4e66-a7c3-5ee16b691565"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.626517 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65c73566-904d-4e66-a7c3-5ee16b691565-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.626803 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/65c73566-904d-4e66-a7c3-5ee16b691565-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.626924 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c73566-904d-4e66-a7c3-5ee16b691565-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.627008 4972 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/65c73566-904d-4e66-a7c3-5ee16b691565-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.627085 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf6bw\" (UniqueName: \"kubernetes.io/projected/65c73566-904d-4e66-a7c3-5ee16b691565-kube-api-access-jf6bw\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.627176 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65c73566-904d-4e66-a7c3-5ee16b691565-logs\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.718326 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/65c73566-904d-4e66-a7c3-5ee16b691565-config-data" (OuterVolumeSpecName: "config-data") pod "65c73566-904d-4e66-a7c3-5ee16b691565" (UID: "65c73566-904d-4e66-a7c3-5ee16b691565"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.730156 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65c73566-904d-4e66-a7c3-5ee16b691565-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:07 crc kubenswrapper[4972]: I1121 11:23:07.978640 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.044060 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d92abb74-0a12-4643-ab97-5239d575301f-httpd-run\") pod \"d92abb74-0a12-4643-ab97-5239d575301f\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.044129 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9ng6\" (UniqueName: \"kubernetes.io/projected/d92abb74-0a12-4643-ab97-5239d575301f-kube-api-access-z9ng6\") pod \"d92abb74-0a12-4643-ab97-5239d575301f\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.044206 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d92abb74-0a12-4643-ab97-5239d575301f-ceph\") pod \"d92abb74-0a12-4643-ab97-5239d575301f\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.044458 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d92abb74-0a12-4643-ab97-5239d575301f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d92abb74-0a12-4643-ab97-5239d575301f" (UID: "d92abb74-0a12-4643-ab97-5239d575301f"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.045049 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d92abb74-0a12-4643-ab97-5239d575301f-logs\") pod \"d92abb74-0a12-4643-ab97-5239d575301f\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.045115 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d92abb74-0a12-4643-ab97-5239d575301f-scripts\") pod \"d92abb74-0a12-4643-ab97-5239d575301f\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.045165 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92abb74-0a12-4643-ab97-5239d575301f-config-data\") pod \"d92abb74-0a12-4643-ab97-5239d575301f\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.045190 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92abb74-0a12-4643-ab97-5239d575301f-combined-ca-bundle\") pod \"d92abb74-0a12-4643-ab97-5239d575301f\" (UID: \"d92abb74-0a12-4643-ab97-5239d575301f\") " Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.045363 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d92abb74-0a12-4643-ab97-5239d575301f-logs" (OuterVolumeSpecName: "logs") pod "d92abb74-0a12-4643-ab97-5239d575301f" (UID: "d92abb74-0a12-4643-ab97-5239d575301f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.045743 4972 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d92abb74-0a12-4643-ab97-5239d575301f-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.045764 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d92abb74-0a12-4643-ab97-5239d575301f-logs\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.050213 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92abb74-0a12-4643-ab97-5239d575301f-scripts" (OuterVolumeSpecName: "scripts") pod "d92abb74-0a12-4643-ab97-5239d575301f" (UID: "d92abb74-0a12-4643-ab97-5239d575301f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.050673 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d92abb74-0a12-4643-ab97-5239d575301f-ceph" (OuterVolumeSpecName: "ceph") pod "d92abb74-0a12-4643-ab97-5239d575301f" (UID: "d92abb74-0a12-4643-ab97-5239d575301f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.062296 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d92abb74-0a12-4643-ab97-5239d575301f-kube-api-access-z9ng6" (OuterVolumeSpecName: "kube-api-access-z9ng6") pod "d92abb74-0a12-4643-ab97-5239d575301f" (UID: "d92abb74-0a12-4643-ab97-5239d575301f"). 
InnerVolumeSpecName "kube-api-access-z9ng6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.079372 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92abb74-0a12-4643-ab97-5239d575301f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d92abb74-0a12-4643-ab97-5239d575301f" (UID: "d92abb74-0a12-4643-ab97-5239d575301f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.108799 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92abb74-0a12-4643-ab97-5239d575301f-config-data" (OuterVolumeSpecName: "config-data") pod "d92abb74-0a12-4643-ab97-5239d575301f" (UID: "d92abb74-0a12-4643-ab97-5239d575301f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.147647 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d92abb74-0a12-4643-ab97-5239d575301f-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.147686 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92abb74-0a12-4643-ab97-5239d575301f-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.147699 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92abb74-0a12-4643-ab97-5239d575301f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.147716 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9ng6\" (UniqueName: \"kubernetes.io/projected/d92abb74-0a12-4643-ab97-5239d575301f-kube-api-access-z9ng6\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.147728 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d92abb74-0a12-4643-ab97-5239d575301f-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.344089 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-794f64f8c9-96s66" event={"ID":"2c7b1503-7053-4ebc-b7d8-e510d25ea939","Type":"ContainerStarted","Data":"ae66ae6140638644985ae0138691d99f39bec7c44166a4c9af1291c65941bd1b"} Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.344422 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-794f64f8c9-96s66" event={"ID":"2c7b1503-7053-4ebc-b7d8-e510d25ea939","Type":"ContainerStarted","Data":"d2ecf321f4b4dd764ec2eb616ae9d8ca1ead486f754d549530ebe394521eff76"} Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.347015 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5866d78465-8kt6n" event={"ID":"482ac932-3c0a-43e7-8878-9989d75c9e29","Type":"ContainerStarted","Data":"a9ebcf389d418c73c3d4d561b2ba3b23235f4d522cd2cacb2f0b990a12f2bfe2"} Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.347118 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5866d78465-8kt6n" event={"ID":"482ac932-3c0a-43e7-8878-9989d75c9e29","Type":"ContainerStarted","Data":"c769dd18f10a20f53e31878690b3ab34f13b6ff6c1b5cef5523d056c6803d8e1"} Nov 21 11:23:08 crc 
kubenswrapper[4972]: I1121 11:23:08.349980 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-598b74d6b9-77pkc" podUID="8f0ea51d-29ae-4532-b64b-67c7d26f2cd0" containerName="horizon-log" containerID="cri-o://7feca0e0abe6aa39112e535ff525bb6ddcc2a6dd363d5402ce1c9a95b986194e" gracePeriod=30 Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.350338 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-598b74d6b9-77pkc" event={"ID":"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0","Type":"ContainerStarted","Data":"6a4eb0d1ce4fb0b2650a784bec023cca5715c57332d22c638d01de231f4b5aea"} Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.350399 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-598b74d6b9-77pkc" event={"ID":"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0","Type":"ContainerStarted","Data":"7feca0e0abe6aa39112e535ff525bb6ddcc2a6dd363d5402ce1c9a95b986194e"} Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.350463 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-598b74d6b9-77pkc" podUID="8f0ea51d-29ae-4532-b64b-67c7d26f2cd0" containerName="horizon" containerID="cri-o://6a4eb0d1ce4fb0b2650a784bec023cca5715c57332d22c638d01de231f4b5aea" gracePeriod=30 Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.352895 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d92abb74-0a12-4643-ab97-5239d575301f","Type":"ContainerDied","Data":"471650c81838c4a31bdcab829ec245c4ca13856b466f917be2a4e3eef161545b"} Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.352931 4972 scope.go:117] "RemoveContainer" containerID="b745ce440a14b8d70fb298d8ab2a5a3de281c6c1b8877fc0b2aa5190fbd56b96" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.353029 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.373579 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"65c73566-904d-4e66-a7c3-5ee16b691565","Type":"ContainerDied","Data":"e1dfee453c48525054057a2380e722091cc4369cb90cebb885b0f43d1c1b5174"} Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.373649 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.382647 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-794f64f8c9-96s66" podStartSLOduration=2.5924389789999998 podStartE2EDuration="9.382623897s" podCreationTimestamp="2025-11-21 11:22:59 +0000 UTC" firstStartedPulling="2025-11-21 11:23:00.231843269 +0000 UTC m=+6125.340985777" lastFinishedPulling="2025-11-21 11:23:07.022028187 +0000 UTC m=+6132.131170695" observedRunningTime="2025-11-21 11:23:08.367517247 +0000 UTC m=+6133.476659755" watchObservedRunningTime="2025-11-21 11:23:08.382623897 +0000 UTC m=+6133.491766415" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.394406 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5866d78465-8kt6n" podStartSLOduration=3.103201964 podStartE2EDuration="9.394388908s" podCreationTimestamp="2025-11-21 11:22:59 +0000 UTC" firstStartedPulling="2025-11-21 11:23:00.728954123 +0000 UTC m=+6125.838096621" lastFinishedPulling="2025-11-21 11:23:07.020141057 +0000 UTC m=+6132.129283565" observedRunningTime="2025-11-21 11:23:08.384062755 +0000 UTC m=+6133.493205253" watchObservedRunningTime="2025-11-21 11:23:08.394388908 +0000 UTC m=+6133.503531406" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.400856 4972 scope.go:117] "RemoveContainer" containerID="7595a9c724ea3ee3e320c47c0f60a0e2082a6b01ceb11c8c9aa16ff1d1a9014b" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.424023 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-598b74d6b9-77pkc" podStartSLOduration=2.449440812 podStartE2EDuration="9.424004223s" podCreationTimestamp="2025-11-21 11:22:59 +0000 UTC" firstStartedPulling="2025-11-21 11:23:00.073690161 +0000 UTC m=+6125.182832659" lastFinishedPulling="2025-11-21 11:23:07.048253562 +0000 UTC m=+6132.157396070" observedRunningTime="2025-11-21 11:23:08.412805496 +0000 UTC m=+6133.521947994" watchObservedRunningTime="2025-11-21 11:23:08.424004223 +0000 UTC m=+6133.533146721" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.434944 4972 scope.go:117] "RemoveContainer" containerID="990c8ed0bfa18e664e8ba651aea29c2da75a272f2da09bb1ba15e140fb0bc6ec" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.443416 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.467247 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.473583 4972 scope.go:117] "RemoveContainer" containerID="fbfd982a646d2f786bb4dcad7b0ec57f6ba7572f48df02ef170c278fc4b4c6e8" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.478109 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.485518 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 11:23:08 crc kubenswrapper[4972]: E1121 11:23:08.485942 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d92abb74-0a12-4643-ab97-5239d575301f" containerName="glance-httpd" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.485957 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="d92abb74-0a12-4643-ab97-5239d575301f" containerName="glance-httpd" Nov 21 11:23:08 crc 
kubenswrapper[4972]: E1121 11:23:08.485979 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d92abb74-0a12-4643-ab97-5239d575301f" containerName="glance-log" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.485986 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="d92abb74-0a12-4643-ab97-5239d575301f" containerName="glance-log" Nov 21 11:23:08 crc kubenswrapper[4972]: E1121 11:23:08.486004 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c73566-904d-4e66-a7c3-5ee16b691565" containerName="glance-log" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.486010 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c73566-904d-4e66-a7c3-5ee16b691565" containerName="glance-log" Nov 21 11:23:08 crc kubenswrapper[4972]: E1121 11:23:08.486023 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c73566-904d-4e66-a7c3-5ee16b691565" containerName="glance-httpd" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.486029 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c73566-904d-4e66-a7c3-5ee16b691565" containerName="glance-httpd" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.486245 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="65c73566-904d-4e66-a7c3-5ee16b691565" containerName="glance-log" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.486267 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="d92abb74-0a12-4643-ab97-5239d575301f" containerName="glance-log" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.486276 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="d92abb74-0a12-4643-ab97-5239d575301f" containerName="glance-httpd" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.486290 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="65c73566-904d-4e66-a7c3-5ee16b691565" containerName="glance-httpd" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.487465 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.489797 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.490159 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mqqnk" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.490617 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.493644 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.508806 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.517815 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.518074 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.521267 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.554940 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71978ab6-d300-431a-8075-07dd3ed1a95a-logs\") pod \"glance-default-internal-api-0\" (UID: \"71978ab6-d300-431a-8075-07dd3ed1a95a\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.555008 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71978ab6-d300-431a-8075-07dd3ed1a95a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"71978ab6-d300-431a-8075-07dd3ed1a95a\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.555036 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/71978ab6-d300-431a-8075-07dd3ed1a95a-ceph\") pod \"glance-default-internal-api-0\" (UID: \"71978ab6-d300-431a-8075-07dd3ed1a95a\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.555092 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71978ab6-d300-431a-8075-07dd3ed1a95a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"71978ab6-d300-431a-8075-07dd3ed1a95a\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.555116 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71978ab6-d300-431a-8075-07dd3ed1a95a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"71978ab6-d300-431a-8075-07dd3ed1a95a\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.555149 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7hch\" (UniqueName: \"kubernetes.io/projected/71978ab6-d300-431a-8075-07dd3ed1a95a-kube-api-access-k7hch\") pod \"glance-default-internal-api-0\" (UID: \"71978ab6-d300-431a-8075-07dd3ed1a95a\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.555173 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71978ab6-d300-431a-8075-07dd3ed1a95a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"71978ab6-d300-431a-8075-07dd3ed1a95a\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.561630 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.656723 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/012b4555-c9ab-48c0-ad99-d3a77d7d3c2b-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b\") " pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.656798 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/012b4555-c9ab-48c0-ad99-d3a77d7d3c2b-logs\") pod \"glance-default-external-api-0\" (UID: \"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b\") " pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.656877 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71978ab6-d300-431a-8075-07dd3ed1a95a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"71978ab6-d300-431a-8075-07dd3ed1a95a\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.658940 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/012b4555-c9ab-48c0-ad99-d3a77d7d3c2b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b\") " pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.658999 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71978ab6-d300-431a-8075-07dd3ed1a95a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"71978ab6-d300-431a-8075-07dd3ed1a95a\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.659414 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7hch\" (UniqueName: \"kubernetes.io/projected/71978ab6-d300-431a-8075-07dd3ed1a95a-kube-api-access-k7hch\") pod \"glance-default-internal-api-0\" (UID: \"71978ab6-d300-431a-8075-07dd3ed1a95a\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.659482 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/012b4555-c9ab-48c0-ad99-d3a77d7d3c2b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b\") " pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.659520 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71978ab6-d300-431a-8075-07dd3ed1a95a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"71978ab6-d300-431a-8075-07dd3ed1a95a\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.659563 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpscq\" (UniqueName: \"kubernetes.io/projected/012b4555-c9ab-48c0-ad99-d3a77d7d3c2b-kube-api-access-fpscq\") pod \"glance-default-external-api-0\" (UID: \"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b\") " pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.659656 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71978ab6-d300-431a-8075-07dd3ed1a95a-logs\") pod \"glance-default-internal-api-0\" 
(UID: \"71978ab6-d300-431a-8075-07dd3ed1a95a\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.659740 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71978ab6-d300-431a-8075-07dd3ed1a95a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"71978ab6-d300-431a-8075-07dd3ed1a95a\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.659786 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/71978ab6-d300-431a-8075-07dd3ed1a95a-ceph\") pod \"glance-default-internal-api-0\" (UID: \"71978ab6-d300-431a-8075-07dd3ed1a95a\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.659857 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/012b4555-c9ab-48c0-ad99-d3a77d7d3c2b-ceph\") pod \"glance-default-external-api-0\" (UID: \"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b\") " pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.659892 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/012b4555-c9ab-48c0-ad99-d3a77d7d3c2b-scripts\") pod \"glance-default-external-api-0\" (UID: \"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b\") " pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.660150 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71978ab6-d300-431a-8075-07dd3ed1a95a-logs\") pod \"glance-default-internal-api-0\" (UID: \"71978ab6-d300-431a-8075-07dd3ed1a95a\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.660362 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71978ab6-d300-431a-8075-07dd3ed1a95a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"71978ab6-d300-431a-8075-07dd3ed1a95a\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.661782 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71978ab6-d300-431a-8075-07dd3ed1a95a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"71978ab6-d300-431a-8075-07dd3ed1a95a\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.667647 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71978ab6-d300-431a-8075-07dd3ed1a95a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"71978ab6-d300-431a-8075-07dd3ed1a95a\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.667696 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71978ab6-d300-431a-8075-07dd3ed1a95a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"71978ab6-d300-431a-8075-07dd3ed1a95a\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 
11:23:08.667822 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/71978ab6-d300-431a-8075-07dd3ed1a95a-ceph\") pod \"glance-default-internal-api-0\" (UID: \"71978ab6-d300-431a-8075-07dd3ed1a95a\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.680221 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7hch\" (UniqueName: \"kubernetes.io/projected/71978ab6-d300-431a-8075-07dd3ed1a95a-kube-api-access-k7hch\") pod \"glance-default-internal-api-0\" (UID: \"71978ab6-d300-431a-8075-07dd3ed1a95a\") " pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.760802 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/012b4555-c9ab-48c0-ad99-d3a77d7d3c2b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b\") " pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.760870 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpscq\" (UniqueName: \"kubernetes.io/projected/012b4555-c9ab-48c0-ad99-d3a77d7d3c2b-kube-api-access-fpscq\") pod \"glance-default-external-api-0\" (UID: \"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b\") " pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.760975 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/012b4555-c9ab-48c0-ad99-d3a77d7d3c2b-ceph\") pod \"glance-default-external-api-0\" (UID: \"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b\") " pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.760997 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/012b4555-c9ab-48c0-ad99-d3a77d7d3c2b-scripts\") pod \"glance-default-external-api-0\" (UID: \"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b\") " pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.761022 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/012b4555-c9ab-48c0-ad99-d3a77d7d3c2b-config-data\") pod \"glance-default-external-api-0\" (UID: \"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b\") " pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.761041 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/012b4555-c9ab-48c0-ad99-d3a77d7d3c2b-logs\") pod \"glance-default-external-api-0\" (UID: \"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b\") " pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.761066 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/012b4555-c9ab-48c0-ad99-d3a77d7d3c2b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b\") " pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.761448 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/012b4555-c9ab-48c0-ad99-d3a77d7d3c2b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b\") " pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.762069 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/012b4555-c9ab-48c0-ad99-d3a77d7d3c2b-logs\") pod \"glance-default-external-api-0\" (UID: \"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b\") " pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.764380 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/012b4555-c9ab-48c0-ad99-d3a77d7d3c2b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b\") " pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.764888 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/012b4555-c9ab-48c0-ad99-d3a77d7d3c2b-scripts\") pod \"glance-default-external-api-0\" (UID: \"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b\") " pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.765556 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/012b4555-c9ab-48c0-ad99-d3a77d7d3c2b-config-data\") pod \"glance-default-external-api-0\" (UID: \"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b\") " pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.766176 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/012b4555-c9ab-48c0-ad99-d3a77d7d3c2b-ceph\") pod \"glance-default-external-api-0\" (UID: \"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b\") " pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.777389 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpscq\" (UniqueName: \"kubernetes.io/projected/012b4555-c9ab-48c0-ad99-d3a77d7d3c2b-kube-api-access-fpscq\") pod \"glance-default-external-api-0\" (UID: \"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b\") " pod="openstack/glance-default-external-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.840269 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 11:23:08 crc kubenswrapper[4972]: I1121 11:23:08.854342 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 11:23:09 crc kubenswrapper[4972]: I1121 11:23:09.543027 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-598b74d6b9-77pkc" Nov 21 11:23:09 crc kubenswrapper[4972]: I1121 11:23:09.550853 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 11:23:09 crc kubenswrapper[4972]: I1121 11:23:09.649336 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:23:09 crc kubenswrapper[4972]: I1121 11:23:09.649681 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:23:09 crc kubenswrapper[4972]: I1121 11:23:09.662621 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 11:23:09 crc kubenswrapper[4972]: W1121 11:23:09.671142 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod012b4555_c9ab_48c0_ad99_d3a77d7d3c2b.slice/crio-f870977263d9bc5fcad80247b93440323575d1be593722a032c42c75e7c8cf60 WatchSource:0}: Error finding container f870977263d9bc5fcad80247b93440323575d1be593722a032c42c75e7c8cf60: Status 404 returned error can't find the container with id f870977263d9bc5fcad80247b93440323575d1be593722a032c42c75e7c8cf60 Nov 21 11:23:09 crc kubenswrapper[4972]: I1121 11:23:09.778303 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65c73566-904d-4e66-a7c3-5ee16b691565" path="/var/lib/kubelet/pods/65c73566-904d-4e66-a7c3-5ee16b691565/volumes" Nov 21 11:23:09 crc kubenswrapper[4972]: I1121 11:23:09.779519 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d92abb74-0a12-4643-ab97-5239d575301f" path="/var/lib/kubelet/pods/d92abb74-0a12-4643-ab97-5239d575301f/volumes" Nov 21 11:23:10 crc kubenswrapper[4972]: I1121 11:23:10.199925 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:23:10 crc kubenswrapper[4972]: I1121 11:23:10.200488 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:23:10 crc kubenswrapper[4972]: I1121 11:23:10.430571 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b","Type":"ContainerStarted","Data":"f870977263d9bc5fcad80247b93440323575d1be593722a032c42c75e7c8cf60"} Nov 21 11:23:10 crc kubenswrapper[4972]: I1121 11:23:10.432499 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"71978ab6-d300-431a-8075-07dd3ed1a95a","Type":"ContainerStarted","Data":"719766cd1da3620a9807b2fb764a9301cd2304276b54abcfd41b0fc2fe463ebb"} Nov 21 11:23:13 crc kubenswrapper[4972]: I1121 11:23:13.462649 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b","Type":"ContainerStarted","Data":"4a8417e58fc892c6478f5f39d3d95aa36b23384f3833eeced69386c8e7eb9e58"} Nov 21 11:23:13 crc kubenswrapper[4972]: I1121 11:23:13.464189 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"71978ab6-d300-431a-8075-07dd3ed1a95a","Type":"ContainerStarted","Data":"f1518cfc0ed8d59ec2a996edcb4574ba98339af266aea3cb3f76d8b8aa813a52"} Nov 21 11:23:14 crc kubenswrapper[4972]: I1121 11:23:14.486233 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"012b4555-c9ab-48c0-ad99-d3a77d7d3c2b","Type":"ContainerStarted","Data":"847e8900e011ff2d66be0b05e0ed3ba04aee8172be98e47f71bbd69c613e9aba"} Nov 21 11:23:14 crc kubenswrapper[4972]: I1121 11:23:14.488767 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"71978ab6-d300-431a-8075-07dd3ed1a95a","Type":"ContainerStarted","Data":"a96a3dd075cbd411a436a7860c230ace63557a84e77106efe88e9136b1bc87d7"} Nov 21 11:23:14 crc kubenswrapper[4972]: I1121 11:23:14.559860 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.559828383 podStartE2EDuration="6.559828383s" podCreationTimestamp="2025-11-21 11:23:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:23:14.526561752 +0000 UTC m=+6139.635704270" watchObservedRunningTime="2025-11-21 11:23:14.559828383 +0000 UTC m=+6139.668970871" Nov 21 11:23:18 crc kubenswrapper[4972]: I1121 11:23:18.841704 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 21 11:23:18 crc kubenswrapper[4972]: I1121 11:23:18.843787 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 21 11:23:18 crc kubenswrapper[4972]: I1121 11:23:18.862807 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 21 11:23:18 crc kubenswrapper[4972]: I1121 11:23:18.862883 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 21 11:23:18 crc kubenswrapper[4972]: I1121 11:23:18.890495 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 21 11:23:18 crc kubenswrapper[4972]: I1121 11:23:18.892589 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 21 11:23:18 crc kubenswrapper[4972]: I1121 11:23:18.930424 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=10.930394718 podStartE2EDuration="10.930394718s" podCreationTimestamp="2025-11-21 11:23:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:23:14.549076748 +0000 UTC m=+6139.658219266" watchObservedRunningTime="2025-11-21 11:23:18.930394718 +0000 UTC m=+6144.039537226" Nov 21 11:23:18 crc kubenswrapper[4972]: I1121 11:23:18.938511 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 21 11:23:18 crc kubenswrapper[4972]: I1121 11:23:18.938660 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 21 11:23:19 crc kubenswrapper[4972]: I1121 11:23:19.550181 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-internal-api-0" Nov 21 11:23:19 crc kubenswrapper[4972]: I1121 11:23:19.550249 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 21 11:23:19 crc kubenswrapper[4972]: I1121 11:23:19.551152 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 21 11:23:19 crc kubenswrapper[4972]: I1121 11:23:19.551183 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 21 11:23:19 crc kubenswrapper[4972]: I1121 11:23:19.650257 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-794f64f8c9-96s66" podUID="2c7b1503-7053-4ebc-b7d8-e510d25ea939" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.110:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8080: connect: connection refused" Nov 21 11:23:20 crc kubenswrapper[4972]: I1121 11:23:20.202887 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5866d78465-8kt6n" podUID="482ac932-3c0a-43e7-8878-9989d75c9e29" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.111:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.111:8080: connect: connection refused" Nov 21 11:23:24 crc kubenswrapper[4972]: I1121 11:23:24.049705 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-daf3-account-create-b7tcc"] Nov 21 11:23:24 crc kubenswrapper[4972]: I1121 11:23:24.068055 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-2jcnh"] Nov 21 11:23:24 crc kubenswrapper[4972]: I1121 11:23:24.080846 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-2jcnh"] Nov 21 11:23:24 crc kubenswrapper[4972]: I1121 11:23:24.091166 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-daf3-account-create-b7tcc"] Nov 21 11:23:24 crc kubenswrapper[4972]: I1121 11:23:24.424646 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 21 11:23:24 crc kubenswrapper[4972]: I1121 11:23:24.424767 4972 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 11:23:24 crc kubenswrapper[4972]: I1121 11:23:24.431866 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 21 11:23:24 crc kubenswrapper[4972]: I1121 11:23:24.434043 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 21 11:23:24 crc kubenswrapper[4972]: I1121 11:23:24.434250 4972 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 11:23:24 crc kubenswrapper[4972]: I1121 11:23:24.437579 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 21 11:23:25 crc kubenswrapper[4972]: I1121 11:23:25.771921 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7007310e-51eb-46e9-9344-2458f4d82516" path="/var/lib/kubelet/pods/7007310e-51eb-46e9-9344-2458f4d82516/volumes" Nov 21 11:23:25 crc kubenswrapper[4972]: I1121 11:23:25.773783 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e54069fb-21b9-4f93-92d7-677cd9490299" path="/var/lib/kubelet/pods/e54069fb-21b9-4f93-92d7-677cd9490299/volumes" Nov 21 11:23:26 crc 
kubenswrapper[4972]: I1121 11:23:26.610625 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:23:26 crc kubenswrapper[4972]: I1121 11:23:26.610865 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:23:31 crc kubenswrapper[4972]: I1121 11:23:31.627114 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:23:31 crc kubenswrapper[4972]: I1121 11:23:31.835219 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:23:33 crc kubenswrapper[4972]: I1121 11:23:33.042175 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-v9pb4"] Nov 21 11:23:33 crc kubenswrapper[4972]: I1121 11:23:33.058923 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-v9pb4"] Nov 21 11:23:33 crc kubenswrapper[4972]: I1121 11:23:33.188547 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:23:33 crc kubenswrapper[4972]: I1121 11:23:33.229543 4972 scope.go:117] "RemoveContainer" containerID="74e8a80c154ef35f68cd02969f75785f0d672ab4c284095630df145274551056" Nov 21 11:23:33 crc kubenswrapper[4972]: I1121 11:23:33.308258 4972 scope.go:117] "RemoveContainer" containerID="7eb7a907d95f6882c5d96c53f136c64646057fbe0046409ee6a4cc5a5cc24240" Nov 21 11:23:33 crc kubenswrapper[4972]: I1121 11:23:33.353892 4972 scope.go:117] "RemoveContainer" containerID="cab3fbe8a5160f5f0bacdb36d7310bcc9760138a6f7d6a7aae41a188e9811029" Nov 21 11:23:33 crc kubenswrapper[4972]: I1121 11:23:33.389738 4972 scope.go:117] "RemoveContainer" containerID="70ed00bc71feab9b52b539958d8cdbcdef290a3352ef5be6bfdd9be17df9eeff" Nov 21 11:23:33 crc kubenswrapper[4972]: I1121 11:23:33.420630 4972 scope.go:117] "RemoveContainer" containerID="c8dbbb9edde75dc71e5e6670f5226e67a2462689d68cade583e47f04b3b0f908" Nov 21 11:23:33 crc kubenswrapper[4972]: I1121 11:23:33.474372 4972 scope.go:117] "RemoveContainer" containerID="57c9ba4c55760949588a274b63a1546a259f82708556d8ea3a27fb7e03cc5373" Nov 21 11:23:33 crc kubenswrapper[4972]: I1121 11:23:33.511064 4972 scope.go:117] "RemoveContainer" containerID="3279be070e284fc884487fa9e9cf9a85c43e044010789766fc7062c7677771d1" Nov 21 11:23:33 crc kubenswrapper[4972]: I1121 11:23:33.525164 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:23:33 crc kubenswrapper[4972]: I1121 11:23:33.605325 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-794f64f8c9-96s66"] Nov 21 11:23:33 crc kubenswrapper[4972]: I1121 11:23:33.740771 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-794f64f8c9-96s66" podUID="2c7b1503-7053-4ebc-b7d8-e510d25ea939" containerName="horizon-log" containerID="cri-o://ae66ae6140638644985ae0138691d99f39bec7c44166a4c9af1291c65941bd1b" gracePeriod=30 Nov 21 11:23:33 crc 
kubenswrapper[4972]: I1121 11:23:33.741129 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-794f64f8c9-96s66" podUID="2c7b1503-7053-4ebc-b7d8-e510d25ea939" containerName="horizon" containerID="cri-o://d2ecf321f4b4dd764ec2eb616ae9d8ca1ead486f754d549530ebe394521eff76" gracePeriod=30 Nov 21 11:23:33 crc kubenswrapper[4972]: I1121 11:23:33.771203 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9020c23e-fa46-4c43-8052-26bd2ce4e4ea" path="/var/lib/kubelet/pods/9020c23e-fa46-4c43-8052-26bd2ce4e4ea/volumes" Nov 21 11:23:37 crc kubenswrapper[4972]: I1121 11:23:37.784450 4972 generic.go:334] "Generic (PLEG): container finished" podID="2c7b1503-7053-4ebc-b7d8-e510d25ea939" containerID="d2ecf321f4b4dd764ec2eb616ae9d8ca1ead486f754d549530ebe394521eff76" exitCode=0 Nov 21 11:23:37 crc kubenswrapper[4972]: I1121 11:23:37.784516 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-794f64f8c9-96s66" event={"ID":"2c7b1503-7053-4ebc-b7d8-e510d25ea939","Type":"ContainerDied","Data":"d2ecf321f4b4dd764ec2eb616ae9d8ca1ead486f754d549530ebe394521eff76"} Nov 21 11:23:38 crc kubenswrapper[4972]: I1121 11:23:38.794639 4972 generic.go:334] "Generic (PLEG): container finished" podID="8f0ea51d-29ae-4532-b64b-67c7d26f2cd0" containerID="6a4eb0d1ce4fb0b2650a784bec023cca5715c57332d22c638d01de231f4b5aea" exitCode=137 Nov 21 11:23:38 crc kubenswrapper[4972]: I1121 11:23:38.795138 4972 generic.go:334] "Generic (PLEG): container finished" podID="8f0ea51d-29ae-4532-b64b-67c7d26f2cd0" containerID="7feca0e0abe6aa39112e535ff525bb6ddcc2a6dd363d5402ce1c9a95b986194e" exitCode=137 Nov 21 11:23:38 crc kubenswrapper[4972]: I1121 11:23:38.794719 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-598b74d6b9-77pkc" event={"ID":"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0","Type":"ContainerDied","Data":"6a4eb0d1ce4fb0b2650a784bec023cca5715c57332d22c638d01de231f4b5aea"} Nov 21 11:23:38 crc kubenswrapper[4972]: I1121 11:23:38.795169 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-598b74d6b9-77pkc" event={"ID":"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0","Type":"ContainerDied","Data":"7feca0e0abe6aa39112e535ff525bb6ddcc2a6dd363d5402ce1c9a95b986194e"} Nov 21 11:23:38 crc kubenswrapper[4972]: I1121 11:23:38.795180 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-598b74d6b9-77pkc" event={"ID":"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0","Type":"ContainerDied","Data":"724a3a7d97202e66a368cf86f8254025d8877b696cc57389269aa429f703d261"} Nov 21 11:23:38 crc kubenswrapper[4972]: I1121 11:23:38.795192 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="724a3a7d97202e66a368cf86f8254025d8877b696cc57389269aa429f703d261" Nov 21 11:23:38 crc kubenswrapper[4972]: I1121 11:23:38.903665 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-598b74d6b9-77pkc" Nov 21 11:23:39 crc kubenswrapper[4972]: I1121 11:23:39.009948 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-scripts\") pod \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\" (UID: \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\") " Nov 21 11:23:39 crc kubenswrapper[4972]: I1121 11:23:39.010013 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-horizon-secret-key\") pod \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\" (UID: \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\") " Nov 21 11:23:39 crc kubenswrapper[4972]: I1121 11:23:39.010198 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpmdg\" (UniqueName: \"kubernetes.io/projected/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-kube-api-access-jpmdg\") pod \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\" (UID: \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\") " Nov 21 11:23:39 crc kubenswrapper[4972]: I1121 11:23:39.010229 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-config-data\") pod \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\" (UID: \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\") " Nov 21 11:23:39 crc kubenswrapper[4972]: I1121 11:23:39.010317 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-logs\") pod \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\" (UID: \"8f0ea51d-29ae-4532-b64b-67c7d26f2cd0\") " Nov 21 11:23:39 crc kubenswrapper[4972]: I1121 11:23:39.011216 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-logs" (OuterVolumeSpecName: "logs") pod "8f0ea51d-29ae-4532-b64b-67c7d26f2cd0" (UID: "8f0ea51d-29ae-4532-b64b-67c7d26f2cd0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:23:39 crc kubenswrapper[4972]: I1121 11:23:39.018274 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8f0ea51d-29ae-4532-b64b-67c7d26f2cd0" (UID: "8f0ea51d-29ae-4532-b64b-67c7d26f2cd0"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:23:39 crc kubenswrapper[4972]: I1121 11:23:39.026812 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-kube-api-access-jpmdg" (OuterVolumeSpecName: "kube-api-access-jpmdg") pod "8f0ea51d-29ae-4532-b64b-67c7d26f2cd0" (UID: "8f0ea51d-29ae-4532-b64b-67c7d26f2cd0"). InnerVolumeSpecName "kube-api-access-jpmdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:23:39 crc kubenswrapper[4972]: I1121 11:23:39.050075 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-scripts" (OuterVolumeSpecName: "scripts") pod "8f0ea51d-29ae-4532-b64b-67c7d26f2cd0" (UID: "8f0ea51d-29ae-4532-b64b-67c7d26f2cd0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:23:39 crc kubenswrapper[4972]: I1121 11:23:39.074392 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-config-data" (OuterVolumeSpecName: "config-data") pod "8f0ea51d-29ae-4532-b64b-67c7d26f2cd0" (UID: "8f0ea51d-29ae-4532-b64b-67c7d26f2cd0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:23:39 crc kubenswrapper[4972]: I1121 11:23:39.113281 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-logs\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:39 crc kubenswrapper[4972]: I1121 11:23:39.113306 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:39 crc kubenswrapper[4972]: I1121 11:23:39.113316 4972 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:39 crc kubenswrapper[4972]: I1121 11:23:39.113327 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpmdg\" (UniqueName: \"kubernetes.io/projected/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-kube-api-access-jpmdg\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:39 crc kubenswrapper[4972]: I1121 11:23:39.113335 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:39 crc kubenswrapper[4972]: I1121 11:23:39.649875 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-794f64f8c9-96s66" podUID="2c7b1503-7053-4ebc-b7d8-e510d25ea939" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.110:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8080: connect: connection refused" Nov 21 11:23:39 crc kubenswrapper[4972]: I1121 11:23:39.803550 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-598b74d6b9-77pkc" Nov 21 11:23:39 crc kubenswrapper[4972]: I1121 11:23:39.827975 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-598b74d6b9-77pkc"] Nov 21 11:23:39 crc kubenswrapper[4972]: I1121 11:23:39.858865 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-598b74d6b9-77pkc"] Nov 21 11:23:41 crc kubenswrapper[4972]: I1121 11:23:41.780759 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f0ea51d-29ae-4532-b64b-67c7d26f2cd0" path="/var/lib/kubelet/pods/8f0ea51d-29ae-4532-b64b-67c7d26f2cd0/volumes" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.113924 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-58456df47f-c7fp7"] Nov 21 11:23:47 crc kubenswrapper[4972]: E1121 11:23:47.114974 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f0ea51d-29ae-4532-b64b-67c7d26f2cd0" containerName="horizon-log" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.114991 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f0ea51d-29ae-4532-b64b-67c7d26f2cd0" containerName="horizon-log" Nov 21 11:23:47 crc kubenswrapper[4972]: E1121 11:23:47.115009 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f0ea51d-29ae-4532-b64b-67c7d26f2cd0" containerName="horizon" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.115016 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f0ea51d-29ae-4532-b64b-67c7d26f2cd0" containerName="horizon" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.115240 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f0ea51d-29ae-4532-b64b-67c7d26f2cd0" containerName="horizon" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.115262 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f0ea51d-29ae-4532-b64b-67c7d26f2cd0" containerName="horizon-log" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.116450 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-58456df47f-c7fp7" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.135573 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58456df47f-c7fp7"] Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.206027 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ef37c3f-7753-44c3-88d7-b8ebf40a687e-logs\") pod \"horizon-58456df47f-c7fp7\" (UID: \"6ef37c3f-7753-44c3-88d7-b8ebf40a687e\") " pod="openstack/horizon-58456df47f-c7fp7" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.206108 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ef37c3f-7753-44c3-88d7-b8ebf40a687e-scripts\") pod \"horizon-58456df47f-c7fp7\" (UID: \"6ef37c3f-7753-44c3-88d7-b8ebf40a687e\") " pod="openstack/horizon-58456df47f-c7fp7" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.206161 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ef37c3f-7753-44c3-88d7-b8ebf40a687e-config-data\") pod \"horizon-58456df47f-c7fp7\" (UID: \"6ef37c3f-7753-44c3-88d7-b8ebf40a687e\") " pod="openstack/horizon-58456df47f-c7fp7" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.206213 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx84c\" (UniqueName: \"kubernetes.io/projected/6ef37c3f-7753-44c3-88d7-b8ebf40a687e-kube-api-access-bx84c\") pod \"horizon-58456df47f-c7fp7\" (UID: \"6ef37c3f-7753-44c3-88d7-b8ebf40a687e\") " pod="openstack/horizon-58456df47f-c7fp7" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.206475 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ef37c3f-7753-44c3-88d7-b8ebf40a687e-horizon-secret-key\") pod \"horizon-58456df47f-c7fp7\" (UID: \"6ef37c3f-7753-44c3-88d7-b8ebf40a687e\") " pod="openstack/horizon-58456df47f-c7fp7" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.311947 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ef37c3f-7753-44c3-88d7-b8ebf40a687e-horizon-secret-key\") pod \"horizon-58456df47f-c7fp7\" (UID: \"6ef37c3f-7753-44c3-88d7-b8ebf40a687e\") " pod="openstack/horizon-58456df47f-c7fp7" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.312069 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ef37c3f-7753-44c3-88d7-b8ebf40a687e-logs\") pod \"horizon-58456df47f-c7fp7\" (UID: \"6ef37c3f-7753-44c3-88d7-b8ebf40a687e\") " pod="openstack/horizon-58456df47f-c7fp7" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.312154 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ef37c3f-7753-44c3-88d7-b8ebf40a687e-scripts\") pod \"horizon-58456df47f-c7fp7\" (UID: \"6ef37c3f-7753-44c3-88d7-b8ebf40a687e\") " pod="openstack/horizon-58456df47f-c7fp7" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.312180 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/6ef37c3f-7753-44c3-88d7-b8ebf40a687e-config-data\") pod \"horizon-58456df47f-c7fp7\" (UID: \"6ef37c3f-7753-44c3-88d7-b8ebf40a687e\") " pod="openstack/horizon-58456df47f-c7fp7" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.312281 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx84c\" (UniqueName: \"kubernetes.io/projected/6ef37c3f-7753-44c3-88d7-b8ebf40a687e-kube-api-access-bx84c\") pod \"horizon-58456df47f-c7fp7\" (UID: \"6ef37c3f-7753-44c3-88d7-b8ebf40a687e\") " pod="openstack/horizon-58456df47f-c7fp7" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.313877 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ef37c3f-7753-44c3-88d7-b8ebf40a687e-scripts\") pod \"horizon-58456df47f-c7fp7\" (UID: \"6ef37c3f-7753-44c3-88d7-b8ebf40a687e\") " pod="openstack/horizon-58456df47f-c7fp7" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.314116 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ef37c3f-7753-44c3-88d7-b8ebf40a687e-logs\") pod \"horizon-58456df47f-c7fp7\" (UID: \"6ef37c3f-7753-44c3-88d7-b8ebf40a687e\") " pod="openstack/horizon-58456df47f-c7fp7" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.314948 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ef37c3f-7753-44c3-88d7-b8ebf40a687e-config-data\") pod \"horizon-58456df47f-c7fp7\" (UID: \"6ef37c3f-7753-44c3-88d7-b8ebf40a687e\") " pod="openstack/horizon-58456df47f-c7fp7" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.321971 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ef37c3f-7753-44c3-88d7-b8ebf40a687e-horizon-secret-key\") pod \"horizon-58456df47f-c7fp7\" (UID: \"6ef37c3f-7753-44c3-88d7-b8ebf40a687e\") " pod="openstack/horizon-58456df47f-c7fp7" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.330032 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx84c\" (UniqueName: \"kubernetes.io/projected/6ef37c3f-7753-44c3-88d7-b8ebf40a687e-kube-api-access-bx84c\") pod \"horizon-58456df47f-c7fp7\" (UID: \"6ef37c3f-7753-44c3-88d7-b8ebf40a687e\") " pod="openstack/horizon-58456df47f-c7fp7" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.436381 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58456df47f-c7fp7" Nov 21 11:23:47 crc kubenswrapper[4972]: I1121 11:23:47.945840 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58456df47f-c7fp7"] Nov 21 11:23:48 crc kubenswrapper[4972]: I1121 11:23:48.267755 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-rzn97"] Nov 21 11:23:48 crc kubenswrapper[4972]: I1121 11:23:48.269190 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-rzn97" Nov 21 11:23:48 crc kubenswrapper[4972]: I1121 11:23:48.277912 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-rzn97"] Nov 21 11:23:48 crc kubenswrapper[4972]: I1121 11:23:48.408377 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-4f8e-account-create-89w2g"] Nov 21 11:23:48 crc kubenswrapper[4972]: I1121 11:23:48.410280 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-4f8e-account-create-89w2g" Nov 21 11:23:48 crc kubenswrapper[4972]: I1121 11:23:48.412313 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 21 11:23:48 crc kubenswrapper[4972]: I1121 11:23:48.418675 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-4f8e-account-create-89w2g"] Nov 21 11:23:48 crc kubenswrapper[4972]: I1121 11:23:48.431563 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p2qz\" (UniqueName: \"kubernetes.io/projected/8d2d2162-c2ba-4192-a494-b4d8825272af-kube-api-access-7p2qz\") pod \"heat-db-create-rzn97\" (UID: \"8d2d2162-c2ba-4192-a494-b4d8825272af\") " pod="openstack/heat-db-create-rzn97" Nov 21 11:23:48 crc kubenswrapper[4972]: I1121 11:23:48.431644 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d2d2162-c2ba-4192-a494-b4d8825272af-operator-scripts\") pod \"heat-db-create-rzn97\" (UID: \"8d2d2162-c2ba-4192-a494-b4d8825272af\") " pod="openstack/heat-db-create-rzn97" Nov 21 11:23:48 crc kubenswrapper[4972]: I1121 11:23:48.533677 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/054461e0-29d2-4eb4-be72-a6e6e0c6fe5f-operator-scripts\") pod \"heat-4f8e-account-create-89w2g\" (UID: \"054461e0-29d2-4eb4-be72-a6e6e0c6fe5f\") " pod="openstack/heat-4f8e-account-create-89w2g" Nov 21 11:23:48 crc kubenswrapper[4972]: I1121 11:23:48.533736 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkwl6\" (UniqueName: \"kubernetes.io/projected/054461e0-29d2-4eb4-be72-a6e6e0c6fe5f-kube-api-access-nkwl6\") pod \"heat-4f8e-account-create-89w2g\" (UID: \"054461e0-29d2-4eb4-be72-a6e6e0c6fe5f\") " pod="openstack/heat-4f8e-account-create-89w2g" Nov 21 11:23:48 crc kubenswrapper[4972]: I1121 11:23:48.534285 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p2qz\" (UniqueName: \"kubernetes.io/projected/8d2d2162-c2ba-4192-a494-b4d8825272af-kube-api-access-7p2qz\") pod \"heat-db-create-rzn97\" (UID: \"8d2d2162-c2ba-4192-a494-b4d8825272af\") " pod="openstack/heat-db-create-rzn97" Nov 21 11:23:48 crc kubenswrapper[4972]: I1121 11:23:48.534346 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d2d2162-c2ba-4192-a494-b4d8825272af-operator-scripts\") pod \"heat-db-create-rzn97\" (UID: \"8d2d2162-c2ba-4192-a494-b4d8825272af\") " pod="openstack/heat-db-create-rzn97" Nov 21 11:23:48 crc kubenswrapper[4972]: I1121 11:23:48.535130 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d2d2162-c2ba-4192-a494-b4d8825272af-operator-scripts\") pod \"heat-db-create-rzn97\" (UID: \"8d2d2162-c2ba-4192-a494-b4d8825272af\") " pod="openstack/heat-db-create-rzn97" Nov 21 11:23:48 crc kubenswrapper[4972]: I1121 11:23:48.553008 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p2qz\" (UniqueName: \"kubernetes.io/projected/8d2d2162-c2ba-4192-a494-b4d8825272af-kube-api-access-7p2qz\") pod \"heat-db-create-rzn97\" (UID: \"8d2d2162-c2ba-4192-a494-b4d8825272af\") " pod="openstack/heat-db-create-rzn97" 
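The VerifyControllerAttachedVolume and MountVolume.SetUp entries above are the kubelet's volume reconciler bringing up the volumes declared in the heat-db-create-rzn97 pod spec: a ConfigMap-backed "operator-scripts" volume plus the automatically injected kube-api-access-* projected service-account token. A minimal Go sketch of a spec with that shape, using the Kubernetes core/v1 types, is below; the volume and container names are taken from the log, while the image, ConfigMap name, and mount path are assumptions for illustration only.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Sketch of a database-create job pod like heat-db-create-rzn97.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "heat-db-create-rzn97",
			Namespace: "openstack",
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				// Container name as it later appears in the RemoveStaleState entries.
				Name:  "mariadb-database-create",
				Image: "registry.example/mariadb-client:latest", // assumed image
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "operator-scripts",
					MountPath: "/var/lib/operator-scripts", // assumed mount path
				}},
			}},
			Volumes: []corev1.Volume{{
				// Matches the "operator-scripts" ConfigMap volume in the mount log lines.
				Name: "operator-scripts",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "heat-db-create-scripts", // assumed ConfigMap name
						},
					},
				},
			}},
			// The kube-api-access-7p2qz projected volume in the log is injected by the
			// API server for the service-account token and is not declared explicitly.
		},
	}
	fmt.Println(pod.Name, len(pod.Spec.Volumes))
}

Once such a pod is scheduled, the sequence recorded here is the expected one: VerifyControllerAttachedVolume for each volume, then MountVolume.SetUp, then sandbox creation and container start.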
Nov 21 11:23:48 crc kubenswrapper[4972]: I1121 11:23:48.591747 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-rzn97" Nov 21 11:23:48 crc kubenswrapper[4972]: I1121 11:23:48.636453 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/054461e0-29d2-4eb4-be72-a6e6e0c6fe5f-operator-scripts\") pod \"heat-4f8e-account-create-89w2g\" (UID: \"054461e0-29d2-4eb4-be72-a6e6e0c6fe5f\") " pod="openstack/heat-4f8e-account-create-89w2g" Nov 21 11:23:48 crc kubenswrapper[4972]: I1121 11:23:48.636910 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkwl6\" (UniqueName: \"kubernetes.io/projected/054461e0-29d2-4eb4-be72-a6e6e0c6fe5f-kube-api-access-nkwl6\") pod \"heat-4f8e-account-create-89w2g\" (UID: \"054461e0-29d2-4eb4-be72-a6e6e0c6fe5f\") " pod="openstack/heat-4f8e-account-create-89w2g" Nov 21 11:23:48 crc kubenswrapper[4972]: I1121 11:23:48.637872 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/054461e0-29d2-4eb4-be72-a6e6e0c6fe5f-operator-scripts\") pod \"heat-4f8e-account-create-89w2g\" (UID: \"054461e0-29d2-4eb4-be72-a6e6e0c6fe5f\") " pod="openstack/heat-4f8e-account-create-89w2g" Nov 21 11:23:48 crc kubenswrapper[4972]: I1121 11:23:48.658253 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkwl6\" (UniqueName: \"kubernetes.io/projected/054461e0-29d2-4eb4-be72-a6e6e0c6fe5f-kube-api-access-nkwl6\") pod \"heat-4f8e-account-create-89w2g\" (UID: \"054461e0-29d2-4eb4-be72-a6e6e0c6fe5f\") " pod="openstack/heat-4f8e-account-create-89w2g" Nov 21 11:23:48 crc kubenswrapper[4972]: I1121 11:23:48.731478 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-4f8e-account-create-89w2g" Nov 21 11:23:49 crc kubenswrapper[4972]: I1121 11:23:48.896708 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58456df47f-c7fp7" event={"ID":"6ef37c3f-7753-44c3-88d7-b8ebf40a687e","Type":"ContainerStarted","Data":"b821c1b2f0dec91b84fa8793b373fe70a9ad1e6d5d8ce08207431a92f762ef08"} Nov 21 11:23:49 crc kubenswrapper[4972]: I1121 11:23:48.896745 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58456df47f-c7fp7" event={"ID":"6ef37c3f-7753-44c3-88d7-b8ebf40a687e","Type":"ContainerStarted","Data":"7c311a9fe09e698ab33d49389cd336b98201bfe602604415dc22dc5447a88100"} Nov 21 11:23:49 crc kubenswrapper[4972]: I1121 11:23:48.896755 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58456df47f-c7fp7" event={"ID":"6ef37c3f-7753-44c3-88d7-b8ebf40a687e","Type":"ContainerStarted","Data":"c371688a895f70cea26b5f7d4f0472294e5d4f9468a446a217b39f838014ea02"} Nov 21 11:23:49 crc kubenswrapper[4972]: I1121 11:23:48.927856 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-58456df47f-c7fp7" podStartSLOduration=1.927821959 podStartE2EDuration="1.927821959s" podCreationTimestamp="2025-11-21 11:23:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:23:48.91691537 +0000 UTC m=+6174.026057878" watchObservedRunningTime="2025-11-21 11:23:48.927821959 +0000 UTC m=+6174.036964457" Nov 21 11:23:49 crc kubenswrapper[4972]: I1121 11:23:49.649805 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-794f64f8c9-96s66" podUID="2c7b1503-7053-4ebc-b7d8-e510d25ea939" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.110:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8080: connect: connection refused" Nov 21 11:23:50 crc kubenswrapper[4972]: W1121 11:23:50.079585 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d2d2162_c2ba_4192_a494_b4d8825272af.slice/crio-4cfd9eb248c47e39e4d362f2518b5b5e16bd74c66fb73819788c0c5f8c8fe3ee WatchSource:0}: Error finding container 4cfd9eb248c47e39e4d362f2518b5b5e16bd74c66fb73819788c0c5f8c8fe3ee: Status 404 returned error can't find the container with id 4cfd9eb248c47e39e4d362f2518b5b5e16bd74c66fb73819788c0c5f8c8fe3ee Nov 21 11:23:50 crc kubenswrapper[4972]: I1121 11:23:50.079676 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-rzn97"] Nov 21 11:23:50 crc kubenswrapper[4972]: W1121 11:23:50.235020 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod054461e0_29d2_4eb4_be72_a6e6e0c6fe5f.slice/crio-d9b2732a252fd1b4c60d62290141524cd68c1f412b58f9584e712b26b7541baf WatchSource:0}: Error finding container d9b2732a252fd1b4c60d62290141524cd68c1f412b58f9584e712b26b7541baf: Status 404 returned error can't find the container with id d9b2732a252fd1b4c60d62290141524cd68c1f412b58f9584e712b26b7541baf Nov 21 11:23:50 crc kubenswrapper[4972]: I1121 11:23:50.241228 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-4f8e-account-create-89w2g"] Nov 21 11:23:50 crc kubenswrapper[4972]: I1121 11:23:50.920230 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-rzn97" 
event={"ID":"8d2d2162-c2ba-4192-a494-b4d8825272af","Type":"ContainerDied","Data":"3e2c3c48e02b6939c41ddb61b6e5ba4d82bb4b3177fc97ea38d25ac273c05cd6"} Nov 21 11:23:50 crc kubenswrapper[4972]: I1121 11:23:50.920236 4972 generic.go:334] "Generic (PLEG): container finished" podID="8d2d2162-c2ba-4192-a494-b4d8825272af" containerID="3e2c3c48e02b6939c41ddb61b6e5ba4d82bb4b3177fc97ea38d25ac273c05cd6" exitCode=0 Nov 21 11:23:50 crc kubenswrapper[4972]: I1121 11:23:50.921005 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-rzn97" event={"ID":"8d2d2162-c2ba-4192-a494-b4d8825272af","Type":"ContainerStarted","Data":"4cfd9eb248c47e39e4d362f2518b5b5e16bd74c66fb73819788c0c5f8c8fe3ee"} Nov 21 11:23:50 crc kubenswrapper[4972]: I1121 11:23:50.924753 4972 generic.go:334] "Generic (PLEG): container finished" podID="054461e0-29d2-4eb4-be72-a6e6e0c6fe5f" containerID="abf8b55c8c5b2bb6f348026097b4eba3d45252b7417e97ba8b586d2a9b095172" exitCode=0 Nov 21 11:23:50 crc kubenswrapper[4972]: I1121 11:23:50.924804 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4f8e-account-create-89w2g" event={"ID":"054461e0-29d2-4eb4-be72-a6e6e0c6fe5f","Type":"ContainerDied","Data":"abf8b55c8c5b2bb6f348026097b4eba3d45252b7417e97ba8b586d2a9b095172"} Nov 21 11:23:50 crc kubenswrapper[4972]: I1121 11:23:50.924863 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4f8e-account-create-89w2g" event={"ID":"054461e0-29d2-4eb4-be72-a6e6e0c6fe5f","Type":"ContainerStarted","Data":"d9b2732a252fd1b4c60d62290141524cd68c1f412b58f9584e712b26b7541baf"} Nov 21 11:23:52 crc kubenswrapper[4972]: I1121 11:23:52.560331 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-rzn97" Nov 21 11:23:52 crc kubenswrapper[4972]: I1121 11:23:52.569326 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-4f8e-account-create-89w2g" Nov 21 11:23:52 crc kubenswrapper[4972]: I1121 11:23:52.746287 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d2d2162-c2ba-4192-a494-b4d8825272af-operator-scripts\") pod \"8d2d2162-c2ba-4192-a494-b4d8825272af\" (UID: \"8d2d2162-c2ba-4192-a494-b4d8825272af\") " Nov 21 11:23:52 crc kubenswrapper[4972]: I1121 11:23:52.746371 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p2qz\" (UniqueName: \"kubernetes.io/projected/8d2d2162-c2ba-4192-a494-b4d8825272af-kube-api-access-7p2qz\") pod \"8d2d2162-c2ba-4192-a494-b4d8825272af\" (UID: \"8d2d2162-c2ba-4192-a494-b4d8825272af\") " Nov 21 11:23:52 crc kubenswrapper[4972]: I1121 11:23:52.746486 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkwl6\" (UniqueName: \"kubernetes.io/projected/054461e0-29d2-4eb4-be72-a6e6e0c6fe5f-kube-api-access-nkwl6\") pod \"054461e0-29d2-4eb4-be72-a6e6e0c6fe5f\" (UID: \"054461e0-29d2-4eb4-be72-a6e6e0c6fe5f\") " Nov 21 11:23:52 crc kubenswrapper[4972]: I1121 11:23:52.746609 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/054461e0-29d2-4eb4-be72-a6e6e0c6fe5f-operator-scripts\") pod \"054461e0-29d2-4eb4-be72-a6e6e0c6fe5f\" (UID: \"054461e0-29d2-4eb4-be72-a6e6e0c6fe5f\") " Nov 21 11:23:52 crc kubenswrapper[4972]: I1121 11:23:52.747631 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/054461e0-29d2-4eb4-be72-a6e6e0c6fe5f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "054461e0-29d2-4eb4-be72-a6e6e0c6fe5f" (UID: "054461e0-29d2-4eb4-be72-a6e6e0c6fe5f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:23:52 crc kubenswrapper[4972]: I1121 11:23:52.747786 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d2d2162-c2ba-4192-a494-b4d8825272af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8d2d2162-c2ba-4192-a494-b4d8825272af" (UID: "8d2d2162-c2ba-4192-a494-b4d8825272af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:23:52 crc kubenswrapper[4972]: I1121 11:23:52.753266 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/054461e0-29d2-4eb4-be72-a6e6e0c6fe5f-kube-api-access-nkwl6" (OuterVolumeSpecName: "kube-api-access-nkwl6") pod "054461e0-29d2-4eb4-be72-a6e6e0c6fe5f" (UID: "054461e0-29d2-4eb4-be72-a6e6e0c6fe5f"). InnerVolumeSpecName "kube-api-access-nkwl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:23:52 crc kubenswrapper[4972]: I1121 11:23:52.754194 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d2d2162-c2ba-4192-a494-b4d8825272af-kube-api-access-7p2qz" (OuterVolumeSpecName: "kube-api-access-7p2qz") pod "8d2d2162-c2ba-4192-a494-b4d8825272af" (UID: "8d2d2162-c2ba-4192-a494-b4d8825272af"). InnerVolumeSpecName "kube-api-access-7p2qz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:23:52 crc kubenswrapper[4972]: I1121 11:23:52.848797 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/054461e0-29d2-4eb4-be72-a6e6e0c6fe5f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:52 crc kubenswrapper[4972]: I1121 11:23:52.848850 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d2d2162-c2ba-4192-a494-b4d8825272af-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:52 crc kubenswrapper[4972]: I1121 11:23:52.848863 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p2qz\" (UniqueName: \"kubernetes.io/projected/8d2d2162-c2ba-4192-a494-b4d8825272af-kube-api-access-7p2qz\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:52 crc kubenswrapper[4972]: I1121 11:23:52.848879 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkwl6\" (UniqueName: \"kubernetes.io/projected/054461e0-29d2-4eb4-be72-a6e6e0c6fe5f-kube-api-access-nkwl6\") on node \"crc\" DevicePath \"\"" Nov 21 11:23:52 crc kubenswrapper[4972]: I1121 11:23:52.957426 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-rzn97" event={"ID":"8d2d2162-c2ba-4192-a494-b4d8825272af","Type":"ContainerDied","Data":"4cfd9eb248c47e39e4d362f2518b5b5e16bd74c66fb73819788c0c5f8c8fe3ee"} Nov 21 11:23:52 crc kubenswrapper[4972]: I1121 11:23:52.957501 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cfd9eb248c47e39e4d362f2518b5b5e16bd74c66fb73819788c0c5f8c8fe3ee" Nov 21 11:23:52 crc kubenswrapper[4972]: I1121 11:23:52.957441 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-rzn97" Nov 21 11:23:52 crc kubenswrapper[4972]: I1121 11:23:52.960049 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4f8e-account-create-89w2g" event={"ID":"054461e0-29d2-4eb4-be72-a6e6e0c6fe5f","Type":"ContainerDied","Data":"d9b2732a252fd1b4c60d62290141524cd68c1f412b58f9584e712b26b7541baf"} Nov 21 11:23:52 crc kubenswrapper[4972]: I1121 11:23:52.960112 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9b2732a252fd1b4c60d62290141524cd68c1f412b58f9584e712b26b7541baf" Nov 21 11:23:52 crc kubenswrapper[4972]: I1121 11:23:52.960191 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-4f8e-account-create-89w2g" Nov 21 11:23:56 crc kubenswrapper[4972]: I1121 11:23:56.179180 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:23:56 crc kubenswrapper[4972]: I1121 11:23:56.180259 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:23:57 crc kubenswrapper[4972]: I1121 11:23:57.437409 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-58456df47f-c7fp7" Nov 21 11:23:57 crc kubenswrapper[4972]: I1121 11:23:57.437711 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-58456df47f-c7fp7" Nov 21 11:23:58 crc kubenswrapper[4972]: I1121 11:23:58.886803 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-z42h6"] Nov 21 11:23:58 crc kubenswrapper[4972]: E1121 11:23:58.887773 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="054461e0-29d2-4eb4-be72-a6e6e0c6fe5f" containerName="mariadb-account-create" Nov 21 11:23:58 crc kubenswrapper[4972]: I1121 11:23:58.887789 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="054461e0-29d2-4eb4-be72-a6e6e0c6fe5f" containerName="mariadb-account-create" Nov 21 11:23:58 crc kubenswrapper[4972]: E1121 11:23:58.887862 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d2d2162-c2ba-4192-a494-b4d8825272af" containerName="mariadb-database-create" Nov 21 11:23:58 crc kubenswrapper[4972]: I1121 11:23:58.887871 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d2d2162-c2ba-4192-a494-b4d8825272af" containerName="mariadb-database-create" Nov 21 11:23:58 crc kubenswrapper[4972]: I1121 11:23:58.888101 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d2d2162-c2ba-4192-a494-b4d8825272af" containerName="mariadb-database-create" Nov 21 11:23:58 crc kubenswrapper[4972]: I1121 11:23:58.888124 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="054461e0-29d2-4eb4-be72-a6e6e0c6fe5f" containerName="mariadb-account-create" Nov 21 11:23:58 crc kubenswrapper[4972]: I1121 11:23:58.888973 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-z42h6" Nov 21 11:23:58 crc kubenswrapper[4972]: I1121 11:23:58.891449 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-xdc44" Nov 21 11:23:58 crc kubenswrapper[4972]: I1121 11:23:58.891909 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 21 11:23:58 crc kubenswrapper[4972]: I1121 11:23:58.907158 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-z42h6"] Nov 21 11:23:58 crc kubenswrapper[4972]: I1121 11:23:58.988045 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b0d062-1781-4712-aa0d-fe59728e52e1-combined-ca-bundle\") pod \"heat-db-sync-z42h6\" (UID: \"b1b0d062-1781-4712-aa0d-fe59728e52e1\") " pod="openstack/heat-db-sync-z42h6" Nov 21 11:23:58 crc kubenswrapper[4972]: I1121 11:23:58.988438 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th5zx\" (UniqueName: \"kubernetes.io/projected/b1b0d062-1781-4712-aa0d-fe59728e52e1-kube-api-access-th5zx\") pod \"heat-db-sync-z42h6\" (UID: \"b1b0d062-1781-4712-aa0d-fe59728e52e1\") " pod="openstack/heat-db-sync-z42h6" Nov 21 11:23:58 crc kubenswrapper[4972]: I1121 11:23:58.988584 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b0d062-1781-4712-aa0d-fe59728e52e1-config-data\") pod \"heat-db-sync-z42h6\" (UID: \"b1b0d062-1781-4712-aa0d-fe59728e52e1\") " pod="openstack/heat-db-sync-z42h6" Nov 21 11:23:59 crc kubenswrapper[4972]: I1121 11:23:59.090128 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th5zx\" (UniqueName: \"kubernetes.io/projected/b1b0d062-1781-4712-aa0d-fe59728e52e1-kube-api-access-th5zx\") pod \"heat-db-sync-z42h6\" (UID: \"b1b0d062-1781-4712-aa0d-fe59728e52e1\") " pod="openstack/heat-db-sync-z42h6" Nov 21 11:23:59 crc kubenswrapper[4972]: I1121 11:23:59.090192 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b0d062-1781-4712-aa0d-fe59728e52e1-config-data\") pod \"heat-db-sync-z42h6\" (UID: \"b1b0d062-1781-4712-aa0d-fe59728e52e1\") " pod="openstack/heat-db-sync-z42h6" Nov 21 11:23:59 crc kubenswrapper[4972]: I1121 11:23:59.090218 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b0d062-1781-4712-aa0d-fe59728e52e1-combined-ca-bundle\") pod \"heat-db-sync-z42h6\" (UID: \"b1b0d062-1781-4712-aa0d-fe59728e52e1\") " pod="openstack/heat-db-sync-z42h6" Nov 21 11:23:59 crc kubenswrapper[4972]: I1121 11:23:59.096680 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b0d062-1781-4712-aa0d-fe59728e52e1-combined-ca-bundle\") pod \"heat-db-sync-z42h6\" (UID: \"b1b0d062-1781-4712-aa0d-fe59728e52e1\") " pod="openstack/heat-db-sync-z42h6" Nov 21 11:23:59 crc kubenswrapper[4972]: I1121 11:23:59.096863 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b0d062-1781-4712-aa0d-fe59728e52e1-config-data\") pod \"heat-db-sync-z42h6\" (UID: \"b1b0d062-1781-4712-aa0d-fe59728e52e1\") " pod="openstack/heat-db-sync-z42h6" 
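The recurring "Probe failed" entries for the horizon pods (startup and readiness probes against http://<pod IP>:8080/dashboard/auth/login/?next=/dashboard/ returning "connection refused") and the later probe="startup" status="started" / readiness status="ready" transitions reflect ordinary HTTP probes on the horizon container. A rough Go sketch of probes with that shape is below; the path and port are taken from the probe output in the log, while the periods and thresholds are assumptions, not values read from the actual Horizon deployment.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// HTTP target matching the URL reported by prober.go in the failure output.
	httpGet := &corev1.HTTPGetAction{
		Path: "/dashboard/auth/login/?next=/dashboard/",
		Port: intstr.FromInt(8080),
	}

	// While the startup probe fails, the kubelet keeps reporting
	// probeType="Startup" ... probeResult="failure" and holds back readiness checks.
	startup := corev1.Probe{
		ProbeHandler:     corev1.ProbeHandler{HTTPGet: httpGet},
		PeriodSeconds:    10, // assumed
		FailureThreshold: 30, // assumed
	}

	// After startup succeeds, readiness failures produce the
	// probeType="Readiness" entries seen around these timestamps.
	readiness := corev1.Probe{
		ProbeHandler:     corev1.ProbeHandler{HTTPGet: httpGet},
		PeriodSeconds:    10, // assumed
		FailureThreshold: 3, // assumed
	}

	fmt.Println(startup.HTTPGet.Path, readiness.PeriodSeconds)
}

Because the startup probe gates the others, the readiness status only flips to "ready" after the status="started" entries, which is the ordering visible in this stretch of the log.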
Nov 21 11:23:59 crc kubenswrapper[4972]: I1121 11:23:59.109620 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th5zx\" (UniqueName: \"kubernetes.io/projected/b1b0d062-1781-4712-aa0d-fe59728e52e1-kube-api-access-th5zx\") pod \"heat-db-sync-z42h6\" (UID: \"b1b0d062-1781-4712-aa0d-fe59728e52e1\") " pod="openstack/heat-db-sync-z42h6" Nov 21 11:23:59 crc kubenswrapper[4972]: I1121 11:23:59.217468 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-z42h6" Nov 21 11:23:59 crc kubenswrapper[4972]: I1121 11:23:59.649462 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-794f64f8c9-96s66" podUID="2c7b1503-7053-4ebc-b7d8-e510d25ea939" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.110:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8080: connect: connection refused" Nov 21 11:23:59 crc kubenswrapper[4972]: I1121 11:23:59.649937 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:23:59 crc kubenswrapper[4972]: I1121 11:23:59.655111 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-z42h6"] Nov 21 11:24:00 crc kubenswrapper[4972]: I1121 11:24:00.073909 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-z42h6" event={"ID":"b1b0d062-1781-4712-aa0d-fe59728e52e1","Type":"ContainerStarted","Data":"55311c69404bd64751ccec45e5bf5a12015d30b79ddbb306b88faff1bf9ce817"} Nov 21 11:24:04 crc kubenswrapper[4972]: I1121 11:24:04.119096 4972 generic.go:334] "Generic (PLEG): container finished" podID="2c7b1503-7053-4ebc-b7d8-e510d25ea939" containerID="ae66ae6140638644985ae0138691d99f39bec7c44166a4c9af1291c65941bd1b" exitCode=137 Nov 21 11:24:04 crc kubenswrapper[4972]: I1121 11:24:04.119220 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-794f64f8c9-96s66" event={"ID":"2c7b1503-7053-4ebc-b7d8-e510d25ea939","Type":"ContainerDied","Data":"ae66ae6140638644985ae0138691d99f39bec7c44166a4c9af1291c65941bd1b"} Nov 21 11:24:06 crc kubenswrapper[4972]: I1121 11:24:06.141986 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:24:06 crc kubenswrapper[4972]: I1121 11:24:06.144716 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-794f64f8c9-96s66" event={"ID":"2c7b1503-7053-4ebc-b7d8-e510d25ea939","Type":"ContainerDied","Data":"477c055fcb594fd72236783656829bb2e1793f69bb7914b721adb03358e199c1"} Nov 21 11:24:06 crc kubenswrapper[4972]: I1121 11:24:06.144854 4972 scope.go:117] "RemoveContainer" containerID="d2ecf321f4b4dd764ec2eb616ae9d8ca1ead486f754d549530ebe394521eff76" Nov 21 11:24:06 crc kubenswrapper[4972]: I1121 11:24:06.239955 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2c7b1503-7053-4ebc-b7d8-e510d25ea939-config-data\") pod \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\" (UID: \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\") " Nov 21 11:24:06 crc kubenswrapper[4972]: I1121 11:24:06.240137 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6rf8\" (UniqueName: \"kubernetes.io/projected/2c7b1503-7053-4ebc-b7d8-e510d25ea939-kube-api-access-v6rf8\") pod \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\" (UID: \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\") " Nov 21 11:24:06 crc kubenswrapper[4972]: I1121 11:24:06.240238 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c7b1503-7053-4ebc-b7d8-e510d25ea939-scripts\") pod \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\" (UID: \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\") " Nov 21 11:24:06 crc kubenswrapper[4972]: I1121 11:24:06.240375 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c7b1503-7053-4ebc-b7d8-e510d25ea939-logs\") pod \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\" (UID: \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\") " Nov 21 11:24:06 crc kubenswrapper[4972]: I1121 11:24:06.240482 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2c7b1503-7053-4ebc-b7d8-e510d25ea939-horizon-secret-key\") pod \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\" (UID: \"2c7b1503-7053-4ebc-b7d8-e510d25ea939\") " Nov 21 11:24:06 crc kubenswrapper[4972]: I1121 11:24:06.241027 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c7b1503-7053-4ebc-b7d8-e510d25ea939-logs" (OuterVolumeSpecName: "logs") pod "2c7b1503-7053-4ebc-b7d8-e510d25ea939" (UID: "2c7b1503-7053-4ebc-b7d8-e510d25ea939"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:24:06 crc kubenswrapper[4972]: I1121 11:24:06.241420 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c7b1503-7053-4ebc-b7d8-e510d25ea939-logs\") on node \"crc\" DevicePath \"\"" Nov 21 11:24:06 crc kubenswrapper[4972]: I1121 11:24:06.266224 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c7b1503-7053-4ebc-b7d8-e510d25ea939-kube-api-access-v6rf8" (OuterVolumeSpecName: "kube-api-access-v6rf8") pod "2c7b1503-7053-4ebc-b7d8-e510d25ea939" (UID: "2c7b1503-7053-4ebc-b7d8-e510d25ea939"). InnerVolumeSpecName "kube-api-access-v6rf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:24:06 crc kubenswrapper[4972]: I1121 11:24:06.268447 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c7b1503-7053-4ebc-b7d8-e510d25ea939-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2c7b1503-7053-4ebc-b7d8-e510d25ea939" (UID: "2c7b1503-7053-4ebc-b7d8-e510d25ea939"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:24:06 crc kubenswrapper[4972]: I1121 11:24:06.276925 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c7b1503-7053-4ebc-b7d8-e510d25ea939-config-data" (OuterVolumeSpecName: "config-data") pod "2c7b1503-7053-4ebc-b7d8-e510d25ea939" (UID: "2c7b1503-7053-4ebc-b7d8-e510d25ea939"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:24:06 crc kubenswrapper[4972]: I1121 11:24:06.283951 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c7b1503-7053-4ebc-b7d8-e510d25ea939-scripts" (OuterVolumeSpecName: "scripts") pod "2c7b1503-7053-4ebc-b7d8-e510d25ea939" (UID: "2c7b1503-7053-4ebc-b7d8-e510d25ea939"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:24:06 crc kubenswrapper[4972]: I1121 11:24:06.346859 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2c7b1503-7053-4ebc-b7d8-e510d25ea939-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:24:06 crc kubenswrapper[4972]: I1121 11:24:06.346936 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6rf8\" (UniqueName: \"kubernetes.io/projected/2c7b1503-7053-4ebc-b7d8-e510d25ea939-kube-api-access-v6rf8\") on node \"crc\" DevicePath \"\"" Nov 21 11:24:06 crc kubenswrapper[4972]: I1121 11:24:06.346949 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c7b1503-7053-4ebc-b7d8-e510d25ea939-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:24:06 crc kubenswrapper[4972]: I1121 11:24:06.346959 4972 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2c7b1503-7053-4ebc-b7d8-e510d25ea939-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 21 11:24:06 crc kubenswrapper[4972]: I1121 11:24:06.363520 4972 scope.go:117] "RemoveContainer" containerID="ae66ae6140638644985ae0138691d99f39bec7c44166a4c9af1291c65941bd1b" Nov 21 11:24:07 crc kubenswrapper[4972]: I1121 11:24:07.160049 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-794f64f8c9-96s66" Nov 21 11:24:07 crc kubenswrapper[4972]: I1121 11:24:07.162325 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-z42h6" event={"ID":"b1b0d062-1781-4712-aa0d-fe59728e52e1","Type":"ContainerStarted","Data":"8baaf5fcfdcf9482ee48ca7d86f8e6f088a3d3ca25d83828703e56359ce019ac"} Nov 21 11:24:07 crc kubenswrapper[4972]: I1121 11:24:07.194290 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-z42h6" podStartSLOduration=2.491862213 podStartE2EDuration="9.194273366s" podCreationTimestamp="2025-11-21 11:23:58 +0000 UTC" firstStartedPulling="2025-11-21 11:23:59.661208866 +0000 UTC m=+6184.770351384" lastFinishedPulling="2025-11-21 11:24:06.363620039 +0000 UTC m=+6191.472762537" observedRunningTime="2025-11-21 11:24:07.183418998 +0000 UTC m=+6192.292561506" watchObservedRunningTime="2025-11-21 11:24:07.194273366 +0000 UTC m=+6192.303415864" Nov 21 11:24:07 crc kubenswrapper[4972]: I1121 11:24:07.216587 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-794f64f8c9-96s66"] Nov 21 11:24:07 crc kubenswrapper[4972]: I1121 11:24:07.223898 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-794f64f8c9-96s66"] Nov 21 11:24:07 crc kubenswrapper[4972]: I1121 11:24:07.776056 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c7b1503-7053-4ebc-b7d8-e510d25ea939" path="/var/lib/kubelet/pods/2c7b1503-7053-4ebc-b7d8-e510d25ea939/volumes" Nov 21 11:24:09 crc kubenswrapper[4972]: I1121 11:24:09.120530 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-58456df47f-c7fp7" Nov 21 11:24:09 crc kubenswrapper[4972]: I1121 11:24:09.193450 4972 generic.go:334] "Generic (PLEG): container finished" podID="b1b0d062-1781-4712-aa0d-fe59728e52e1" containerID="8baaf5fcfdcf9482ee48ca7d86f8e6f088a3d3ca25d83828703e56359ce019ac" exitCode=0 Nov 21 11:24:09 crc kubenswrapper[4972]: I1121 11:24:09.193501 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-z42h6" event={"ID":"b1b0d062-1781-4712-aa0d-fe59728e52e1","Type":"ContainerDied","Data":"8baaf5fcfdcf9482ee48ca7d86f8e6f088a3d3ca25d83828703e56359ce019ac"} Nov 21 11:24:10 crc kubenswrapper[4972]: I1121 11:24:10.624427 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-z42h6" Nov 21 11:24:10 crc kubenswrapper[4972]: I1121 11:24:10.680709 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th5zx\" (UniqueName: \"kubernetes.io/projected/b1b0d062-1781-4712-aa0d-fe59728e52e1-kube-api-access-th5zx\") pod \"b1b0d062-1781-4712-aa0d-fe59728e52e1\" (UID: \"b1b0d062-1781-4712-aa0d-fe59728e52e1\") " Nov 21 11:24:10 crc kubenswrapper[4972]: I1121 11:24:10.680827 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b0d062-1781-4712-aa0d-fe59728e52e1-config-data\") pod \"b1b0d062-1781-4712-aa0d-fe59728e52e1\" (UID: \"b1b0d062-1781-4712-aa0d-fe59728e52e1\") " Nov 21 11:24:10 crc kubenswrapper[4972]: I1121 11:24:10.680960 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b0d062-1781-4712-aa0d-fe59728e52e1-combined-ca-bundle\") pod \"b1b0d062-1781-4712-aa0d-fe59728e52e1\" (UID: \"b1b0d062-1781-4712-aa0d-fe59728e52e1\") " Nov 21 11:24:10 crc kubenswrapper[4972]: I1121 11:24:10.693199 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b0d062-1781-4712-aa0d-fe59728e52e1-kube-api-access-th5zx" (OuterVolumeSpecName: "kube-api-access-th5zx") pod "b1b0d062-1781-4712-aa0d-fe59728e52e1" (UID: "b1b0d062-1781-4712-aa0d-fe59728e52e1"). InnerVolumeSpecName "kube-api-access-th5zx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:24:10 crc kubenswrapper[4972]: I1121 11:24:10.731491 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b0d062-1781-4712-aa0d-fe59728e52e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1b0d062-1781-4712-aa0d-fe59728e52e1" (UID: "b1b0d062-1781-4712-aa0d-fe59728e52e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:24:10 crc kubenswrapper[4972]: I1121 11:24:10.734389 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-58456df47f-c7fp7" Nov 21 11:24:10 crc kubenswrapper[4972]: I1121 11:24:10.786373 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th5zx\" (UniqueName: \"kubernetes.io/projected/b1b0d062-1781-4712-aa0d-fe59728e52e1-kube-api-access-th5zx\") on node \"crc\" DevicePath \"\"" Nov 21 11:24:10 crc kubenswrapper[4972]: I1121 11:24:10.786438 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b0d062-1781-4712-aa0d-fe59728e52e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:24:10 crc kubenswrapper[4972]: I1121 11:24:10.805782 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b0d062-1781-4712-aa0d-fe59728e52e1-config-data" (OuterVolumeSpecName: "config-data") pod "b1b0d062-1781-4712-aa0d-fe59728e52e1" (UID: "b1b0d062-1781-4712-aa0d-fe59728e52e1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:24:10 crc kubenswrapper[4972]: I1121 11:24:10.828624 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5866d78465-8kt6n"] Nov 21 11:24:10 crc kubenswrapper[4972]: I1121 11:24:10.829081 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5866d78465-8kt6n" podUID="482ac932-3c0a-43e7-8878-9989d75c9e29" containerName="horizon-log" containerID="cri-o://c769dd18f10a20f53e31878690b3ab34f13b6ff6c1b5cef5523d056c6803d8e1" gracePeriod=30 Nov 21 11:24:10 crc kubenswrapper[4972]: I1121 11:24:10.829112 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5866d78465-8kt6n" podUID="482ac932-3c0a-43e7-8878-9989d75c9e29" containerName="horizon" containerID="cri-o://a9ebcf389d418c73c3d4d561b2ba3b23235f4d522cd2cacb2f0b990a12f2bfe2" gracePeriod=30 Nov 21 11:24:10 crc kubenswrapper[4972]: I1121 11:24:10.891027 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b0d062-1781-4712-aa0d-fe59728e52e1-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:24:11 crc kubenswrapper[4972]: I1121 11:24:11.217566 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-z42h6" event={"ID":"b1b0d062-1781-4712-aa0d-fe59728e52e1","Type":"ContainerDied","Data":"55311c69404bd64751ccec45e5bf5a12015d30b79ddbb306b88faff1bf9ce817"} Nov 21 11:24:11 crc kubenswrapper[4972]: I1121 11:24:11.217925 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55311c69404bd64751ccec45e5bf5a12015d30b79ddbb306b88faff1bf9ce817" Nov 21 11:24:11 crc kubenswrapper[4972]: I1121 11:24:11.217628 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-z42h6" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.400121 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5445969844-vchhd"] Nov 21 11:24:12 crc kubenswrapper[4972]: E1121 11:24:12.400635 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c7b1503-7053-4ebc-b7d8-e510d25ea939" containerName="horizon-log" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.400648 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c7b1503-7053-4ebc-b7d8-e510d25ea939" containerName="horizon-log" Nov 21 11:24:12 crc kubenswrapper[4972]: E1121 11:24:12.400668 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b0d062-1781-4712-aa0d-fe59728e52e1" containerName="heat-db-sync" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.400675 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b0d062-1781-4712-aa0d-fe59728e52e1" containerName="heat-db-sync" Nov 21 11:24:12 crc kubenswrapper[4972]: E1121 11:24:12.400706 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c7b1503-7053-4ebc-b7d8-e510d25ea939" containerName="horizon" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.400712 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c7b1503-7053-4ebc-b7d8-e510d25ea939" containerName="horizon" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.400915 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c7b1503-7053-4ebc-b7d8-e510d25ea939" containerName="horizon-log" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.400937 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c7b1503-7053-4ebc-b7d8-e510d25ea939" containerName="horizon" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.400945 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b0d062-1781-4712-aa0d-fe59728e52e1" containerName="heat-db-sync" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.404018 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5445969844-vchhd" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.408240 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-xdc44" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.408408 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.408476 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.430083 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5445969844-vchhd"] Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.531888 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a00d0391-8581-4b9d-810e-1e68dedc2718-config-data-custom\") pod \"heat-engine-5445969844-vchhd\" (UID: \"a00d0391-8581-4b9d-810e-1e68dedc2718\") " pod="openstack/heat-engine-5445969844-vchhd" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.531992 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a00d0391-8581-4b9d-810e-1e68dedc2718-config-data\") pod \"heat-engine-5445969844-vchhd\" (UID: \"a00d0391-8581-4b9d-810e-1e68dedc2718\") " pod="openstack/heat-engine-5445969844-vchhd" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.532046 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmrqh\" (UniqueName: \"kubernetes.io/projected/a00d0391-8581-4b9d-810e-1e68dedc2718-kube-api-access-lmrqh\") pod \"heat-engine-5445969844-vchhd\" (UID: \"a00d0391-8581-4b9d-810e-1e68dedc2718\") " pod="openstack/heat-engine-5445969844-vchhd" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.532085 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a00d0391-8581-4b9d-810e-1e68dedc2718-combined-ca-bundle\") pod \"heat-engine-5445969844-vchhd\" (UID: \"a00d0391-8581-4b9d-810e-1e68dedc2718\") " pod="openstack/heat-engine-5445969844-vchhd" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.607481 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7f5df49c6f-7t5gf"] Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.614460 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7f5df49c6f-7t5gf" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.619953 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7f5df49c6f-7t5gf"] Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.626107 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.633879 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a00d0391-8581-4b9d-810e-1e68dedc2718-config-data-custom\") pod \"heat-engine-5445969844-vchhd\" (UID: \"a00d0391-8581-4b9d-810e-1e68dedc2718\") " pod="openstack/heat-engine-5445969844-vchhd" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.633972 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a00d0391-8581-4b9d-810e-1e68dedc2718-config-data\") pod \"heat-engine-5445969844-vchhd\" (UID: \"a00d0391-8581-4b9d-810e-1e68dedc2718\") " pod="openstack/heat-engine-5445969844-vchhd" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.634024 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmrqh\" (UniqueName: \"kubernetes.io/projected/a00d0391-8581-4b9d-810e-1e68dedc2718-kube-api-access-lmrqh\") pod \"heat-engine-5445969844-vchhd\" (UID: \"a00d0391-8581-4b9d-810e-1e68dedc2718\") " pod="openstack/heat-engine-5445969844-vchhd" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.634061 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a00d0391-8581-4b9d-810e-1e68dedc2718-combined-ca-bundle\") pod \"heat-engine-5445969844-vchhd\" (UID: \"a00d0391-8581-4b9d-810e-1e68dedc2718\") " pod="openstack/heat-engine-5445969844-vchhd" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.661533 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a00d0391-8581-4b9d-810e-1e68dedc2718-config-data-custom\") pod \"heat-engine-5445969844-vchhd\" (UID: \"a00d0391-8581-4b9d-810e-1e68dedc2718\") " pod="openstack/heat-engine-5445969844-vchhd" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.665182 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmrqh\" (UniqueName: \"kubernetes.io/projected/a00d0391-8581-4b9d-810e-1e68dedc2718-kube-api-access-lmrqh\") pod \"heat-engine-5445969844-vchhd\" (UID: \"a00d0391-8581-4b9d-810e-1e68dedc2718\") " pod="openstack/heat-engine-5445969844-vchhd" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.681391 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a00d0391-8581-4b9d-810e-1e68dedc2718-combined-ca-bundle\") pod \"heat-engine-5445969844-vchhd\" (UID: \"a00d0391-8581-4b9d-810e-1e68dedc2718\") " pod="openstack/heat-engine-5445969844-vchhd" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.703653 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6547f5bb66-pppjx"] Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.705071 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6547f5bb66-pppjx" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.715169 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.720055 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a00d0391-8581-4b9d-810e-1e68dedc2718-config-data\") pod \"heat-engine-5445969844-vchhd\" (UID: \"a00d0391-8581-4b9d-810e-1e68dedc2718\") " pod="openstack/heat-engine-5445969844-vchhd" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.741438 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5445969844-vchhd" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.742610 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd80395-ea39-4363-9474-949cf53049aa-combined-ca-bundle\") pod \"heat-cfnapi-7f5df49c6f-7t5gf\" (UID: \"9bd80395-ea39-4363-9474-949cf53049aa\") " pod="openstack/heat-cfnapi-7f5df49c6f-7t5gf" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.742676 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd80395-ea39-4363-9474-949cf53049aa-config-data\") pod \"heat-cfnapi-7f5df49c6f-7t5gf\" (UID: \"9bd80395-ea39-4363-9474-949cf53049aa\") " pod="openstack/heat-cfnapi-7f5df49c6f-7t5gf" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.742712 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bd80395-ea39-4363-9474-949cf53049aa-config-data-custom\") pod \"heat-cfnapi-7f5df49c6f-7t5gf\" (UID: \"9bd80395-ea39-4363-9474-949cf53049aa\") " pod="openstack/heat-cfnapi-7f5df49c6f-7t5gf" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.742799 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t9sq\" (UniqueName: \"kubernetes.io/projected/9bd80395-ea39-4363-9474-949cf53049aa-kube-api-access-9t9sq\") pod \"heat-cfnapi-7f5df49c6f-7t5gf\" (UID: \"9bd80395-ea39-4363-9474-949cf53049aa\") " pod="openstack/heat-cfnapi-7f5df49c6f-7t5gf" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.783904 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6547f5bb66-pppjx"] Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.902157 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bd80395-ea39-4363-9474-949cf53049aa-config-data-custom\") pod \"heat-cfnapi-7f5df49c6f-7t5gf\" (UID: \"9bd80395-ea39-4363-9474-949cf53049aa\") " pod="openstack/heat-cfnapi-7f5df49c6f-7t5gf" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.903159 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t9sq\" (UniqueName: \"kubernetes.io/projected/9bd80395-ea39-4363-9474-949cf53049aa-kube-api-access-9t9sq\") pod \"heat-cfnapi-7f5df49c6f-7t5gf\" (UID: \"9bd80395-ea39-4363-9474-949cf53049aa\") " pod="openstack/heat-cfnapi-7f5df49c6f-7t5gf" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.903353 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd80395-ea39-4363-9474-949cf53049aa-combined-ca-bundle\") pod \"heat-cfnapi-7f5df49c6f-7t5gf\" (UID: \"9bd80395-ea39-4363-9474-949cf53049aa\") " pod="openstack/heat-cfnapi-7f5df49c6f-7t5gf" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.903407 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a001046-7b32-4075-8e39-d4d358bba56c-config-data-custom\") pod \"heat-api-6547f5bb66-pppjx\" (UID: \"1a001046-7b32-4075-8e39-d4d358bba56c\") " pod="openstack/heat-api-6547f5bb66-pppjx" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.903477 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2vfc\" (UniqueName: \"kubernetes.io/projected/1a001046-7b32-4075-8e39-d4d358bba56c-kube-api-access-d2vfc\") pod \"heat-api-6547f5bb66-pppjx\" (UID: \"1a001046-7b32-4075-8e39-d4d358bba56c\") " pod="openstack/heat-api-6547f5bb66-pppjx" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.903606 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a001046-7b32-4075-8e39-d4d358bba56c-combined-ca-bundle\") pod \"heat-api-6547f5bb66-pppjx\" (UID: \"1a001046-7b32-4075-8e39-d4d358bba56c\") " pod="openstack/heat-api-6547f5bb66-pppjx" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.903704 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a001046-7b32-4075-8e39-d4d358bba56c-config-data\") pod \"heat-api-6547f5bb66-pppjx\" (UID: \"1a001046-7b32-4075-8e39-d4d358bba56c\") " pod="openstack/heat-api-6547f5bb66-pppjx" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.903745 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd80395-ea39-4363-9474-949cf53049aa-config-data\") pod \"heat-cfnapi-7f5df49c6f-7t5gf\" (UID: \"9bd80395-ea39-4363-9474-949cf53049aa\") " pod="openstack/heat-cfnapi-7f5df49c6f-7t5gf" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.934632 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bd80395-ea39-4363-9474-949cf53049aa-config-data-custom\") pod \"heat-cfnapi-7f5df49c6f-7t5gf\" (UID: \"9bd80395-ea39-4363-9474-949cf53049aa\") " pod="openstack/heat-cfnapi-7f5df49c6f-7t5gf" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.936072 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd80395-ea39-4363-9474-949cf53049aa-combined-ca-bundle\") pod \"heat-cfnapi-7f5df49c6f-7t5gf\" (UID: \"9bd80395-ea39-4363-9474-949cf53049aa\") " pod="openstack/heat-cfnapi-7f5df49c6f-7t5gf" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.942301 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd80395-ea39-4363-9474-949cf53049aa-config-data\") pod \"heat-cfnapi-7f5df49c6f-7t5gf\" (UID: \"9bd80395-ea39-4363-9474-949cf53049aa\") " pod="openstack/heat-cfnapi-7f5df49c6f-7t5gf" Nov 21 11:24:12 crc kubenswrapper[4972]: I1121 11:24:12.955786 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9t9sq\" (UniqueName: \"kubernetes.io/projected/9bd80395-ea39-4363-9474-949cf53049aa-kube-api-access-9t9sq\") pod \"heat-cfnapi-7f5df49c6f-7t5gf\" (UID: \"9bd80395-ea39-4363-9474-949cf53049aa\") " pod="openstack/heat-cfnapi-7f5df49c6f-7t5gf" Nov 21 11:24:13 crc kubenswrapper[4972]: I1121 11:24:13.005781 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a001046-7b32-4075-8e39-d4d358bba56c-config-data-custom\") pod \"heat-api-6547f5bb66-pppjx\" (UID: \"1a001046-7b32-4075-8e39-d4d358bba56c\") " pod="openstack/heat-api-6547f5bb66-pppjx" Nov 21 11:24:13 crc kubenswrapper[4972]: I1121 11:24:13.005850 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2vfc\" (UniqueName: \"kubernetes.io/projected/1a001046-7b32-4075-8e39-d4d358bba56c-kube-api-access-d2vfc\") pod \"heat-api-6547f5bb66-pppjx\" (UID: \"1a001046-7b32-4075-8e39-d4d358bba56c\") " pod="openstack/heat-api-6547f5bb66-pppjx" Nov 21 11:24:13 crc kubenswrapper[4972]: I1121 11:24:13.005890 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a001046-7b32-4075-8e39-d4d358bba56c-combined-ca-bundle\") pod \"heat-api-6547f5bb66-pppjx\" (UID: \"1a001046-7b32-4075-8e39-d4d358bba56c\") " pod="openstack/heat-api-6547f5bb66-pppjx" Nov 21 11:24:13 crc kubenswrapper[4972]: I1121 11:24:13.005921 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a001046-7b32-4075-8e39-d4d358bba56c-config-data\") pod \"heat-api-6547f5bb66-pppjx\" (UID: \"1a001046-7b32-4075-8e39-d4d358bba56c\") " pod="openstack/heat-api-6547f5bb66-pppjx" Nov 21 11:24:13 crc kubenswrapper[4972]: I1121 11:24:13.012503 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a001046-7b32-4075-8e39-d4d358bba56c-combined-ca-bundle\") pod \"heat-api-6547f5bb66-pppjx\" (UID: \"1a001046-7b32-4075-8e39-d4d358bba56c\") " pod="openstack/heat-api-6547f5bb66-pppjx" Nov 21 11:24:13 crc kubenswrapper[4972]: I1121 11:24:13.012522 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a001046-7b32-4075-8e39-d4d358bba56c-config-data-custom\") pod \"heat-api-6547f5bb66-pppjx\" (UID: \"1a001046-7b32-4075-8e39-d4d358bba56c\") " pod="openstack/heat-api-6547f5bb66-pppjx" Nov 21 11:24:13 crc kubenswrapper[4972]: I1121 11:24:13.012851 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a001046-7b32-4075-8e39-d4d358bba56c-config-data\") pod \"heat-api-6547f5bb66-pppjx\" (UID: \"1a001046-7b32-4075-8e39-d4d358bba56c\") " pod="openstack/heat-api-6547f5bb66-pppjx" Nov 21 11:24:13 crc kubenswrapper[4972]: I1121 11:24:13.023891 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2vfc\" (UniqueName: \"kubernetes.io/projected/1a001046-7b32-4075-8e39-d4d358bba56c-kube-api-access-d2vfc\") pod \"heat-api-6547f5bb66-pppjx\" (UID: \"1a001046-7b32-4075-8e39-d4d358bba56c\") " pod="openstack/heat-api-6547f5bb66-pppjx" Nov 21 11:24:13 crc kubenswrapper[4972]: I1121 11:24:13.043105 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6547f5bb66-pppjx" Nov 21 11:24:13 crc kubenswrapper[4972]: I1121 11:24:13.232301 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7f5df49c6f-7t5gf" Nov 21 11:24:13 crc kubenswrapper[4972]: I1121 11:24:13.320055 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5445969844-vchhd"] Nov 21 11:24:13 crc kubenswrapper[4972]: W1121 11:24:13.336765 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda00d0391_8581_4b9d_810e_1e68dedc2718.slice/crio-2ff8a828c97c3235a906961fe8965ec97f0a4a5c1a0820b1f1b303114009809c WatchSource:0}: Error finding container 2ff8a828c97c3235a906961fe8965ec97f0a4a5c1a0820b1f1b303114009809c: Status 404 returned error can't find the container with id 2ff8a828c97c3235a906961fe8965ec97f0a4a5c1a0820b1f1b303114009809c Nov 21 11:24:13 crc kubenswrapper[4972]: I1121 11:24:13.545633 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6547f5bb66-pppjx"] Nov 21 11:24:13 crc kubenswrapper[4972]: I1121 11:24:13.770653 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7f5df49c6f-7t5gf"] Nov 21 11:24:14 crc kubenswrapper[4972]: I1121 11:24:14.269947 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5445969844-vchhd" event={"ID":"a00d0391-8581-4b9d-810e-1e68dedc2718","Type":"ContainerStarted","Data":"f60c24799e76d17ae1b4f2461ad64c40a00e604064a541ad9462762bffe24d5a"} Nov 21 11:24:14 crc kubenswrapper[4972]: I1121 11:24:14.272944 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5445969844-vchhd" event={"ID":"a00d0391-8581-4b9d-810e-1e68dedc2718","Type":"ContainerStarted","Data":"2ff8a828c97c3235a906961fe8965ec97f0a4a5c1a0820b1f1b303114009809c"} Nov 21 11:24:14 crc kubenswrapper[4972]: I1121 11:24:14.272992 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5445969844-vchhd" Nov 21 11:24:14 crc kubenswrapper[4972]: I1121 11:24:14.275336 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6547f5bb66-pppjx" event={"ID":"1a001046-7b32-4075-8e39-d4d358bba56c","Type":"ContainerStarted","Data":"196f5bfb4d96205cb481e43ef8f166bd6952e9bdb3c5cd0681af4f971716d96b"} Nov 21 11:24:14 crc kubenswrapper[4972]: I1121 11:24:14.276936 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f5df49c6f-7t5gf" event={"ID":"9bd80395-ea39-4363-9474-949cf53049aa","Type":"ContainerStarted","Data":"3792808b773c61e6c2bd45d4033635f195d854b12aa6efea68e1c6091463976d"} Nov 21 11:24:14 crc kubenswrapper[4972]: I1121 11:24:14.279445 4972 generic.go:334] "Generic (PLEG): container finished" podID="482ac932-3c0a-43e7-8878-9989d75c9e29" containerID="a9ebcf389d418c73c3d4d561b2ba3b23235f4d522cd2cacb2f0b990a12f2bfe2" exitCode=0 Nov 21 11:24:14 crc kubenswrapper[4972]: I1121 11:24:14.279490 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5866d78465-8kt6n" event={"ID":"482ac932-3c0a-43e7-8878-9989d75c9e29","Type":"ContainerDied","Data":"a9ebcf389d418c73c3d4d561b2ba3b23235f4d522cd2cacb2f0b990a12f2bfe2"} Nov 21 11:24:14 crc kubenswrapper[4972]: I1121 11:24:14.298170 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5445969844-vchhd" podStartSLOduration=2.29812356 podStartE2EDuration="2.29812356s" 
podCreationTimestamp="2025-11-21 11:24:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:24:14.288634809 +0000 UTC m=+6199.397777307" watchObservedRunningTime="2025-11-21 11:24:14.29812356 +0000 UTC m=+6199.407266058" Nov 21 11:24:16 crc kubenswrapper[4972]: I1121 11:24:16.044172 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9b8c-account-create-ms545"] Nov 21 11:24:16 crc kubenswrapper[4972]: I1121 11:24:16.055203 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-m66t6"] Nov 21 11:24:16 crc kubenswrapper[4972]: I1121 11:24:16.065309 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9b8c-account-create-ms545"] Nov 21 11:24:16 crc kubenswrapper[4972]: I1121 11:24:16.073857 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-m66t6"] Nov 21 11:24:16 crc kubenswrapper[4972]: I1121 11:24:16.302293 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f5df49c6f-7t5gf" event={"ID":"9bd80395-ea39-4363-9474-949cf53049aa","Type":"ContainerStarted","Data":"26b04d53172b2d1ddba6970ac0d7876de5ba9a1b952c4d8a6f60ba31008c1860"} Nov 21 11:24:16 crc kubenswrapper[4972]: I1121 11:24:16.302736 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7f5df49c6f-7t5gf" Nov 21 11:24:16 crc kubenswrapper[4972]: I1121 11:24:16.303997 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6547f5bb66-pppjx" event={"ID":"1a001046-7b32-4075-8e39-d4d358bba56c","Type":"ContainerStarted","Data":"afd61f3b8ecdcb5c7534eb3c4bbd115e8eb679681601a39d7e5bba1bbc8aa3a4"} Nov 21 11:24:16 crc kubenswrapper[4972]: I1121 11:24:16.322427 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7f5df49c6f-7t5gf" podStartSLOduration=2.7484299549999998 podStartE2EDuration="4.322413264s" podCreationTimestamp="2025-11-21 11:24:12 +0000 UTC" firstStartedPulling="2025-11-21 11:24:13.766159503 +0000 UTC m=+6198.875302001" lastFinishedPulling="2025-11-21 11:24:15.340142812 +0000 UTC m=+6200.449285310" observedRunningTime="2025-11-21 11:24:16.318768357 +0000 UTC m=+6201.427910885" watchObservedRunningTime="2025-11-21 11:24:16.322413264 +0000 UTC m=+6201.431555762" Nov 21 11:24:16 crc kubenswrapper[4972]: I1121 11:24:16.340982 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6547f5bb66-pppjx" podStartSLOduration=2.577454446 podStartE2EDuration="4.340956925s" podCreationTimestamp="2025-11-21 11:24:12 +0000 UTC" firstStartedPulling="2025-11-21 11:24:13.571168859 +0000 UTC m=+6198.680311357" lastFinishedPulling="2025-11-21 11:24:15.334671338 +0000 UTC m=+6200.443813836" observedRunningTime="2025-11-21 11:24:16.3354767 +0000 UTC m=+6201.444619208" watchObservedRunningTime="2025-11-21 11:24:16.340956925 +0000 UTC m=+6201.450099433" Nov 21 11:24:17 crc kubenswrapper[4972]: I1121 11:24:17.317037 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6547f5bb66-pppjx" Nov 21 11:24:17 crc kubenswrapper[4972]: I1121 11:24:17.776047 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60e15a23-4cc6-4c73-a4a3-008c59898063" path="/var/lib/kubelet/pods/60e15a23-4cc6-4c73-a4a3-008c59898063/volumes" Nov 21 11:24:17 crc kubenswrapper[4972]: I1121 11:24:17.777080 4972 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="e2bfaed9-6d3e-4c14-b729-c8543e84abdc" path="/var/lib/kubelet/pods/e2bfaed9-6d3e-4c14-b729-c8543e84abdc/volumes" Nov 21 11:24:20 crc kubenswrapper[4972]: I1121 11:24:20.201373 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5866d78465-8kt6n" podUID="482ac932-3c0a-43e7-8878-9989d75c9e29" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.111:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.111:8080: connect: connection refused" Nov 21 11:24:24 crc kubenswrapper[4972]: I1121 11:24:24.036888 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-2t6tp"] Nov 21 11:24:24 crc kubenswrapper[4972]: I1121 11:24:24.051237 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-2t6tp"] Nov 21 11:24:24 crc kubenswrapper[4972]: I1121 11:24:24.441636 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6547f5bb66-pppjx" Nov 21 11:24:24 crc kubenswrapper[4972]: I1121 11:24:24.706370 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7f5df49c6f-7t5gf" Nov 21 11:24:25 crc kubenswrapper[4972]: I1121 11:24:25.778602 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b97204f4-1052-458f-8b04-4802e5fa78ad" path="/var/lib/kubelet/pods/b97204f4-1052-458f-8b04-4802e5fa78ad/volumes" Nov 21 11:24:26 crc kubenswrapper[4972]: I1121 11:24:26.179176 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:24:26 crc kubenswrapper[4972]: I1121 11:24:26.179266 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:24:26 crc kubenswrapper[4972]: I1121 11:24:26.179344 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 11:24:26 crc kubenswrapper[4972]: I1121 11:24:26.180628 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ec025f5e23fdc9483086d949001e8977fb85a6d1c335c571eb4db2f58dafa45"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 11:24:26 crc kubenswrapper[4972]: I1121 11:24:26.180742 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://8ec025f5e23fdc9483086d949001e8977fb85a6d1c335c571eb4db2f58dafa45" gracePeriod=600 Nov 21 11:24:26 crc kubenswrapper[4972]: I1121 11:24:26.450141 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="8ec025f5e23fdc9483086d949001e8977fb85a6d1c335c571eb4db2f58dafa45" exitCode=0 Nov 21 11:24:26 crc kubenswrapper[4972]: 
I1121 11:24:26.450186 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"8ec025f5e23fdc9483086d949001e8977fb85a6d1c335c571eb4db2f58dafa45"} Nov 21 11:24:26 crc kubenswrapper[4972]: I1121 11:24:26.450217 4972 scope.go:117] "RemoveContainer" containerID="94f8f4cad97d9383aa8938b39fe1e589614cfdabe55a1b608cc59b989df9a3b1" Nov 21 11:24:27 crc kubenswrapper[4972]: I1121 11:24:27.470055 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453"} Nov 21 11:24:30 crc kubenswrapper[4972]: I1121 11:24:30.200774 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5866d78465-8kt6n" podUID="482ac932-3c0a-43e7-8878-9989d75c9e29" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.111:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.111:8080: connect: connection refused" Nov 21 11:24:32 crc kubenswrapper[4972]: I1121 11:24:32.785599 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5445969844-vchhd" Nov 21 11:24:33 crc kubenswrapper[4972]: I1121 11:24:33.818998 4972 scope.go:117] "RemoveContainer" containerID="df8ab6020cb43d59d35da9929b4c2835e5d8ed124c4d62c1e5d258930fc03bba" Nov 21 11:24:33 crc kubenswrapper[4972]: I1121 11:24:33.874297 4972 scope.go:117] "RemoveContainer" containerID="564fdaef9d54b73719d7b5389493ae0e18511631a530b489a1ec15bae564f5ec" Nov 21 11:24:33 crc kubenswrapper[4972]: I1121 11:24:33.916317 4972 scope.go:117] "RemoveContainer" containerID="f5f6b3799a11c77973f7a3cc73d0ca7a15c8c615e9d9be51892258b6a3cd5697" Nov 21 11:24:33 crc kubenswrapper[4972]: I1121 11:24:33.962588 4972 scope.go:117] "RemoveContainer" containerID="616198d0d0d7868fb49309684d39ff18e2be04f210e6496b20fe5e4e353799c9" Nov 21 11:24:40 crc kubenswrapper[4972]: I1121 11:24:40.201467 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5866d78465-8kt6n" podUID="482ac932-3c0a-43e7-8878-9989d75c9e29" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.111:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.111:8080: connect: connection refused" Nov 21 11:24:40 crc kubenswrapper[4972]: I1121 11:24:40.202426 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.455642 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.530023 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/482ac932-3c0a-43e7-8878-9989d75c9e29-horizon-secret-key\") pod \"482ac932-3c0a-43e7-8878-9989d75c9e29\" (UID: \"482ac932-3c0a-43e7-8878-9989d75c9e29\") " Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.530576 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/482ac932-3c0a-43e7-8878-9989d75c9e29-logs\") pod \"482ac932-3c0a-43e7-8878-9989d75c9e29\" (UID: \"482ac932-3c0a-43e7-8878-9989d75c9e29\") " Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.530616 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5vxq\" (UniqueName: \"kubernetes.io/projected/482ac932-3c0a-43e7-8878-9989d75c9e29-kube-api-access-k5vxq\") pod \"482ac932-3c0a-43e7-8878-9989d75c9e29\" (UID: \"482ac932-3c0a-43e7-8878-9989d75c9e29\") " Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.530658 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/482ac932-3c0a-43e7-8878-9989d75c9e29-config-data\") pod \"482ac932-3c0a-43e7-8878-9989d75c9e29\" (UID: \"482ac932-3c0a-43e7-8878-9989d75c9e29\") " Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.530731 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/482ac932-3c0a-43e7-8878-9989d75c9e29-scripts\") pod \"482ac932-3c0a-43e7-8878-9989d75c9e29\" (UID: \"482ac932-3c0a-43e7-8878-9989d75c9e29\") " Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.531037 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/482ac932-3c0a-43e7-8878-9989d75c9e29-logs" (OuterVolumeSpecName: "logs") pod "482ac932-3c0a-43e7-8878-9989d75c9e29" (UID: "482ac932-3c0a-43e7-8878-9989d75c9e29"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.531527 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/482ac932-3c0a-43e7-8878-9989d75c9e29-logs\") on node \"crc\" DevicePath \"\"" Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.538223 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/482ac932-3c0a-43e7-8878-9989d75c9e29-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "482ac932-3c0a-43e7-8878-9989d75c9e29" (UID: "482ac932-3c0a-43e7-8878-9989d75c9e29"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.540877 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/482ac932-3c0a-43e7-8878-9989d75c9e29-kube-api-access-k5vxq" (OuterVolumeSpecName: "kube-api-access-k5vxq") pod "482ac932-3c0a-43e7-8878-9989d75c9e29" (UID: "482ac932-3c0a-43e7-8878-9989d75c9e29"). InnerVolumeSpecName "kube-api-access-k5vxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.563762 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/482ac932-3c0a-43e7-8878-9989d75c9e29-config-data" (OuterVolumeSpecName: "config-data") pod "482ac932-3c0a-43e7-8878-9989d75c9e29" (UID: "482ac932-3c0a-43e7-8878-9989d75c9e29"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.564133 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/482ac932-3c0a-43e7-8878-9989d75c9e29-scripts" (OuterVolumeSpecName: "scripts") pod "482ac932-3c0a-43e7-8878-9989d75c9e29" (UID: "482ac932-3c0a-43e7-8878-9989d75c9e29"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.633167 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5vxq\" (UniqueName: \"kubernetes.io/projected/482ac932-3c0a-43e7-8878-9989d75c9e29-kube-api-access-k5vxq\") on node \"crc\" DevicePath \"\"" Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.633210 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/482ac932-3c0a-43e7-8878-9989d75c9e29-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.633223 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/482ac932-3c0a-43e7-8878-9989d75c9e29-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.633233 4972 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/482ac932-3c0a-43e7-8878-9989d75c9e29-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.640033 4972 generic.go:334] "Generic (PLEG): container finished" podID="482ac932-3c0a-43e7-8878-9989d75c9e29" containerID="c769dd18f10a20f53e31878690b3ab34f13b6ff6c1b5cef5523d056c6803d8e1" exitCode=137 Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.640072 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5866d78465-8kt6n" event={"ID":"482ac932-3c0a-43e7-8878-9989d75c9e29","Type":"ContainerDied","Data":"c769dd18f10a20f53e31878690b3ab34f13b6ff6c1b5cef5523d056c6803d8e1"} Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.640098 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5866d78465-8kt6n" event={"ID":"482ac932-3c0a-43e7-8878-9989d75c9e29","Type":"ContainerDied","Data":"ce331d3beae4bff595d17a5100511659559d6933c7a0552d90b70a010f07bad3"} Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.640113 4972 scope.go:117] "RemoveContainer" containerID="a9ebcf389d418c73c3d4d561b2ba3b23235f4d522cd2cacb2f0b990a12f2bfe2" Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.640235 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5866d78465-8kt6n" Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.670021 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5866d78465-8kt6n"] Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.679768 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5866d78465-8kt6n"] Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.771257 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="482ac932-3c0a-43e7-8878-9989d75c9e29" path="/var/lib/kubelet/pods/482ac932-3c0a-43e7-8878-9989d75c9e29/volumes" Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.803696 4972 scope.go:117] "RemoveContainer" containerID="c769dd18f10a20f53e31878690b3ab34f13b6ff6c1b5cef5523d056c6803d8e1" Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.824823 4972 scope.go:117] "RemoveContainer" containerID="a9ebcf389d418c73c3d4d561b2ba3b23235f4d522cd2cacb2f0b990a12f2bfe2" Nov 21 11:24:41 crc kubenswrapper[4972]: E1121 11:24:41.825247 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9ebcf389d418c73c3d4d561b2ba3b23235f4d522cd2cacb2f0b990a12f2bfe2\": container with ID starting with a9ebcf389d418c73c3d4d561b2ba3b23235f4d522cd2cacb2f0b990a12f2bfe2 not found: ID does not exist" containerID="a9ebcf389d418c73c3d4d561b2ba3b23235f4d522cd2cacb2f0b990a12f2bfe2" Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.825339 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9ebcf389d418c73c3d4d561b2ba3b23235f4d522cd2cacb2f0b990a12f2bfe2"} err="failed to get container status \"a9ebcf389d418c73c3d4d561b2ba3b23235f4d522cd2cacb2f0b990a12f2bfe2\": rpc error: code = NotFound desc = could not find container \"a9ebcf389d418c73c3d4d561b2ba3b23235f4d522cd2cacb2f0b990a12f2bfe2\": container with ID starting with a9ebcf389d418c73c3d4d561b2ba3b23235f4d522cd2cacb2f0b990a12f2bfe2 not found: ID does not exist" Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.825409 4972 scope.go:117] "RemoveContainer" containerID="c769dd18f10a20f53e31878690b3ab34f13b6ff6c1b5cef5523d056c6803d8e1" Nov 21 11:24:41 crc kubenswrapper[4972]: E1121 11:24:41.825885 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c769dd18f10a20f53e31878690b3ab34f13b6ff6c1b5cef5523d056c6803d8e1\": container with ID starting with c769dd18f10a20f53e31878690b3ab34f13b6ff6c1b5cef5523d056c6803d8e1 not found: ID does not exist" containerID="c769dd18f10a20f53e31878690b3ab34f13b6ff6c1b5cef5523d056c6803d8e1" Nov 21 11:24:41 crc kubenswrapper[4972]: I1121 11:24:41.825974 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c769dd18f10a20f53e31878690b3ab34f13b6ff6c1b5cef5523d056c6803d8e1"} err="failed to get container status \"c769dd18f10a20f53e31878690b3ab34f13b6ff6c1b5cef5523d056c6803d8e1\": rpc error: code = NotFound desc = could not find container \"c769dd18f10a20f53e31878690b3ab34f13b6ff6c1b5cef5523d056c6803d8e1\": container with ID starting with c769dd18f10a20f53e31878690b3ab34f13b6ff6c1b5cef5523d056c6803d8e1 not found: ID does not exist" Nov 21 11:24:50 crc kubenswrapper[4972]: I1121 11:24:50.869925 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x"] Nov 21 11:24:50 crc kubenswrapper[4972]: E1121 
11:24:50.871025 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="482ac932-3c0a-43e7-8878-9989d75c9e29" containerName="horizon" Nov 21 11:24:50 crc kubenswrapper[4972]: I1121 11:24:50.871043 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="482ac932-3c0a-43e7-8878-9989d75c9e29" containerName="horizon" Nov 21 11:24:50 crc kubenswrapper[4972]: E1121 11:24:50.871074 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="482ac932-3c0a-43e7-8878-9989d75c9e29" containerName="horizon-log" Nov 21 11:24:50 crc kubenswrapper[4972]: I1121 11:24:50.871084 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="482ac932-3c0a-43e7-8878-9989d75c9e29" containerName="horizon-log" Nov 21 11:24:50 crc kubenswrapper[4972]: I1121 11:24:50.871352 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="482ac932-3c0a-43e7-8878-9989d75c9e29" containerName="horizon-log" Nov 21 11:24:50 crc kubenswrapper[4972]: I1121 11:24:50.871377 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="482ac932-3c0a-43e7-8878-9989d75c9e29" containerName="horizon" Nov 21 11:24:50 crc kubenswrapper[4972]: I1121 11:24:50.873213 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x" Nov 21 11:24:50 crc kubenswrapper[4972]: I1121 11:24:50.877436 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 21 11:24:50 crc kubenswrapper[4972]: I1121 11:24:50.891731 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x"] Nov 21 11:24:50 crc kubenswrapper[4972]: I1121 11:24:50.978511 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00d4fc91-afa5-4d86-ab17-1f1d77fba16a-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x\" (UID: \"00d4fc91-afa5-4d86-ab17-1f1d77fba16a\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x" Nov 21 11:24:50 crc kubenswrapper[4972]: I1121 11:24:50.978592 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjvr5\" (UniqueName: \"kubernetes.io/projected/00d4fc91-afa5-4d86-ab17-1f1d77fba16a-kube-api-access-cjvr5\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x\" (UID: \"00d4fc91-afa5-4d86-ab17-1f1d77fba16a\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x" Nov 21 11:24:50 crc kubenswrapper[4972]: I1121 11:24:50.978697 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00d4fc91-afa5-4d86-ab17-1f1d77fba16a-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x\" (UID: \"00d4fc91-afa5-4d86-ab17-1f1d77fba16a\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x" Nov 21 11:24:51 crc kubenswrapper[4972]: I1121 11:24:51.081424 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00d4fc91-afa5-4d86-ab17-1f1d77fba16a-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x\" (UID: \"00d4fc91-afa5-4d86-ab17-1f1d77fba16a\") " 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x" Nov 21 11:24:51 crc kubenswrapper[4972]: I1121 11:24:51.081544 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00d4fc91-afa5-4d86-ab17-1f1d77fba16a-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x\" (UID: \"00d4fc91-afa5-4d86-ab17-1f1d77fba16a\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x" Nov 21 11:24:51 crc kubenswrapper[4972]: I1121 11:24:51.081581 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjvr5\" (UniqueName: \"kubernetes.io/projected/00d4fc91-afa5-4d86-ab17-1f1d77fba16a-kube-api-access-cjvr5\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x\" (UID: \"00d4fc91-afa5-4d86-ab17-1f1d77fba16a\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x" Nov 21 11:24:51 crc kubenswrapper[4972]: I1121 11:24:51.082311 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00d4fc91-afa5-4d86-ab17-1f1d77fba16a-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x\" (UID: \"00d4fc91-afa5-4d86-ab17-1f1d77fba16a\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x" Nov 21 11:24:51 crc kubenswrapper[4972]: I1121 11:24:51.082519 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00d4fc91-afa5-4d86-ab17-1f1d77fba16a-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x\" (UID: \"00d4fc91-afa5-4d86-ab17-1f1d77fba16a\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x" Nov 21 11:24:51 crc kubenswrapper[4972]: I1121 11:24:51.108617 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjvr5\" (UniqueName: \"kubernetes.io/projected/00d4fc91-afa5-4d86-ab17-1f1d77fba16a-kube-api-access-cjvr5\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x\" (UID: \"00d4fc91-afa5-4d86-ab17-1f1d77fba16a\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x" Nov 21 11:24:51 crc kubenswrapper[4972]: I1121 11:24:51.197008 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x" Nov 21 11:24:51 crc kubenswrapper[4972]: W1121 11:24:51.702294 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00d4fc91_afa5_4d86_ab17_1f1d77fba16a.slice/crio-a2dc8a8319b27d648da850221381e999862ddaa610288984df0f3ee1902f8f79 WatchSource:0}: Error finding container a2dc8a8319b27d648da850221381e999862ddaa610288984df0f3ee1902f8f79: Status 404 returned error can't find the container with id a2dc8a8319b27d648da850221381e999862ddaa610288984df0f3ee1902f8f79 Nov 21 11:24:51 crc kubenswrapper[4972]: I1121 11:24:51.703763 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x"] Nov 21 11:24:51 crc kubenswrapper[4972]: I1121 11:24:51.776661 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x" event={"ID":"00d4fc91-afa5-4d86-ab17-1f1d77fba16a","Type":"ContainerStarted","Data":"a2dc8a8319b27d648da850221381e999862ddaa610288984df0f3ee1902f8f79"} Nov 21 11:24:52 crc kubenswrapper[4972]: I1121 11:24:52.087823 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c7de-account-create-rx642"] Nov 21 11:24:52 crc kubenswrapper[4972]: I1121 11:24:52.105578 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-5dgr8"] Nov 21 11:24:52 crc kubenswrapper[4972]: I1121 11:24:52.115631 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c7de-account-create-rx642"] Nov 21 11:24:52 crc kubenswrapper[4972]: I1121 11:24:52.124136 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-5dgr8"] Nov 21 11:24:52 crc kubenswrapper[4972]: I1121 11:24:52.775407 4972 generic.go:334] "Generic (PLEG): container finished" podID="00d4fc91-afa5-4d86-ab17-1f1d77fba16a" containerID="d3c43228fcf81888b5dad6f494da03db0c183c395f019d0f8f01129ca938f43a" exitCode=0 Nov 21 11:24:52 crc kubenswrapper[4972]: I1121 11:24:52.775468 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x" event={"ID":"00d4fc91-afa5-4d86-ab17-1f1d77fba16a","Type":"ContainerDied","Data":"d3c43228fcf81888b5dad6f494da03db0c183c395f019d0f8f01129ca938f43a"} Nov 21 11:24:53 crc kubenswrapper[4972]: I1121 11:24:53.780431 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ab013d1-8865-43f7-a76b-0b6b5383aa23" path="/var/lib/kubelet/pods/6ab013d1-8865-43f7-a76b-0b6b5383aa23/volumes" Nov 21 11:24:53 crc kubenswrapper[4972]: I1121 11:24:53.782348 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f4a9500-7a33-48a5-949a-3bc12cf0ed61" path="/var/lib/kubelet/pods/8f4a9500-7a33-48a5-949a-3bc12cf0ed61/volumes" Nov 21 11:24:54 crc kubenswrapper[4972]: I1121 11:24:54.799388 4972 generic.go:334] "Generic (PLEG): container finished" podID="00d4fc91-afa5-4d86-ab17-1f1d77fba16a" containerID="e8957ff6045c6dd234debdc094c53a6a7c3709fead5a6837c9c9af1e133a19a4" exitCode=0 Nov 21 11:24:54 crc kubenswrapper[4972]: I1121 11:24:54.799489 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x" 
event={"ID":"00d4fc91-afa5-4d86-ab17-1f1d77fba16a","Type":"ContainerDied","Data":"e8957ff6045c6dd234debdc094c53a6a7c3709fead5a6837c9c9af1e133a19a4"} Nov 21 11:24:55 crc kubenswrapper[4972]: I1121 11:24:55.826594 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x" event={"ID":"00d4fc91-afa5-4d86-ab17-1f1d77fba16a","Type":"ContainerStarted","Data":"64504f12217ae2a78bf6164ac3b1c649aeef2d97c686c22013dbc8c940be9ede"} Nov 21 11:24:55 crc kubenswrapper[4972]: I1121 11:24:55.866317 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x" podStartSLOduration=4.430760748 podStartE2EDuration="5.866292121s" podCreationTimestamp="2025-11-21 11:24:50 +0000 UTC" firstStartedPulling="2025-11-21 11:24:52.778392961 +0000 UTC m=+6237.887535499" lastFinishedPulling="2025-11-21 11:24:54.213924364 +0000 UTC m=+6239.323066872" observedRunningTime="2025-11-21 11:24:55.859688516 +0000 UTC m=+6240.968831054" watchObservedRunningTime="2025-11-21 11:24:55.866292121 +0000 UTC m=+6240.975434619" Nov 21 11:24:56 crc kubenswrapper[4972]: I1121 11:24:56.845359 4972 generic.go:334] "Generic (PLEG): container finished" podID="00d4fc91-afa5-4d86-ab17-1f1d77fba16a" containerID="64504f12217ae2a78bf6164ac3b1c649aeef2d97c686c22013dbc8c940be9ede" exitCode=0 Nov 21 11:24:56 crc kubenswrapper[4972]: I1121 11:24:56.845486 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x" event={"ID":"00d4fc91-afa5-4d86-ab17-1f1d77fba16a","Type":"ContainerDied","Data":"64504f12217ae2a78bf6164ac3b1c649aeef2d97c686c22013dbc8c940be9ede"} Nov 21 11:24:58 crc kubenswrapper[4972]: I1121 11:24:58.695234 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x" Nov 21 11:24:58 crc kubenswrapper[4972]: I1121 11:24:58.802442 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00d4fc91-afa5-4d86-ab17-1f1d77fba16a-bundle\") pod \"00d4fc91-afa5-4d86-ab17-1f1d77fba16a\" (UID: \"00d4fc91-afa5-4d86-ab17-1f1d77fba16a\") " Nov 21 11:24:58 crc kubenswrapper[4972]: I1121 11:24:58.802787 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00d4fc91-afa5-4d86-ab17-1f1d77fba16a-util\") pod \"00d4fc91-afa5-4d86-ab17-1f1d77fba16a\" (UID: \"00d4fc91-afa5-4d86-ab17-1f1d77fba16a\") " Nov 21 11:24:58 crc kubenswrapper[4972]: I1121 11:24:58.802980 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjvr5\" (UniqueName: \"kubernetes.io/projected/00d4fc91-afa5-4d86-ab17-1f1d77fba16a-kube-api-access-cjvr5\") pod \"00d4fc91-afa5-4d86-ab17-1f1d77fba16a\" (UID: \"00d4fc91-afa5-4d86-ab17-1f1d77fba16a\") " Nov 21 11:24:58 crc kubenswrapper[4972]: I1121 11:24:58.806441 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00d4fc91-afa5-4d86-ab17-1f1d77fba16a-bundle" (OuterVolumeSpecName: "bundle") pod "00d4fc91-afa5-4d86-ab17-1f1d77fba16a" (UID: "00d4fc91-afa5-4d86-ab17-1f1d77fba16a"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:24:58 crc kubenswrapper[4972]: I1121 11:24:58.810393 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00d4fc91-afa5-4d86-ab17-1f1d77fba16a-kube-api-access-cjvr5" (OuterVolumeSpecName: "kube-api-access-cjvr5") pod "00d4fc91-afa5-4d86-ab17-1f1d77fba16a" (UID: "00d4fc91-afa5-4d86-ab17-1f1d77fba16a"). InnerVolumeSpecName "kube-api-access-cjvr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:24:58 crc kubenswrapper[4972]: I1121 11:24:58.825167 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00d4fc91-afa5-4d86-ab17-1f1d77fba16a-util" (OuterVolumeSpecName: "util") pod "00d4fc91-afa5-4d86-ab17-1f1d77fba16a" (UID: "00d4fc91-afa5-4d86-ab17-1f1d77fba16a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:24:58 crc kubenswrapper[4972]: I1121 11:24:58.875639 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x" event={"ID":"00d4fc91-afa5-4d86-ab17-1f1d77fba16a","Type":"ContainerDied","Data":"a2dc8a8319b27d648da850221381e999862ddaa610288984df0f3ee1902f8f79"} Nov 21 11:24:58 crc kubenswrapper[4972]: I1121 11:24:58.875679 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2dc8a8319b27d648da850221381e999862ddaa610288984df0f3ee1902f8f79" Nov 21 11:24:58 crc kubenswrapper[4972]: I1121 11:24:58.875740 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x" Nov 21 11:24:58 crc kubenswrapper[4972]: I1121 11:24:58.907753 4972 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00d4fc91-afa5-4d86-ab17-1f1d77fba16a-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:24:58 crc kubenswrapper[4972]: I1121 11:24:58.907808 4972 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00d4fc91-afa5-4d86-ab17-1f1d77fba16a-util\") on node \"crc\" DevicePath \"\"" Nov 21 11:24:58 crc kubenswrapper[4972]: I1121 11:24:58.907857 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjvr5\" (UniqueName: \"kubernetes.io/projected/00d4fc91-afa5-4d86-ab17-1f1d77fba16a-kube-api-access-cjvr5\") on node \"crc\" DevicePath \"\"" Nov 21 11:24:59 crc kubenswrapper[4972]: I1121 11:24:59.045582 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-bjtgf"] Nov 21 11:24:59 crc kubenswrapper[4972]: I1121 11:24:59.063067 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-bjtgf"] Nov 21 11:24:59 crc kubenswrapper[4972]: I1121 11:24:59.779695 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26e55123-6d03-4c7c-aa57-40d72627784e" path="/var/lib/kubelet/pods/26e55123-6d03-4c7c-aa57-40d72627784e/volumes" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.375497 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-s2rhw"] Nov 21 11:25:09 crc kubenswrapper[4972]: E1121 11:25:09.376419 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d4fc91-afa5-4d86-ab17-1f1d77fba16a" containerName="pull" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.376433 4972 
state_mem.go:107] "Deleted CPUSet assignment" podUID="00d4fc91-afa5-4d86-ab17-1f1d77fba16a" containerName="pull" Nov 21 11:25:09 crc kubenswrapper[4972]: E1121 11:25:09.376451 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d4fc91-afa5-4d86-ab17-1f1d77fba16a" containerName="extract" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.376458 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d4fc91-afa5-4d86-ab17-1f1d77fba16a" containerName="extract" Nov 21 11:25:09 crc kubenswrapper[4972]: E1121 11:25:09.376487 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d4fc91-afa5-4d86-ab17-1f1d77fba16a" containerName="util" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.376494 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d4fc91-afa5-4d86-ab17-1f1d77fba16a" containerName="util" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.376697 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="00d4fc91-afa5-4d86-ab17-1f1d77fba16a" containerName="extract" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.377389 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-s2rhw" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.380847 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.381004 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.381572 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-8h4r5" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.387349 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-s2rhw"] Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.446808 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncgd7\" (UniqueName: \"kubernetes.io/projected/c0081501-c78f-4c85-9d02-643b0f84c963-kube-api-access-ncgd7\") pod \"obo-prometheus-operator-668cf9dfbb-s2rhw\" (UID: \"c0081501-c78f-4c85-9d02-643b0f84c963\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-s2rhw" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.546812 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-cqfwh"] Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.549375 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncgd7\" (UniqueName: \"kubernetes.io/projected/c0081501-c78f-4c85-9d02-643b0f84c963-kube-api-access-ncgd7\") pod \"obo-prometheus-operator-668cf9dfbb-s2rhw\" (UID: \"c0081501-c78f-4c85-9d02-643b0f84c963\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-s2rhw" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.555668 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-cqfwh" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.558917 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-llbwt" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.559090 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.582972 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-pw8l8"] Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.584742 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-pw8l8" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.594100 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncgd7\" (UniqueName: \"kubernetes.io/projected/c0081501-c78f-4c85-9d02-643b0f84c963-kube-api-access-ncgd7\") pod \"obo-prometheus-operator-668cf9dfbb-s2rhw\" (UID: \"c0081501-c78f-4c85-9d02-643b0f84c963\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-s2rhw" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.604867 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-cqfwh"] Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.634969 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-pw8l8"] Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.651463 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2262607f-d34e-4a47-877a-d04cbf0e72c6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57589db548-cqfwh\" (UID: \"2262607f-d34e-4a47-877a-d04cbf0e72c6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-cqfwh" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.651548 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/093e7801-c881-4ace-9120-e0495a2b1dc8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57589db548-pw8l8\" (UID: \"093e7801-c881-4ace-9120-e0495a2b1dc8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-pw8l8" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.651621 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2262607f-d34e-4a47-877a-d04cbf0e72c6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57589db548-cqfwh\" (UID: \"2262607f-d34e-4a47-877a-d04cbf0e72c6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-cqfwh" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.651646 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/093e7801-c881-4ace-9120-e0495a2b1dc8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57589db548-pw8l8\" 
(UID: \"093e7801-c881-4ace-9120-e0495a2b1dc8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-pw8l8" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.695342 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-xlg99"] Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.697601 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-xlg99" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.702529 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-xlg99"] Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.703167 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-s2rhw" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.703703 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.703995 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-mpnnx" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.753571 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2262607f-d34e-4a47-877a-d04cbf0e72c6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57589db548-cqfwh\" (UID: \"2262607f-d34e-4a47-877a-d04cbf0e72c6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-cqfwh" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.753646 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/093e7801-c881-4ace-9120-e0495a2b1dc8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57589db548-pw8l8\" (UID: \"093e7801-c881-4ace-9120-e0495a2b1dc8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-pw8l8" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.753677 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj5zx\" (UniqueName: \"kubernetes.io/projected/f313fdb1-cc0d-4907-baee-4e61a4b7e209-kube-api-access-vj5zx\") pod \"observability-operator-d8bb48f5d-xlg99\" (UID: \"f313fdb1-cc0d-4907-baee-4e61a4b7e209\") " pod="openshift-operators/observability-operator-d8bb48f5d-xlg99" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.753801 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2262607f-d34e-4a47-877a-d04cbf0e72c6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57589db548-cqfwh\" (UID: \"2262607f-d34e-4a47-877a-d04cbf0e72c6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-cqfwh" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.753886 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f313fdb1-cc0d-4907-baee-4e61a4b7e209-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-xlg99\" (UID: \"f313fdb1-cc0d-4907-baee-4e61a4b7e209\") " 
pod="openshift-operators/observability-operator-d8bb48f5d-xlg99" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.753942 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/093e7801-c881-4ace-9120-e0495a2b1dc8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57589db548-pw8l8\" (UID: \"093e7801-c881-4ace-9120-e0495a2b1dc8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-pw8l8" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.758177 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2262607f-d34e-4a47-877a-d04cbf0e72c6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57589db548-cqfwh\" (UID: \"2262607f-d34e-4a47-877a-d04cbf0e72c6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-cqfwh" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.760943 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2262607f-d34e-4a47-877a-d04cbf0e72c6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57589db548-cqfwh\" (UID: \"2262607f-d34e-4a47-877a-d04cbf0e72c6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-cqfwh" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.770447 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/093e7801-c881-4ace-9120-e0495a2b1dc8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57589db548-pw8l8\" (UID: \"093e7801-c881-4ace-9120-e0495a2b1dc8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-pw8l8" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.774889 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/093e7801-c881-4ace-9120-e0495a2b1dc8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57589db548-pw8l8\" (UID: \"093e7801-c881-4ace-9120-e0495a2b1dc8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-pw8l8" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.810596 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-v6fh4"] Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.812278 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-v6fh4" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.817678 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-xsqsm" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.826082 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-v6fh4"] Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.855251 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f313fdb1-cc0d-4907-baee-4e61a4b7e209-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-xlg99\" (UID: \"f313fdb1-cc0d-4907-baee-4e61a4b7e209\") " pod="openshift-operators/observability-operator-d8bb48f5d-xlg99" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.855369 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f4486942-dcd4-4a64-8490-190bb54e8fd9-openshift-service-ca\") pod \"perses-operator-5446b9c989-v6fh4\" (UID: \"f4486942-dcd4-4a64-8490-190bb54e8fd9\") " pod="openshift-operators/perses-operator-5446b9c989-v6fh4" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.855405 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj5zx\" (UniqueName: \"kubernetes.io/projected/f313fdb1-cc0d-4907-baee-4e61a4b7e209-kube-api-access-vj5zx\") pod \"observability-operator-d8bb48f5d-xlg99\" (UID: \"f313fdb1-cc0d-4907-baee-4e61a4b7e209\") " pod="openshift-operators/observability-operator-d8bb48f5d-xlg99" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.855423 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nvb5\" (UniqueName: \"kubernetes.io/projected/f4486942-dcd4-4a64-8490-190bb54e8fd9-kube-api-access-4nvb5\") pod \"perses-operator-5446b9c989-v6fh4\" (UID: \"f4486942-dcd4-4a64-8490-190bb54e8fd9\") " pod="openshift-operators/perses-operator-5446b9c989-v6fh4" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.862675 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f313fdb1-cc0d-4907-baee-4e61a4b7e209-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-xlg99\" (UID: \"f313fdb1-cc0d-4907-baee-4e61a4b7e209\") " pod="openshift-operators/observability-operator-d8bb48f5d-xlg99" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.874422 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj5zx\" (UniqueName: \"kubernetes.io/projected/f313fdb1-cc0d-4907-baee-4e61a4b7e209-kube-api-access-vj5zx\") pod \"observability-operator-d8bb48f5d-xlg99\" (UID: \"f313fdb1-cc0d-4907-baee-4e61a4b7e209\") " pod="openshift-operators/observability-operator-d8bb48f5d-xlg99" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.958672 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-cqfwh" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.959783 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f4486942-dcd4-4a64-8490-190bb54e8fd9-openshift-service-ca\") pod \"perses-operator-5446b9c989-v6fh4\" (UID: \"f4486942-dcd4-4a64-8490-190bb54e8fd9\") " pod="openshift-operators/perses-operator-5446b9c989-v6fh4" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.959858 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nvb5\" (UniqueName: \"kubernetes.io/projected/f4486942-dcd4-4a64-8490-190bb54e8fd9-kube-api-access-4nvb5\") pod \"perses-operator-5446b9c989-v6fh4\" (UID: \"f4486942-dcd4-4a64-8490-190bb54e8fd9\") " pod="openshift-operators/perses-operator-5446b9c989-v6fh4" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.960854 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f4486942-dcd4-4a64-8490-190bb54e8fd9-openshift-service-ca\") pod \"perses-operator-5446b9c989-v6fh4\" (UID: \"f4486942-dcd4-4a64-8490-190bb54e8fd9\") " pod="openshift-operators/perses-operator-5446b9c989-v6fh4" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.983763 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nvb5\" (UniqueName: \"kubernetes.io/projected/f4486942-dcd4-4a64-8490-190bb54e8fd9-kube-api-access-4nvb5\") pod \"perses-operator-5446b9c989-v6fh4\" (UID: \"f4486942-dcd4-4a64-8490-190bb54e8fd9\") " pod="openshift-operators/perses-operator-5446b9c989-v6fh4" Nov 21 11:25:09 crc kubenswrapper[4972]: I1121 11:25:09.995204 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-pw8l8" Nov 21 11:25:10 crc kubenswrapper[4972]: I1121 11:25:10.048567 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-xlg99" Nov 21 11:25:10 crc kubenswrapper[4972]: I1121 11:25:10.258379 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-v6fh4" Nov 21 11:25:10 crc kubenswrapper[4972]: I1121 11:25:10.421116 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-s2rhw"] Nov 21 11:25:10 crc kubenswrapper[4972]: I1121 11:25:10.718948 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-cqfwh"] Nov 21 11:25:10 crc kubenswrapper[4972]: W1121 11:25:10.735070 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2262607f_d34e_4a47_877a_d04cbf0e72c6.slice/crio-6f5683adad537184f1b66bdaea31c422df4f1ec374e99eeac35605af1caf2a80 WatchSource:0}: Error finding container 6f5683adad537184f1b66bdaea31c422df4f1ec374e99eeac35605af1caf2a80: Status 404 returned error can't find the container with id 6f5683adad537184f1b66bdaea31c422df4f1ec374e99eeac35605af1caf2a80 Nov 21 11:25:10 crc kubenswrapper[4972]: I1121 11:25:10.838497 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-pw8l8"] Nov 21 11:25:10 crc kubenswrapper[4972]: I1121 11:25:10.986129 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-xlg99"] Nov 21 11:25:11 crc kubenswrapper[4972]: I1121 11:25:11.027732 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-s2rhw" event={"ID":"c0081501-c78f-4c85-9d02-643b0f84c963","Type":"ContainerStarted","Data":"7ebddad5ff77fb1ee09a7c51e7f9ab3651d7dc35d483e2f8ed4cdfbb3d50986d"} Nov 21 11:25:11 crc kubenswrapper[4972]: I1121 11:25:11.033314 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-cqfwh" event={"ID":"2262607f-d34e-4a47-877a-d04cbf0e72c6","Type":"ContainerStarted","Data":"6f5683adad537184f1b66bdaea31c422df4f1ec374e99eeac35605af1caf2a80"} Nov 21 11:25:11 crc kubenswrapper[4972]: I1121 11:25:11.034476 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-xlg99" event={"ID":"f313fdb1-cc0d-4907-baee-4e61a4b7e209","Type":"ContainerStarted","Data":"a03bf48f444de392f3fc54e66a6ed46ee8f68e5e0d244ea0d7b053160a5a5e67"} Nov 21 11:25:11 crc kubenswrapper[4972]: I1121 11:25:11.035525 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-pw8l8" event={"ID":"093e7801-c881-4ace-9120-e0495a2b1dc8","Type":"ContainerStarted","Data":"3544058aee20005579699615ea62b954d7997cdaaa7901460d9a66cba7797282"} Nov 21 11:25:11 crc kubenswrapper[4972]: I1121 11:25:11.270173 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-v6fh4"] Nov 21 11:25:12 crc kubenswrapper[4972]: I1121 11:25:12.075699 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-v6fh4" event={"ID":"f4486942-dcd4-4a64-8490-190bb54e8fd9","Type":"ContainerStarted","Data":"66fabd27606de1ddd35f2a91939f4f7274f2fcad7eb6c349411a056b26ad0066"} Nov 21 11:25:20 crc kubenswrapper[4972]: I1121 11:25:20.170729 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-pw8l8" 
event={"ID":"093e7801-c881-4ace-9120-e0495a2b1dc8","Type":"ContainerStarted","Data":"a98c95c181bea406b177e512acd7f35ea3da02d4875da05801a9919c2348699c"} Nov 21 11:25:20 crc kubenswrapper[4972]: I1121 11:25:20.194976 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-s2rhw" event={"ID":"c0081501-c78f-4c85-9d02-643b0f84c963","Type":"ContainerStarted","Data":"8db1d107d9aca96df5a201eb755aa1d2a9ccac5fc24c28b3ff26abbb60f5df13"} Nov 21 11:25:20 crc kubenswrapper[4972]: I1121 11:25:20.196756 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-cqfwh" event={"ID":"2262607f-d34e-4a47-877a-d04cbf0e72c6","Type":"ContainerStarted","Data":"4b76f7c5db2138121ecb5bb0afcaa4fe7bc943ba1017359d6913fb568e7f5fb1"} Nov 21 11:25:20 crc kubenswrapper[4972]: I1121 11:25:20.200158 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-xlg99" event={"ID":"f313fdb1-cc0d-4907-baee-4e61a4b7e209","Type":"ContainerStarted","Data":"9ec4349ae3a0d1a0167d2e241aeb5963434dd691f5045effdb7e7703dfdc5adc"} Nov 21 11:25:20 crc kubenswrapper[4972]: I1121 11:25:20.201457 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-xlg99" Nov 21 11:25:20 crc kubenswrapper[4972]: I1121 11:25:20.207815 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-v6fh4" event={"ID":"f4486942-dcd4-4a64-8490-190bb54e8fd9","Type":"ContainerStarted","Data":"b792f5baa8be4d51e891db9d9077aa25b360f1cdfbaa4cfa8d751d8b8409be90"} Nov 21 11:25:20 crc kubenswrapper[4972]: I1121 11:25:20.208310 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-v6fh4" Nov 21 11:25:20 crc kubenswrapper[4972]: I1121 11:25:20.208969 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-pw8l8" podStartSLOduration=2.786648023 podStartE2EDuration="11.20895716s" podCreationTimestamp="2025-11-21 11:25:09 +0000 UTC" firstStartedPulling="2025-11-21 11:25:10.854663273 +0000 UTC m=+6255.963805771" lastFinishedPulling="2025-11-21 11:25:19.27697241 +0000 UTC m=+6264.386114908" observedRunningTime="2025-11-21 11:25:20.194449155 +0000 UTC m=+6265.303591683" watchObservedRunningTime="2025-11-21 11:25:20.20895716 +0000 UTC m=+6265.318099648" Nov 21 11:25:20 crc kubenswrapper[4972]: I1121 11:25:20.229688 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-s2rhw" podStartSLOduration=2.489280428 podStartE2EDuration="11.229669398s" podCreationTimestamp="2025-11-21 11:25:09 +0000 UTC" firstStartedPulling="2025-11-21 11:25:10.536701653 +0000 UTC m=+6255.645844151" lastFinishedPulling="2025-11-21 11:25:19.277090623 +0000 UTC m=+6264.386233121" observedRunningTime="2025-11-21 11:25:20.216059708 +0000 UTC m=+6265.325202206" watchObservedRunningTime="2025-11-21 11:25:20.229669398 +0000 UTC m=+6265.338811896" Nov 21 11:25:20 crc kubenswrapper[4972]: I1121 11:25:20.240400 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57589db548-cqfwh" podStartSLOduration=2.789343174 podStartE2EDuration="11.240381812s" podCreationTimestamp="2025-11-21 11:25:09 +0000 UTC" 
firstStartedPulling="2025-11-21 11:25:10.7397779 +0000 UTC m=+6255.848920398" lastFinishedPulling="2025-11-21 11:25:19.190816498 +0000 UTC m=+6264.299959036" observedRunningTime="2025-11-21 11:25:20.238506922 +0000 UTC m=+6265.347649430" watchObservedRunningTime="2025-11-21 11:25:20.240381812 +0000 UTC m=+6265.349524310" Nov 21 11:25:20 crc kubenswrapper[4972]: I1121 11:25:20.252017 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-xlg99" Nov 21 11:25:20 crc kubenswrapper[4972]: I1121 11:25:20.272093 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-xlg99" podStartSLOduration=2.882229773 podStartE2EDuration="11.272073731s" podCreationTimestamp="2025-11-21 11:25:09 +0000 UTC" firstStartedPulling="2025-11-21 11:25:10.989888393 +0000 UTC m=+6256.099030891" lastFinishedPulling="2025-11-21 11:25:19.379732351 +0000 UTC m=+6264.488874849" observedRunningTime="2025-11-21 11:25:20.270312954 +0000 UTC m=+6265.379455482" watchObservedRunningTime="2025-11-21 11:25:20.272073731 +0000 UTC m=+6265.381216239" Nov 21 11:25:20 crc kubenswrapper[4972]: I1121 11:25:20.308113 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-v6fh4" podStartSLOduration=3.316499214 podStartE2EDuration="11.308092585s" podCreationTimestamp="2025-11-21 11:25:09 +0000 UTC" firstStartedPulling="2025-11-21 11:25:11.285621475 +0000 UTC m=+6256.394763973" lastFinishedPulling="2025-11-21 11:25:19.277214806 +0000 UTC m=+6264.386357344" observedRunningTime="2025-11-21 11:25:20.293493908 +0000 UTC m=+6265.402636406" watchObservedRunningTime="2025-11-21 11:25:20.308092585 +0000 UTC m=+6265.417235083" Nov 21 11:25:30 crc kubenswrapper[4972]: I1121 11:25:30.261895 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-v6fh4" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.033886 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.034569 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="b41adfd0-57cb-4109-bcff-6594da13bf09" containerName="openstackclient" containerID="cri-o://57e6e52ceef13dc38b963dce1bf562cded2c4580091c1508938b28a1f91e375b" gracePeriod=2 Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.043313 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.194925 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 21 11:25:33 crc kubenswrapper[4972]: E1121 11:25:33.195796 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41adfd0-57cb-4109-bcff-6594da13bf09" containerName="openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.195809 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41adfd0-57cb-4109-bcff-6594da13bf09" containerName="openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.196938 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="b41adfd0-57cb-4109-bcff-6594da13bf09" containerName="openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.198600 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.219235 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.314919 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 21 11:25:33 crc kubenswrapper[4972]: E1121 11:25:33.316521 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-ht2hx openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="b6cc43d5-919e-44ba-a245-a0d7e70fb124" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.360010 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.385352 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b6cc43d5-919e-44ba-a245-a0d7e70fb124-openstack-config-secret\") pod \"openstackclient\" (UID: \"b6cc43d5-919e-44ba-a245-a0d7e70fb124\") " pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.385519 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht2hx\" (UniqueName: \"kubernetes.io/projected/b6cc43d5-919e-44ba-a245-a0d7e70fb124-kube-api-access-ht2hx\") pod \"openstackclient\" (UID: \"b6cc43d5-919e-44ba-a245-a0d7e70fb124\") " pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.385542 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b6cc43d5-919e-44ba-a245-a0d7e70fb124-openstack-config\") pod \"openstackclient\" (UID: \"b6cc43d5-919e-44ba-a245-a0d7e70fb124\") " pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.393045 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.394928 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.396333 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.400867 4972 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b41adfd0-57cb-4109-bcff-6594da13bf09" podUID="874ea433-29fb-461e-825b-deba8bce7e6d" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.409744 4972 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b6cc43d5-919e-44ba-a245-a0d7e70fb124" podUID="874ea433-29fb-461e-825b-deba8bce7e6d" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.420634 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.437593 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.448982 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.450371 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.457114 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-8gxmp" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.472093 4972 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b6cc43d5-919e-44ba-a245-a0d7e70fb124" podUID="874ea433-29fb-461e-825b-deba8bce7e6d" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.483887 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.488023 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b6cc43d5-919e-44ba-a245-a0d7e70fb124-openstack-config-secret\") pod \"openstackclient\" (UID: \"b6cc43d5-919e-44ba-a245-a0d7e70fb124\") " pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.488136 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfklx\" (UniqueName: \"kubernetes.io/projected/874ea433-29fb-461e-825b-deba8bce7e6d-kube-api-access-qfklx\") pod \"openstackclient\" (UID: \"874ea433-29fb-461e-825b-deba8bce7e6d\") " pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.488189 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/874ea433-29fb-461e-825b-deba8bce7e6d-openstack-config\") pod \"openstackclient\" (UID: \"874ea433-29fb-461e-825b-deba8bce7e6d\") " pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.488237 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht2hx\" (UniqueName: \"kubernetes.io/projected/b6cc43d5-919e-44ba-a245-a0d7e70fb124-kube-api-access-ht2hx\") pod \"openstackclient\" (UID: \"b6cc43d5-919e-44ba-a245-a0d7e70fb124\") " pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.488260 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b6cc43d5-919e-44ba-a245-a0d7e70fb124-openstack-config\") pod \"openstackclient\" (UID: \"b6cc43d5-919e-44ba-a245-a0d7e70fb124\") " pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.488283 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/874ea433-29fb-461e-825b-deba8bce7e6d-openstack-config-secret\") pod \"openstackclient\" (UID: \"874ea433-29fb-461e-825b-deba8bce7e6d\") " pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.500099 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/b6cc43d5-919e-44ba-a245-a0d7e70fb124-openstack-config\") pod \"openstackclient\" (UID: \"b6cc43d5-919e-44ba-a245-a0d7e70fb124\") " pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: E1121 11:25:33.509644 4972 projected.go:194] Error preparing data for projected volume kube-api-access-ht2hx for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (b6cc43d5-919e-44ba-a245-a0d7e70fb124) does not match the UID in record. The object might have been deleted and then recreated Nov 21 11:25:33 crc kubenswrapper[4972]: E1121 11:25:33.509721 4972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6cc43d5-919e-44ba-a245-a0d7e70fb124-kube-api-access-ht2hx podName:b6cc43d5-919e-44ba-a245-a0d7e70fb124 nodeName:}" failed. No retries permitted until 2025-11-21 11:25:34.009703291 +0000 UTC m=+6279.118845789 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ht2hx" (UniqueName: "kubernetes.io/projected/b6cc43d5-919e-44ba-a245-a0d7e70fb124-kube-api-access-ht2hx") pod "openstackclient" (UID: "b6cc43d5-919e-44ba-a245-a0d7e70fb124") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (b6cc43d5-919e-44ba-a245-a0d7e70fb124) does not match the UID in record. The object might have been deleted and then recreated Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.524506 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b6cc43d5-919e-44ba-a245-a0d7e70fb124-openstack-config-secret\") pod \"openstackclient\" (UID: \"b6cc43d5-919e-44ba-a245-a0d7e70fb124\") " pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.590685 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b6cc43d5-919e-44ba-a245-a0d7e70fb124-openstack-config-secret\") pod \"b6cc43d5-919e-44ba-a245-a0d7e70fb124\" (UID: \"b6cc43d5-919e-44ba-a245-a0d7e70fb124\") " Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.590810 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b6cc43d5-919e-44ba-a245-a0d7e70fb124-openstack-config\") pod \"b6cc43d5-919e-44ba-a245-a0d7e70fb124\" (UID: \"b6cc43d5-919e-44ba-a245-a0d7e70fb124\") " Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.591379 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfklx\" (UniqueName: \"kubernetes.io/projected/874ea433-29fb-461e-825b-deba8bce7e6d-kube-api-access-qfklx\") pod \"openstackclient\" (UID: \"874ea433-29fb-461e-825b-deba8bce7e6d\") " pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.591450 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/874ea433-29fb-461e-825b-deba8bce7e6d-openstack-config\") pod \"openstackclient\" (UID: \"874ea433-29fb-461e-825b-deba8bce7e6d\") " pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.591520 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/874ea433-29fb-461e-825b-deba8bce7e6d-openstack-config-secret\") pod \"openstackclient\" (UID: \"874ea433-29fb-461e-825b-deba8bce7e6d\") " pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.591557 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkjk2\" (UniqueName: \"kubernetes.io/projected/2c758da9-ebfb-4259-877b-c0cdd0aadad5-kube-api-access-gkjk2\") pod \"kube-state-metrics-0\" (UID: \"2c758da9-ebfb-4259-877b-c0cdd0aadad5\") " pod="openstack/kube-state-metrics-0" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.591629 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht2hx\" (UniqueName: \"kubernetes.io/projected/b6cc43d5-919e-44ba-a245-a0d7e70fb124-kube-api-access-ht2hx\") on node \"crc\" DevicePath \"\"" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.594160 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/874ea433-29fb-461e-825b-deba8bce7e6d-openstack-config\") pod \"openstackclient\" (UID: \"874ea433-29fb-461e-825b-deba8bce7e6d\") " pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.594436 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cc43d5-919e-44ba-a245-a0d7e70fb124-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b6cc43d5-919e-44ba-a245-a0d7e70fb124" (UID: "b6cc43d5-919e-44ba-a245-a0d7e70fb124"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.620743 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cc43d5-919e-44ba-a245-a0d7e70fb124-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b6cc43d5-919e-44ba-a245-a0d7e70fb124" (UID: "b6cc43d5-919e-44ba-a245-a0d7e70fb124"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.622414 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/874ea433-29fb-461e-825b-deba8bce7e6d-openstack-config-secret\") pod \"openstackclient\" (UID: \"874ea433-29fb-461e-825b-deba8bce7e6d\") " pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.646770 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfklx\" (UniqueName: \"kubernetes.io/projected/874ea433-29fb-461e-825b-deba8bce7e6d-kube-api-access-qfklx\") pod \"openstackclient\" (UID: \"874ea433-29fb-461e-825b-deba8bce7e6d\") " pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.692980 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkjk2\" (UniqueName: \"kubernetes.io/projected/2c758da9-ebfb-4259-877b-c0cdd0aadad5-kube-api-access-gkjk2\") pod \"kube-state-metrics-0\" (UID: \"2c758da9-ebfb-4259-877b-c0cdd0aadad5\") " pod="openstack/kube-state-metrics-0" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.693108 4972 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b6cc43d5-919e-44ba-a245-a0d7e70fb124-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.693120 4972 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b6cc43d5-919e-44ba-a245-a0d7e70fb124-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.739254 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkjk2\" (UniqueName: \"kubernetes.io/projected/2c758da9-ebfb-4259-877b-c0cdd0aadad5-kube-api-access-gkjk2\") pod \"kube-state-metrics-0\" (UID: \"2c758da9-ebfb-4259-877b-c0cdd0aadad5\") " pod="openstack/kube-state-metrics-0" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.791543 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cc43d5-919e-44ba-a245-a0d7e70fb124" path="/var/lib/kubelet/pods/b6cc43d5-919e-44ba-a245-a0d7e70fb124/volumes" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.868322 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 21 11:25:33 crc kubenswrapper[4972]: I1121 11:25:33.882947 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.048891 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.051229 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.055558 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.065430 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.065657 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.065768 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-x54lf" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.065892 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.066071 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.106049 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6\") " pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.106091 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6\") " pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.106135 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6\") " pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.106158 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6\") " pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.106238 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6\") " pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.106271 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6-web-config\") pod \"alertmanager-metric-storage-0\" (UID: 
\"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6\") " pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.106296 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb6td\" (UniqueName: \"kubernetes.io/projected/7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6-kube-api-access-vb6td\") pod \"alertmanager-metric-storage-0\" (UID: \"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6\") " pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.141680 4972 scope.go:117] "RemoveContainer" containerID="43eb3b1024884ace9144303797d7dfc88b383e6bb8ee77f641ced4f2e723c67d" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.208163 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6\") " pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.208208 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6\") " pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.208238 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb6td\" (UniqueName: \"kubernetes.io/projected/7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6-kube-api-access-vb6td\") pod \"alertmanager-metric-storage-0\" (UID: \"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6\") " pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.208307 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6\") " pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.208328 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6\") " pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.208364 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6\") " pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.208381 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6\") " pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.231089 4972 scope.go:117] "RemoveContainer" 
containerID="83e6ffa81950d7d1a1705f6f604258d788d90dbd3c48b97804b4fd001a869cc4" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.231898 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6\") " pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.232267 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6\") " pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.232490 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6\") " pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.236605 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6\") " pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.237174 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6\") " pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.245711 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb6td\" (UniqueName: \"kubernetes.io/projected/7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6-kube-api-access-vb6td\") pod \"alertmanager-metric-storage-0\" (UID: \"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6\") " pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.249485 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6\") " pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.416603 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.427936 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.437200 4972 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b6cc43d5-919e-44ba-a245-a0d7e70fb124" podUID="874ea433-29fb-461e-825b-deba8bce7e6d" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.444207 4972 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b6cc43d5-919e-44ba-a245-a0d7e70fb124" podUID="874ea433-29fb-461e-825b-deba8bce7e6d" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.679363 4972 scope.go:117] "RemoveContainer" containerID="ad1e37b6435ae9a805bda8848f14287a5626de47f09d8a583a46f7d82157b38c" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.685100 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.687934 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.690051 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.690220 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.692675 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.692873 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.692961 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.693174 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-lhjs5" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.704595 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.738402 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3a9908d4-118e-43f8-8042-354780fe4db1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.738464 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3a9908d4-118e-43f8-8042-354780fe4db1-config\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.738506 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/3a9908d4-118e-43f8-8042-354780fe4db1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.738577 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bd42ab70-264e-4221-a222-8ee79e10d64f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bd42ab70-264e-4221-a222-8ee79e10d64f\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.738637 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3a9908d4-118e-43f8-8042-354780fe4db1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.738709 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-246n2\" (UniqueName: \"kubernetes.io/projected/3a9908d4-118e-43f8-8042-354780fe4db1-kube-api-access-246n2\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.738754 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3a9908d4-118e-43f8-8042-354780fe4db1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.738777 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3a9908d4-118e-43f8-8042-354780fe4db1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.841122 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3a9908d4-118e-43f8-8042-354780fe4db1-config\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.841178 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3a9908d4-118e-43f8-8042-354780fe4db1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.841233 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bd42ab70-264e-4221-a222-8ee79e10d64f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bd42ab70-264e-4221-a222-8ee79e10d64f\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc 
kubenswrapper[4972]: I1121 11:25:34.842025 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3a9908d4-118e-43f8-8042-354780fe4db1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.842201 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3a9908d4-118e-43f8-8042-354780fe4db1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.842329 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-246n2\" (UniqueName: \"kubernetes.io/projected/3a9908d4-118e-43f8-8042-354780fe4db1-kube-api-access-246n2\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.842393 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3a9908d4-118e-43f8-8042-354780fe4db1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.842418 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3a9908d4-118e-43f8-8042-354780fe4db1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.842521 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3a9908d4-118e-43f8-8042-354780fe4db1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.849349 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3a9908d4-118e-43f8-8042-354780fe4db1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.875045 4972 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.875093 4972 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bd42ab70-264e-4221-a222-8ee79e10d64f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bd42ab70-264e-4221-a222-8ee79e10d64f\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d3ba2914bdebed47a306278c4da6e5592d1de0c758dc0b0d082e3d96120135e0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.877473 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3a9908d4-118e-43f8-8042-354780fe4db1-config\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.879564 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3a9908d4-118e-43f8-8042-354780fe4db1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.880556 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3a9908d4-118e-43f8-8042-354780fe4db1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.881238 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-246n2\" (UniqueName: \"kubernetes.io/projected/3a9908d4-118e-43f8-8042-354780fe4db1-kube-api-access-246n2\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.884370 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3a9908d4-118e-43f8-8042-354780fe4db1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.908376 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.918119 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 21 11:25:34 crc kubenswrapper[4972]: I1121 11:25:34.960072 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bd42ab70-264e-4221-a222-8ee79e10d64f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bd42ab70-264e-4221-a222-8ee79e10d64f\") pod \"prometheus-metric-storage-0\" (UID: \"3a9908d4-118e-43f8-8042-354780fe4db1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:35 crc kubenswrapper[4972]: I1121 11:25:35.056081 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 21 11:25:35 crc kubenswrapper[4972]: I1121 11:25:35.437860 4972 generic.go:334] "Generic (PLEG): container finished" podID="b41adfd0-57cb-4109-bcff-6594da13bf09" containerID="57e6e52ceef13dc38b963dce1bf562cded2c4580091c1508938b28a1f91e375b" exitCode=137 Nov 21 11:25:35 crc kubenswrapper[4972]: I1121 11:25:35.438126 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4528bc355ec9cde887749c7c818bc8db509e048219864c2c92e4d744b7a9854e" Nov 21 11:25:35 crc kubenswrapper[4972]: I1121 11:25:35.438205 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 21 11:25:35 crc kubenswrapper[4972]: I1121 11:25:35.439252 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2c758da9-ebfb-4259-877b-c0cdd0aadad5","Type":"ContainerStarted","Data":"04572a4cec123b7b18db778258a06d555340f4d66f4b2c338c9683df6f757e14"} Nov 21 11:25:35 crc kubenswrapper[4972]: I1121 11:25:35.442377 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"874ea433-29fb-461e-825b-deba8bce7e6d","Type":"ContainerStarted","Data":"7b586737977fb531a5ce1fbe7eb2f249a84ccfb60b2cd1bc5bc42f9d479ecd27"} Nov 21 11:25:35 crc kubenswrapper[4972]: I1121 11:25:35.556523 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b41adfd0-57cb-4109-bcff-6594da13bf09-openstack-config-secret\") pod \"b41adfd0-57cb-4109-bcff-6594da13bf09\" (UID: \"b41adfd0-57cb-4109-bcff-6594da13bf09\") " Nov 21 11:25:35 crc kubenswrapper[4972]: I1121 11:25:35.556621 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b41adfd0-57cb-4109-bcff-6594da13bf09-openstack-config\") pod \"b41adfd0-57cb-4109-bcff-6594da13bf09\" (UID: \"b41adfd0-57cb-4109-bcff-6594da13bf09\") " Nov 21 11:25:35 crc kubenswrapper[4972]: I1121 11:25:35.556677 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frrjm\" (UniqueName: \"kubernetes.io/projected/b41adfd0-57cb-4109-bcff-6594da13bf09-kube-api-access-frrjm\") pod \"b41adfd0-57cb-4109-bcff-6594da13bf09\" (UID: \"b41adfd0-57cb-4109-bcff-6594da13bf09\") " Nov 21 11:25:35 crc kubenswrapper[4972]: I1121 11:25:35.578127 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b41adfd0-57cb-4109-bcff-6594da13bf09-kube-api-access-frrjm" (OuterVolumeSpecName: "kube-api-access-frrjm") pod "b41adfd0-57cb-4109-bcff-6594da13bf09" (UID: "b41adfd0-57cb-4109-bcff-6594da13bf09"). InnerVolumeSpecName "kube-api-access-frrjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:25:35 crc kubenswrapper[4972]: I1121 11:25:35.602449 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b41adfd0-57cb-4109-bcff-6594da13bf09-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b41adfd0-57cb-4109-bcff-6594da13bf09" (UID: "b41adfd0-57cb-4109-bcff-6594da13bf09"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:25:35 crc kubenswrapper[4972]: I1121 11:25:35.719280 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frrjm\" (UniqueName: \"kubernetes.io/projected/b41adfd0-57cb-4109-bcff-6594da13bf09-kube-api-access-frrjm\") on node \"crc\" DevicePath \"\"" Nov 21 11:25:35 crc kubenswrapper[4972]: I1121 11:25:35.719319 4972 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b41adfd0-57cb-4109-bcff-6594da13bf09-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 21 11:25:35 crc kubenswrapper[4972]: I1121 11:25:35.746121 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 21 11:25:35 crc kubenswrapper[4972]: I1121 11:25:35.837576 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41adfd0-57cb-4109-bcff-6594da13bf09-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b41adfd0-57cb-4109-bcff-6594da13bf09" (UID: "b41adfd0-57cb-4109-bcff-6594da13bf09"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:25:35 crc kubenswrapper[4972]: I1121 11:25:35.881239 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 11:25:35 crc kubenswrapper[4972]: I1121 11:25:35.930388 4972 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b41adfd0-57cb-4109-bcff-6594da13bf09-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 21 11:25:36 crc kubenswrapper[4972]: I1121 11:25:36.454287 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2c758da9-ebfb-4259-877b-c0cdd0aadad5","Type":"ContainerStarted","Data":"44e5ea158d67e7544b524379250f6ad0fb2391519addfe9ca621786ad316a6cd"} Nov 21 11:25:36 crc kubenswrapper[4972]: I1121 11:25:36.455788 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3a9908d4-118e-43f8-8042-354780fe4db1","Type":"ContainerStarted","Data":"f7c3c174bf7b276b7c70ceede6f8c5e6daedd3f811fde6f2ca5bc9c8e7d0eb4a"} Nov 21 11:25:36 crc kubenswrapper[4972]: I1121 11:25:36.457217 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"874ea433-29fb-461e-825b-deba8bce7e6d","Type":"ContainerStarted","Data":"ecbf8713f056766fd9ee4daf1e6e5014cb8158b63f8cbbd1cfac121fcd325acc"} Nov 21 11:25:36 crc kubenswrapper[4972]: I1121 11:25:36.458340 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6","Type":"ContainerStarted","Data":"39ab0a2f14c0d112c2b824f00a252e69c532bb8f2851181553ec0c1bc8e4e6ca"} Nov 21 11:25:36 crc kubenswrapper[4972]: I1121 11:25:36.458378 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 21 11:25:36 crc kubenswrapper[4972]: I1121 11:25:36.483441 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.081542046 podStartE2EDuration="3.483412768s" podCreationTimestamp="2025-11-21 11:25:33 +0000 UTC" firstStartedPulling="2025-11-21 11:25:35.018055284 +0000 UTC m=+6280.127197782" lastFinishedPulling="2025-11-21 11:25:35.419926006 +0000 UTC m=+6280.529068504" observedRunningTime="2025-11-21 11:25:36.467200328 +0000 UTC m=+6281.576342896" watchObservedRunningTime="2025-11-21 11:25:36.483412768 +0000 UTC m=+6281.592555276" Nov 21 11:25:37 crc kubenswrapper[4972]: I1121 11:25:37.465667 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 21 11:25:37 crc kubenswrapper[4972]: I1121 11:25:37.775263 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b41adfd0-57cb-4109-bcff-6594da13bf09" path="/var/lib/kubelet/pods/b41adfd0-57cb-4109-bcff-6594da13bf09/volumes" Nov 21 11:25:42 crc kubenswrapper[4972]: I1121 11:25:42.525407 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6","Type":"ContainerStarted","Data":"25e20433621f2cb743ea23cf4cb672d6021108d2f4c38948023e9a97995cd344"} Nov 21 11:25:42 crc kubenswrapper[4972]: I1121 11:25:42.561489 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=9.561464561 podStartE2EDuration="9.561464561s" podCreationTimestamp="2025-11-21 11:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:25:36.493219927 +0000 UTC m=+6281.602362425" watchObservedRunningTime="2025-11-21 11:25:42.561464561 +0000 UTC m=+6287.670607069" Nov 21 11:25:43 crc kubenswrapper[4972]: I1121 11:25:43.540787 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3a9908d4-118e-43f8-8042-354780fe4db1","Type":"ContainerStarted","Data":"ece8ea1f462f9d7e4f07d7d6ca9041bc52ad0cca0f309fe512fd22d9ec0d0521"} Nov 21 11:25:43 crc kubenswrapper[4972]: I1121 11:25:43.886851 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 21 11:25:51 crc kubenswrapper[4972]: I1121 11:25:51.646198 4972 generic.go:334] "Generic (PLEG): container finished" podID="7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6" containerID="25e20433621f2cb743ea23cf4cb672d6021108d2f4c38948023e9a97995cd344" exitCode=0 Nov 21 11:25:51 crc kubenswrapper[4972]: I1121 11:25:51.646323 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6","Type":"ContainerDied","Data":"25e20433621f2cb743ea23cf4cb672d6021108d2f4c38948023e9a97995cd344"} Nov 21 11:25:51 crc kubenswrapper[4972]: I1121 11:25:51.650000 4972 generic.go:334] "Generic (PLEG): container finished" podID="3a9908d4-118e-43f8-8042-354780fe4db1" containerID="ece8ea1f462f9d7e4f07d7d6ca9041bc52ad0cca0f309fe512fd22d9ec0d0521" exitCode=0 Nov 21 11:25:51 crc kubenswrapper[4972]: I1121 11:25:51.650038 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"3a9908d4-118e-43f8-8042-354780fe4db1","Type":"ContainerDied","Data":"ece8ea1f462f9d7e4f07d7d6ca9041bc52ad0cca0f309fe512fd22d9ec0d0521"} Nov 21 11:25:58 crc kubenswrapper[4972]: I1121 11:25:58.043178 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-rbsnw"] Nov 21 11:25:58 crc kubenswrapper[4972]: I1121 11:25:58.050311 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-rbsnw"] Nov 21 11:25:59 crc kubenswrapper[4972]: I1121 11:25:59.075791 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1013-account-create-7s4nl"] Nov 21 11:25:59 crc kubenswrapper[4972]: I1121 11:25:59.096795 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-tn2cn"] Nov 21 11:25:59 crc kubenswrapper[4972]: I1121 11:25:59.110128 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-k5w5d"] Nov 21 11:25:59 crc kubenswrapper[4972]: I1121 11:25:59.125592 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-40e0-account-create-bqmgl"] Nov 21 11:25:59 crc kubenswrapper[4972]: I1121 11:25:59.138352 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-tn2cn"] Nov 21 11:25:59 crc kubenswrapper[4972]: I1121 11:25:59.147106 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-40e0-account-create-bqmgl"] Nov 21 11:25:59 crc kubenswrapper[4972]: I1121 11:25:59.156133 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1013-account-create-7s4nl"] Nov 21 11:25:59 crc kubenswrapper[4972]: I1121 11:25:59.178380 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-k5w5d"] Nov 21 11:25:59 crc kubenswrapper[4972]: I1121 11:25:59.191682 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-49d4-account-create-gj62l"] Nov 21 11:25:59 crc kubenswrapper[4972]: I1121 11:25:59.202027 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-49d4-account-create-gj62l"] Nov 21 11:25:59 crc kubenswrapper[4972]: I1121 11:25:59.748346 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3a9908d4-118e-43f8-8042-354780fe4db1","Type":"ContainerStarted","Data":"699b42d92555315c494b801319e509f001c9f8be5987338aebf24924eeb4b097"} Nov 21 11:25:59 crc kubenswrapper[4972]: I1121 11:25:59.751185 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6","Type":"ContainerStarted","Data":"08c0b0a6833bfa9a6d4c5865ffb05089575b70bc205e2b9d6bd6cfa46f91842c"} Nov 21 11:25:59 crc kubenswrapper[4972]: I1121 11:25:59.775093 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a67f3f2-1df1-4150-b686-397ce5c67721" path="/var/lib/kubelet/pods/1a67f3f2-1df1-4150-b686-397ce5c67721/volumes" Nov 21 11:25:59 crc kubenswrapper[4972]: I1121 11:25:59.775701 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3202a840-5409-4ab4-8905-994551c69dd8" path="/var/lib/kubelet/pods/3202a840-5409-4ab4-8905-994551c69dd8/volumes" Nov 21 11:25:59 crc kubenswrapper[4972]: I1121 11:25:59.776240 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a0d08f4-50a4-417d-aa99-c18f80db60d8" path="/var/lib/kubelet/pods/5a0d08f4-50a4-417d-aa99-c18f80db60d8/volumes" Nov 21 11:25:59 crc 
kubenswrapper[4972]: I1121 11:25:59.776760 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1756680-029d-4523-935d-48c659172cc2" path="/var/lib/kubelet/pods/a1756680-029d-4523-935d-48c659172cc2/volumes" Nov 21 11:25:59 crc kubenswrapper[4972]: I1121 11:25:59.777851 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a32d0c1a-b8ac-4785-aea6-b9a76765b559" path="/var/lib/kubelet/pods/a32d0c1a-b8ac-4785-aea6-b9a76765b559/volumes" Nov 21 11:25:59 crc kubenswrapper[4972]: I1121 11:25:59.778363 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc141f37-940b-4971-a219-f63fc76d7489" path="/var/lib/kubelet/pods/dc141f37-940b-4971-a219-f63fc76d7489/volumes" Nov 21 11:26:04 crc kubenswrapper[4972]: I1121 11:26:04.846187 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3a9908d4-118e-43f8-8042-354780fe4db1","Type":"ContainerStarted","Data":"96c5715c3c2084ef23616f4378301521b9ed9451560a784d213003350b3341cb"} Nov 21 11:26:04 crc kubenswrapper[4972]: I1121 11:26:04.849941 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6","Type":"ContainerStarted","Data":"34bc777849224899588d941131bd6a941cba1aa0d011875651b196f0a319eef8"} Nov 21 11:26:04 crc kubenswrapper[4972]: I1121 11:26:04.850621 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Nov 21 11:26:04 crc kubenswrapper[4972]: I1121 11:26:04.854460 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Nov 21 11:26:04 crc kubenswrapper[4972]: I1121 11:26:04.890380 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=8.783294407 podStartE2EDuration="31.890356201s" podCreationTimestamp="2025-11-21 11:25:33 +0000 UTC" firstStartedPulling="2025-11-21 11:25:35.749557985 +0000 UTC m=+6280.858700483" lastFinishedPulling="2025-11-21 11:25:58.856619739 +0000 UTC m=+6303.965762277" observedRunningTime="2025-11-21 11:26:04.876814404 +0000 UTC m=+6309.985956982" watchObservedRunningTime="2025-11-21 11:26:04.890356201 +0000 UTC m=+6309.999498729" Nov 21 11:26:07 crc kubenswrapper[4972]: I1121 11:26:07.900563 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3a9908d4-118e-43f8-8042-354780fe4db1","Type":"ContainerStarted","Data":"90b447c703ff2b2c3de0b555b532c493216c730df4877a7e18e215ec94b59547"} Nov 21 11:26:07 crc kubenswrapper[4972]: I1121 11:26:07.946571 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.594670626 podStartE2EDuration="34.946549336s" podCreationTimestamp="2025-11-21 11:25:33 +0000 UTC" firstStartedPulling="2025-11-21 11:25:35.896188358 +0000 UTC m=+6281.005330856" lastFinishedPulling="2025-11-21 11:26:07.248067058 +0000 UTC m=+6312.357209566" observedRunningTime="2025-11-21 11:26:07.936477151 +0000 UTC m=+6313.045619679" watchObservedRunningTime="2025-11-21 11:26:07.946549336 +0000 UTC m=+6313.055691844" Nov 21 11:26:08 crc kubenswrapper[4972]: I1121 11:26:08.037238 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ptxgz"] Nov 21 11:26:08 crc kubenswrapper[4972]: I1121 11:26:08.045248 4972 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ptxgz"] Nov 21 11:26:09 crc kubenswrapper[4972]: I1121 11:26:09.781088 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="189cef8d-65b6-4c2d-a7db-313fa6399a08" path="/var/lib/kubelet/pods/189cef8d-65b6-4c2d-a7db-313fa6399a08/volumes" Nov 21 11:26:10 crc kubenswrapper[4972]: I1121 11:26:10.056756 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.449011 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.452865 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.462963 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.487413 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.487893 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.583098 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " pod="openstack/ceilometer-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.583151 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " pod="openstack/ceilometer-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.583385 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-config-data\") pod \"ceilometer-0\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " pod="openstack/ceilometer-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.583471 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c5326d6-e3d9-4d8d-a839-78c68fe74851-run-httpd\") pod \"ceilometer-0\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " pod="openstack/ceilometer-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.583527 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-scripts\") pod \"ceilometer-0\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " pod="openstack/ceilometer-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.583728 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsnrq\" (UniqueName: \"kubernetes.io/projected/7c5326d6-e3d9-4d8d-a839-78c68fe74851-kube-api-access-lsnrq\") pod \"ceilometer-0\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " pod="openstack/ceilometer-0" Nov 21 11:26:13 crc 
kubenswrapper[4972]: I1121 11:26:13.583768 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c5326d6-e3d9-4d8d-a839-78c68fe74851-log-httpd\") pod \"ceilometer-0\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " pod="openstack/ceilometer-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.685958 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c5326d6-e3d9-4d8d-a839-78c68fe74851-run-httpd\") pod \"ceilometer-0\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " pod="openstack/ceilometer-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.686005 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-scripts\") pod \"ceilometer-0\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " pod="openstack/ceilometer-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.686075 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsnrq\" (UniqueName: \"kubernetes.io/projected/7c5326d6-e3d9-4d8d-a839-78c68fe74851-kube-api-access-lsnrq\") pod \"ceilometer-0\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " pod="openstack/ceilometer-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.686110 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c5326d6-e3d9-4d8d-a839-78c68fe74851-log-httpd\") pod \"ceilometer-0\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " pod="openstack/ceilometer-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.686147 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " pod="openstack/ceilometer-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.686175 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " pod="openstack/ceilometer-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.686249 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-config-data\") pod \"ceilometer-0\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " pod="openstack/ceilometer-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.686659 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c5326d6-e3d9-4d8d-a839-78c68fe74851-run-httpd\") pod \"ceilometer-0\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " pod="openstack/ceilometer-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.686700 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c5326d6-e3d9-4d8d-a839-78c68fe74851-log-httpd\") pod \"ceilometer-0\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " pod="openstack/ceilometer-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 
11:26:13.702898 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " pod="openstack/ceilometer-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.703073 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " pod="openstack/ceilometer-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.703248 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-scripts\") pod \"ceilometer-0\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " pod="openstack/ceilometer-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.705520 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsnrq\" (UniqueName: \"kubernetes.io/projected/7c5326d6-e3d9-4d8d-a839-78c68fe74851-kube-api-access-lsnrq\") pod \"ceilometer-0\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " pod="openstack/ceilometer-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.705549 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-config-data\") pod \"ceilometer-0\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " pod="openstack/ceilometer-0" Nov 21 11:26:13 crc kubenswrapper[4972]: I1121 11:26:13.799120 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 11:26:14 crc kubenswrapper[4972]: I1121 11:26:14.295906 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 11:26:14 crc kubenswrapper[4972]: I1121 11:26:14.968873 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c5326d6-e3d9-4d8d-a839-78c68fe74851","Type":"ContainerStarted","Data":"b6a6e9cab04e4dc9838cc1666818fef8f9b33473c0874a75e9fe1dae76d2ca33"} Nov 21 11:26:15 crc kubenswrapper[4972]: I1121 11:26:15.984387 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c5326d6-e3d9-4d8d-a839-78c68fe74851","Type":"ContainerStarted","Data":"8f358bf3cc6bbb9d3255aee47a03042bcf9ec80bc6af8e61a2bbb25d0bb566c9"} Nov 21 11:26:17 crc kubenswrapper[4972]: I1121 11:26:17.002941 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c5326d6-e3d9-4d8d-a839-78c68fe74851","Type":"ContainerStarted","Data":"9995bad969cabaf903cce9eb5cc376c138fd16700b75dac10528a221bb7306f4"} Nov 21 11:26:18 crc kubenswrapper[4972]: I1121 11:26:18.016979 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c5326d6-e3d9-4d8d-a839-78c68fe74851","Type":"ContainerStarted","Data":"167eb734f0c9a02179a6a578e2007923bb7759eb9452bffcd36bd91761c6ba8b"} Nov 21 11:26:20 crc kubenswrapper[4972]: I1121 11:26:20.056746 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 21 11:26:20 crc kubenswrapper[4972]: I1121 11:26:20.060283 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 21 11:26:21 crc kubenswrapper[4972]: I1121 11:26:21.061654 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c5326d6-e3d9-4d8d-a839-78c68fe74851","Type":"ContainerStarted","Data":"d52604038789171debf6615150e3644f90db952dd51b96a522f3dd48a454e543"} Nov 21 11:26:21 crc kubenswrapper[4972]: I1121 11:26:21.064270 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 21 11:26:21 crc kubenswrapper[4972]: I1121 11:26:21.068864 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t8fss"] Nov 21 11:26:21 crc kubenswrapper[4972]: I1121 11:26:21.093626 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t8fss"] Nov 21 11:26:21 crc kubenswrapper[4972]: I1121 11:26:21.096971 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.44737935 podStartE2EDuration="8.096948758s" podCreationTimestamp="2025-11-21 11:26:13 +0000 UTC" firstStartedPulling="2025-11-21 11:26:14.297226561 +0000 UTC m=+6319.406369059" lastFinishedPulling="2025-11-21 11:26:19.946795969 +0000 UTC m=+6325.055938467" observedRunningTime="2025-11-21 11:26:21.088222378 +0000 UTC m=+6326.197364896" watchObservedRunningTime="2025-11-21 11:26:21.096948758 +0000 UTC m=+6326.206091256" Nov 21 11:26:21 crc kubenswrapper[4972]: I1121 11:26:21.771062 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aca3ab65-ceb2-4d2f-8310-31573a28f17b" path="/var/lib/kubelet/pods/aca3ab65-ceb2-4d2f-8310-31573a28f17b/volumes" Nov 21 11:26:22 crc kubenswrapper[4972]: I1121 11:26:22.028194 4972 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-cell0-cell-mapping-fmtr6"] Nov 21 11:26:22 crc kubenswrapper[4972]: I1121 11:26:22.039339 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-fmtr6"] Nov 21 11:26:22 crc kubenswrapper[4972]: I1121 11:26:22.072770 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 11:26:23 crc kubenswrapper[4972]: I1121 11:26:23.776279 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adefeeef-e030-49b7-ade0-f4b728b3de7a" path="/var/lib/kubelet/pods/adefeeef-e030-49b7-ade0-f4b728b3de7a/volumes" Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.030495 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-slr78"] Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.032507 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-slr78" Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.044950 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-slr78"] Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.126180 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bc4ff6c-b869-4c9d-b8a5-4c53ae802499-operator-scripts\") pod \"aodh-db-create-slr78\" (UID: \"9bc4ff6c-b869-4c9d-b8a5-4c53ae802499\") " pod="openstack/aodh-db-create-slr78" Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.126268 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92pqj\" (UniqueName: \"kubernetes.io/projected/9bc4ff6c-b869-4c9d-b8a5-4c53ae802499-kube-api-access-92pqj\") pod \"aodh-db-create-slr78\" (UID: \"9bc4ff6c-b869-4c9d-b8a5-4c53ae802499\") " pod="openstack/aodh-db-create-slr78" Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.141812 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-d5da-account-create-xlssc"] Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.144151 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-d5da-account-create-xlssc" Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.147160 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.156718 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-d5da-account-create-xlssc"] Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.228427 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92pqj\" (UniqueName: \"kubernetes.io/projected/9bc4ff6c-b869-4c9d-b8a5-4c53ae802499-kube-api-access-92pqj\") pod \"aodh-db-create-slr78\" (UID: \"9bc4ff6c-b869-4c9d-b8a5-4c53ae802499\") " pod="openstack/aodh-db-create-slr78" Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.228472 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10-operator-scripts\") pod \"aodh-d5da-account-create-xlssc\" (UID: \"e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10\") " pod="openstack/aodh-d5da-account-create-xlssc" Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.228508 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw7k7\" (UniqueName: \"kubernetes.io/projected/e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10-kube-api-access-cw7k7\") pod \"aodh-d5da-account-create-xlssc\" (UID: \"e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10\") " pod="openstack/aodh-d5da-account-create-xlssc" Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.229078 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bc4ff6c-b869-4c9d-b8a5-4c53ae802499-operator-scripts\") pod \"aodh-db-create-slr78\" (UID: \"9bc4ff6c-b869-4c9d-b8a5-4c53ae802499\") " pod="openstack/aodh-db-create-slr78" Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.230048 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bc4ff6c-b869-4c9d-b8a5-4c53ae802499-operator-scripts\") pod \"aodh-db-create-slr78\" (UID: \"9bc4ff6c-b869-4c9d-b8a5-4c53ae802499\") " pod="openstack/aodh-db-create-slr78" Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.255008 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92pqj\" (UniqueName: \"kubernetes.io/projected/9bc4ff6c-b869-4c9d-b8a5-4c53ae802499-kube-api-access-92pqj\") pod \"aodh-db-create-slr78\" (UID: \"9bc4ff6c-b869-4c9d-b8a5-4c53ae802499\") " pod="openstack/aodh-db-create-slr78" Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.330531 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10-operator-scripts\") pod \"aodh-d5da-account-create-xlssc\" (UID: \"e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10\") " pod="openstack/aodh-d5da-account-create-xlssc" Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.330580 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw7k7\" (UniqueName: \"kubernetes.io/projected/e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10-kube-api-access-cw7k7\") pod \"aodh-d5da-account-create-xlssc\" (UID: \"e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10\") " pod="openstack/aodh-d5da-account-create-xlssc" 
Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.331363 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10-operator-scripts\") pod \"aodh-d5da-account-create-xlssc\" (UID: \"e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10\") " pod="openstack/aodh-d5da-account-create-xlssc" Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.356762 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw7k7\" (UniqueName: \"kubernetes.io/projected/e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10-kube-api-access-cw7k7\") pod \"aodh-d5da-account-create-xlssc\" (UID: \"e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10\") " pod="openstack/aodh-d5da-account-create-xlssc" Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.432489 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-slr78" Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.465240 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-d5da-account-create-xlssc" Nov 21 11:26:24 crc kubenswrapper[4972]: W1121 11:26:24.988445 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bc4ff6c_b869_4c9d_b8a5_4c53ae802499.slice/crio-91f933b8b69671858cbeebe3ff1ee529d366c0a313bdd26140dcb667bc31c277 WatchSource:0}: Error finding container 91f933b8b69671858cbeebe3ff1ee529d366c0a313bdd26140dcb667bc31c277: Status 404 returned error can't find the container with id 91f933b8b69671858cbeebe3ff1ee529d366c0a313bdd26140dcb667bc31c277 Nov 21 11:26:24 crc kubenswrapper[4972]: I1121 11:26:24.988458 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-slr78"] Nov 21 11:26:25 crc kubenswrapper[4972]: I1121 11:26:25.087654 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-d5da-account-create-xlssc"] Nov 21 11:26:25 crc kubenswrapper[4972]: W1121 11:26:25.114302 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8f56c97_4a4b_4243_9cef_0a0cfcfc7f10.slice/crio-4f84f558f513cad6e305c27a883fcc96fc03c81b00eac68185d8c8cf80853402 WatchSource:0}: Error finding container 4f84f558f513cad6e305c27a883fcc96fc03c81b00eac68185d8c8cf80853402: Status 404 returned error can't find the container with id 4f84f558f513cad6e305c27a883fcc96fc03c81b00eac68185d8c8cf80853402 Nov 21 11:26:25 crc kubenswrapper[4972]: I1121 11:26:25.118465 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-slr78" event={"ID":"9bc4ff6c-b869-4c9d-b8a5-4c53ae802499","Type":"ContainerStarted","Data":"91f933b8b69671858cbeebe3ff1ee529d366c0a313bdd26140dcb667bc31c277"} Nov 21 11:26:26 crc kubenswrapper[4972]: I1121 11:26:26.136604 4972 generic.go:334] "Generic (PLEG): container finished" podID="9bc4ff6c-b869-4c9d-b8a5-4c53ae802499" containerID="a91b4938b29e220872e608e376f91f574284e53be09c0f358c1acea7b4244c32" exitCode=0 Nov 21 11:26:26 crc kubenswrapper[4972]: I1121 11:26:26.136704 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-slr78" event={"ID":"9bc4ff6c-b869-4c9d-b8a5-4c53ae802499","Type":"ContainerDied","Data":"a91b4938b29e220872e608e376f91f574284e53be09c0f358c1acea7b4244c32"} Nov 21 11:26:26 crc kubenswrapper[4972]: I1121 11:26:26.139096 4972 generic.go:334] "Generic (PLEG): container finished" 
podID="e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10" containerID="4d1c7146b08d1b5dea1474e0b1e7463cbff363070ea43f94866e242adc3ae65c" exitCode=0 Nov 21 11:26:26 crc kubenswrapper[4972]: I1121 11:26:26.139158 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-d5da-account-create-xlssc" event={"ID":"e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10","Type":"ContainerDied","Data":"4d1c7146b08d1b5dea1474e0b1e7463cbff363070ea43f94866e242adc3ae65c"} Nov 21 11:26:26 crc kubenswrapper[4972]: I1121 11:26:26.139252 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-d5da-account-create-xlssc" event={"ID":"e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10","Type":"ContainerStarted","Data":"4f84f558f513cad6e305c27a883fcc96fc03c81b00eac68185d8c8cf80853402"} Nov 21 11:26:26 crc kubenswrapper[4972]: I1121 11:26:26.178806 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:26:26 crc kubenswrapper[4972]: I1121 11:26:26.178944 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:26:27 crc kubenswrapper[4972]: I1121 11:26:27.536637 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-d5da-account-create-xlssc" Nov 21 11:26:27 crc kubenswrapper[4972]: I1121 11:26:27.602766 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw7k7\" (UniqueName: \"kubernetes.io/projected/e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10-kube-api-access-cw7k7\") pod \"e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10\" (UID: \"e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10\") " Nov 21 11:26:27 crc kubenswrapper[4972]: I1121 11:26:27.602875 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10-operator-scripts\") pod \"e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10\" (UID: \"e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10\") " Nov 21 11:26:27 crc kubenswrapper[4972]: I1121 11:26:27.603862 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10" (UID: "e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:26:27 crc kubenswrapper[4972]: I1121 11:26:27.608523 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10-kube-api-access-cw7k7" (OuterVolumeSpecName: "kube-api-access-cw7k7") pod "e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10" (UID: "e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10"). InnerVolumeSpecName "kube-api-access-cw7k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:26:27 crc kubenswrapper[4972]: I1121 11:26:27.688966 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-slr78" Nov 21 11:26:27 crc kubenswrapper[4972]: I1121 11:26:27.705308 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw7k7\" (UniqueName: \"kubernetes.io/projected/e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10-kube-api-access-cw7k7\") on node \"crc\" DevicePath \"\"" Nov 21 11:26:27 crc kubenswrapper[4972]: I1121 11:26:27.705335 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:26:27 crc kubenswrapper[4972]: I1121 11:26:27.806411 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bc4ff6c-b869-4c9d-b8a5-4c53ae802499-operator-scripts\") pod \"9bc4ff6c-b869-4c9d-b8a5-4c53ae802499\" (UID: \"9bc4ff6c-b869-4c9d-b8a5-4c53ae802499\") " Nov 21 11:26:27 crc kubenswrapper[4972]: I1121 11:26:27.806643 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92pqj\" (UniqueName: \"kubernetes.io/projected/9bc4ff6c-b869-4c9d-b8a5-4c53ae802499-kube-api-access-92pqj\") pod \"9bc4ff6c-b869-4c9d-b8a5-4c53ae802499\" (UID: \"9bc4ff6c-b869-4c9d-b8a5-4c53ae802499\") " Nov 21 11:26:27 crc kubenswrapper[4972]: I1121 11:26:27.807040 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bc4ff6c-b869-4c9d-b8a5-4c53ae802499-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9bc4ff6c-b869-4c9d-b8a5-4c53ae802499" (UID: "9bc4ff6c-b869-4c9d-b8a5-4c53ae802499"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:26:27 crc kubenswrapper[4972]: I1121 11:26:27.807547 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bc4ff6c-b869-4c9d-b8a5-4c53ae802499-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:26:27 crc kubenswrapper[4972]: I1121 11:26:27.812208 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bc4ff6c-b869-4c9d-b8a5-4c53ae802499-kube-api-access-92pqj" (OuterVolumeSpecName: "kube-api-access-92pqj") pod "9bc4ff6c-b869-4c9d-b8a5-4c53ae802499" (UID: "9bc4ff6c-b869-4c9d-b8a5-4c53ae802499"). InnerVolumeSpecName "kube-api-access-92pqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:26:27 crc kubenswrapper[4972]: I1121 11:26:27.909710 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92pqj\" (UniqueName: \"kubernetes.io/projected/9bc4ff6c-b869-4c9d-b8a5-4c53ae802499-kube-api-access-92pqj\") on node \"crc\" DevicePath \"\"" Nov 21 11:26:28 crc kubenswrapper[4972]: I1121 11:26:28.171861 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-slr78" event={"ID":"9bc4ff6c-b869-4c9d-b8a5-4c53ae802499","Type":"ContainerDied","Data":"91f933b8b69671858cbeebe3ff1ee529d366c0a313bdd26140dcb667bc31c277"} Nov 21 11:26:28 crc kubenswrapper[4972]: I1121 11:26:28.171911 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91f933b8b69671858cbeebe3ff1ee529d366c0a313bdd26140dcb667bc31c277" Nov 21 11:26:28 crc kubenswrapper[4972]: I1121 11:26:28.171957 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-slr78" Nov 21 11:26:28 crc kubenswrapper[4972]: I1121 11:26:28.174331 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-d5da-account-create-xlssc" event={"ID":"e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10","Type":"ContainerDied","Data":"4f84f558f513cad6e305c27a883fcc96fc03c81b00eac68185d8c8cf80853402"} Nov 21 11:26:28 crc kubenswrapper[4972]: I1121 11:26:28.174399 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f84f558f513cad6e305c27a883fcc96fc03c81b00eac68185d8c8cf80853402" Nov 21 11:26:28 crc kubenswrapper[4972]: I1121 11:26:28.174413 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-d5da-account-create-xlssc" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.558358 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-z5wc9"] Nov 21 11:26:29 crc kubenswrapper[4972]: E1121 11:26:29.558761 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc4ff6c-b869-4c9d-b8a5-4c53ae802499" containerName="mariadb-database-create" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.558774 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc4ff6c-b869-4c9d-b8a5-4c53ae802499" containerName="mariadb-database-create" Nov 21 11:26:29 crc kubenswrapper[4972]: E1121 11:26:29.558809 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10" containerName="mariadb-account-create" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.558815 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10" containerName="mariadb-account-create" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.559103 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bc4ff6c-b869-4c9d-b8a5-4c53ae802499" containerName="mariadb-database-create" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.559131 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10" containerName="mariadb-account-create" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.559930 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-z5wc9" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.562473 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-jzrfv" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.562970 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.563716 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.564413 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.569583 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-z5wc9"] Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.687060 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffac6a90-89db-46b9-8c66-96177f795af2-config-data\") pod \"aodh-db-sync-z5wc9\" (UID: \"ffac6a90-89db-46b9-8c66-96177f795af2\") " pod="openstack/aodh-db-sync-z5wc9" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.687427 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffac6a90-89db-46b9-8c66-96177f795af2-combined-ca-bundle\") pod \"aodh-db-sync-z5wc9\" (UID: \"ffac6a90-89db-46b9-8c66-96177f795af2\") " pod="openstack/aodh-db-sync-z5wc9" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.687920 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxf54\" (UniqueName: \"kubernetes.io/projected/ffac6a90-89db-46b9-8c66-96177f795af2-kube-api-access-fxf54\") pod \"aodh-db-sync-z5wc9\" (UID: \"ffac6a90-89db-46b9-8c66-96177f795af2\") " pod="openstack/aodh-db-sync-z5wc9" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.688209 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffac6a90-89db-46b9-8c66-96177f795af2-scripts\") pod \"aodh-db-sync-z5wc9\" (UID: \"ffac6a90-89db-46b9-8c66-96177f795af2\") " pod="openstack/aodh-db-sync-z5wc9" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.790754 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffac6a90-89db-46b9-8c66-96177f795af2-combined-ca-bundle\") pod \"aodh-db-sync-z5wc9\" (UID: \"ffac6a90-89db-46b9-8c66-96177f795af2\") " pod="openstack/aodh-db-sync-z5wc9" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.791005 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxf54\" (UniqueName: \"kubernetes.io/projected/ffac6a90-89db-46b9-8c66-96177f795af2-kube-api-access-fxf54\") pod \"aodh-db-sync-z5wc9\" (UID: \"ffac6a90-89db-46b9-8c66-96177f795af2\") " pod="openstack/aodh-db-sync-z5wc9" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.791224 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffac6a90-89db-46b9-8c66-96177f795af2-scripts\") pod \"aodh-db-sync-z5wc9\" (UID: \"ffac6a90-89db-46b9-8c66-96177f795af2\") " pod="openstack/aodh-db-sync-z5wc9" Nov 21 11:26:29 crc 
kubenswrapper[4972]: I1121 11:26:29.791430 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffac6a90-89db-46b9-8c66-96177f795af2-config-data\") pod \"aodh-db-sync-z5wc9\" (UID: \"ffac6a90-89db-46b9-8c66-96177f795af2\") " pod="openstack/aodh-db-sync-z5wc9" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.798251 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffac6a90-89db-46b9-8c66-96177f795af2-combined-ca-bundle\") pod \"aodh-db-sync-z5wc9\" (UID: \"ffac6a90-89db-46b9-8c66-96177f795af2\") " pod="openstack/aodh-db-sync-z5wc9" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.798778 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffac6a90-89db-46b9-8c66-96177f795af2-scripts\") pod \"aodh-db-sync-z5wc9\" (UID: \"ffac6a90-89db-46b9-8c66-96177f795af2\") " pod="openstack/aodh-db-sync-z5wc9" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.809370 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffac6a90-89db-46b9-8c66-96177f795af2-config-data\") pod \"aodh-db-sync-z5wc9\" (UID: \"ffac6a90-89db-46b9-8c66-96177f795af2\") " pod="openstack/aodh-db-sync-z5wc9" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.818567 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxf54\" (UniqueName: \"kubernetes.io/projected/ffac6a90-89db-46b9-8c66-96177f795af2-kube-api-access-fxf54\") pod \"aodh-db-sync-z5wc9\" (UID: \"ffac6a90-89db-46b9-8c66-96177f795af2\") " pod="openstack/aodh-db-sync-z5wc9" Nov 21 11:26:29 crc kubenswrapper[4972]: I1121 11:26:29.883277 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-z5wc9" Nov 21 11:26:30 crc kubenswrapper[4972]: I1121 11:26:30.482755 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-z5wc9"] Nov 21 11:26:30 crc kubenswrapper[4972]: W1121 11:26:30.486038 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffac6a90_89db_46b9_8c66_96177f795af2.slice/crio-687e4505b36c38c14f6e4c3136050fcdb85421319e307d812e1f21a95447a013 WatchSource:0}: Error finding container 687e4505b36c38c14f6e4c3136050fcdb85421319e307d812e1f21a95447a013: Status 404 returned error can't find the container with id 687e4505b36c38c14f6e4c3136050fcdb85421319e307d812e1f21a95447a013 Nov 21 11:26:31 crc kubenswrapper[4972]: I1121 11:26:31.206148 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-z5wc9" event={"ID":"ffac6a90-89db-46b9-8c66-96177f795af2","Type":"ContainerStarted","Data":"687e4505b36c38c14f6e4c3136050fcdb85421319e307d812e1f21a95447a013"} Nov 21 11:26:35 crc kubenswrapper[4972]: I1121 11:26:35.223945 4972 scope.go:117] "RemoveContainer" containerID="2ac79c90e11a93881e44b12a64e4b7fe99824c2949d7bd4e8f7380ffe95d6500" Nov 21 11:26:35 crc kubenswrapper[4972]: I1121 11:26:35.259181 4972 scope.go:117] "RemoveContainer" containerID="9ab750ed9ef66c0fc52d6551c1c182aeb67ed253a65038d638b4ab811a858128" Nov 21 11:26:35 crc kubenswrapper[4972]: I1121 11:26:35.260985 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-z5wc9" event={"ID":"ffac6a90-89db-46b9-8c66-96177f795af2","Type":"ContainerStarted","Data":"22bb5f415d5c075e5eb476be643bec8e4a9a09a2be654d695b3a104477c7b16f"} Nov 21 11:26:35 crc kubenswrapper[4972]: I1121 11:26:35.292679 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-z5wc9" podStartSLOduration=1.925383037 podStartE2EDuration="6.292659987s" podCreationTimestamp="2025-11-21 11:26:29 +0000 UTC" firstStartedPulling="2025-11-21 11:26:30.488500986 +0000 UTC m=+6335.597643494" lastFinishedPulling="2025-11-21 11:26:34.855777906 +0000 UTC m=+6339.964920444" observedRunningTime="2025-11-21 11:26:35.283169007 +0000 UTC m=+6340.392311505" watchObservedRunningTime="2025-11-21 11:26:35.292659987 +0000 UTC m=+6340.401802485" Nov 21 11:26:35 crc kubenswrapper[4972]: I1121 11:26:35.349489 4972 scope.go:117] "RemoveContainer" containerID="23656fea8eda7e69489448748bd2fc8239b5c9530fa9ed22c4dd977fefd0b3d9" Nov 21 11:26:35 crc kubenswrapper[4972]: I1121 11:26:35.384456 4972 scope.go:117] "RemoveContainer" containerID="aa1611f54350d3bb2eefd79418e24029f6d7059877fc6e744d2a0c523b308e96" Nov 21 11:26:35 crc kubenswrapper[4972]: I1121 11:26:35.425927 4972 scope.go:117] "RemoveContainer" containerID="fa5388b849b0af756eb934797c853c4869053c4f9985bccbe633da234e108278" Nov 21 11:26:35 crc kubenswrapper[4972]: I1121 11:26:35.469937 4972 scope.go:117] "RemoveContainer" containerID="eb35947f3c5d99fa806dab5337412caa70c41f5ac9689e1a2ff8d39dd1d89601" Nov 21 11:26:35 crc kubenswrapper[4972]: I1121 11:26:35.505401 4972 scope.go:117] "RemoveContainer" containerID="87377a21a0caa5160ffbfe980690cdb412d303c1fb285c4fb153d814f435eb96" Nov 21 11:26:35 crc kubenswrapper[4972]: I1121 11:26:35.535422 4972 scope.go:117] "RemoveContainer" containerID="eabc9c7cb994c59b59ed7c9793f3bd33583370edbbc9fccfd3711edca267d83d" Nov 21 11:26:35 crc kubenswrapper[4972]: I1121 11:26:35.564736 4972 scope.go:117] "RemoveContainer" 
containerID="585d35533198d77197bb6e79bd5aa5dc2fa72cc3cabd84fc2d52d2db35f47e83" Nov 21 11:26:35 crc kubenswrapper[4972]: I1121 11:26:35.592823 4972 scope.go:117] "RemoveContainer" containerID="57e6e52ceef13dc38b963dce1bf562cded2c4580091c1508938b28a1f91e375b" Nov 21 11:26:36 crc kubenswrapper[4972]: I1121 11:26:36.046559 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-qrs7s"] Nov 21 11:26:36 crc kubenswrapper[4972]: I1121 11:26:36.064546 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-qrs7s"] Nov 21 11:26:37 crc kubenswrapper[4972]: I1121 11:26:37.302488 4972 generic.go:334] "Generic (PLEG): container finished" podID="ffac6a90-89db-46b9-8c66-96177f795af2" containerID="22bb5f415d5c075e5eb476be643bec8e4a9a09a2be654d695b3a104477c7b16f" exitCode=0 Nov 21 11:26:37 crc kubenswrapper[4972]: I1121 11:26:37.302656 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-z5wc9" event={"ID":"ffac6a90-89db-46b9-8c66-96177f795af2","Type":"ContainerDied","Data":"22bb5f415d5c075e5eb476be643bec8e4a9a09a2be654d695b3a104477c7b16f"} Nov 21 11:26:37 crc kubenswrapper[4972]: I1121 11:26:37.774894 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="377e513a-7c18-4d50-a092-5942a3bd679a" path="/var/lib/kubelet/pods/377e513a-7c18-4d50-a092-5942a3bd679a/volumes" Nov 21 11:26:38 crc kubenswrapper[4972]: I1121 11:26:38.832414 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-z5wc9" Nov 21 11:26:38 crc kubenswrapper[4972]: I1121 11:26:38.898271 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffac6a90-89db-46b9-8c66-96177f795af2-scripts\") pod \"ffac6a90-89db-46b9-8c66-96177f795af2\" (UID: \"ffac6a90-89db-46b9-8c66-96177f795af2\") " Nov 21 11:26:38 crc kubenswrapper[4972]: I1121 11:26:38.898361 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxf54\" (UniqueName: \"kubernetes.io/projected/ffac6a90-89db-46b9-8c66-96177f795af2-kube-api-access-fxf54\") pod \"ffac6a90-89db-46b9-8c66-96177f795af2\" (UID: \"ffac6a90-89db-46b9-8c66-96177f795af2\") " Nov 21 11:26:38 crc kubenswrapper[4972]: I1121 11:26:38.898454 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffac6a90-89db-46b9-8c66-96177f795af2-config-data\") pod \"ffac6a90-89db-46b9-8c66-96177f795af2\" (UID: \"ffac6a90-89db-46b9-8c66-96177f795af2\") " Nov 21 11:26:38 crc kubenswrapper[4972]: I1121 11:26:38.898730 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffac6a90-89db-46b9-8c66-96177f795af2-combined-ca-bundle\") pod \"ffac6a90-89db-46b9-8c66-96177f795af2\" (UID: \"ffac6a90-89db-46b9-8c66-96177f795af2\") " Nov 21 11:26:38 crc kubenswrapper[4972]: I1121 11:26:38.912026 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffac6a90-89db-46b9-8c66-96177f795af2-scripts" (OuterVolumeSpecName: "scripts") pod "ffac6a90-89db-46b9-8c66-96177f795af2" (UID: "ffac6a90-89db-46b9-8c66-96177f795af2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:26:38 crc kubenswrapper[4972]: I1121 11:26:38.912180 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffac6a90-89db-46b9-8c66-96177f795af2-kube-api-access-fxf54" (OuterVolumeSpecName: "kube-api-access-fxf54") pod "ffac6a90-89db-46b9-8c66-96177f795af2" (UID: "ffac6a90-89db-46b9-8c66-96177f795af2"). InnerVolumeSpecName "kube-api-access-fxf54". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:26:38 crc kubenswrapper[4972]: I1121 11:26:38.927796 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffac6a90-89db-46b9-8c66-96177f795af2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffac6a90-89db-46b9-8c66-96177f795af2" (UID: "ffac6a90-89db-46b9-8c66-96177f795af2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:26:38 crc kubenswrapper[4972]: I1121 11:26:38.943063 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffac6a90-89db-46b9-8c66-96177f795af2-config-data" (OuterVolumeSpecName: "config-data") pod "ffac6a90-89db-46b9-8c66-96177f795af2" (UID: "ffac6a90-89db-46b9-8c66-96177f795af2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.001601 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffac6a90-89db-46b9-8c66-96177f795af2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.001654 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffac6a90-89db-46b9-8c66-96177f795af2-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.001674 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxf54\" (UniqueName: \"kubernetes.io/projected/ffac6a90-89db-46b9-8c66-96177f795af2-kube-api-access-fxf54\") on node \"crc\" DevicePath \"\"" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.001695 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffac6a90-89db-46b9-8c66-96177f795af2-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.346343 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-z5wc9" event={"ID":"ffac6a90-89db-46b9-8c66-96177f795af2","Type":"ContainerDied","Data":"687e4505b36c38c14f6e4c3136050fcdb85421319e307d812e1f21a95447a013"} Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.346378 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="687e4505b36c38c14f6e4c3136050fcdb85421319e307d812e1f21a95447a013" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.346406 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-z5wc9" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.619475 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 21 11:26:39 crc kubenswrapper[4972]: E1121 11:26:39.620299 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffac6a90-89db-46b9-8c66-96177f795af2" containerName="aodh-db-sync" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.620411 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffac6a90-89db-46b9-8c66-96177f795af2" containerName="aodh-db-sync" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.620812 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffac6a90-89db-46b9-8c66-96177f795af2" containerName="aodh-db-sync" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.623333 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.631161 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.631905 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-jzrfv" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.631984 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.651482 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.716460 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef56310-c266-4fdc-b4c1-fb03319b5196-combined-ca-bundle\") pod \"aodh-0\" (UID: \"eef56310-c266-4fdc-b4c1-fb03319b5196\") " pod="openstack/aodh-0" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.716609 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjlqb\" (UniqueName: \"kubernetes.io/projected/eef56310-c266-4fdc-b4c1-fb03319b5196-kube-api-access-fjlqb\") pod \"aodh-0\" (UID: \"eef56310-c266-4fdc-b4c1-fb03319b5196\") " pod="openstack/aodh-0" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.716663 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef56310-c266-4fdc-b4c1-fb03319b5196-config-data\") pod \"aodh-0\" (UID: \"eef56310-c266-4fdc-b4c1-fb03319b5196\") " pod="openstack/aodh-0" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.716726 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef56310-c266-4fdc-b4c1-fb03319b5196-scripts\") pod \"aodh-0\" (UID: \"eef56310-c266-4fdc-b4c1-fb03319b5196\") " pod="openstack/aodh-0" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.819298 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef56310-c266-4fdc-b4c1-fb03319b5196-combined-ca-bundle\") pod \"aodh-0\" (UID: \"eef56310-c266-4fdc-b4c1-fb03319b5196\") " pod="openstack/aodh-0" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.819462 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fjlqb\" (UniqueName: \"kubernetes.io/projected/eef56310-c266-4fdc-b4c1-fb03319b5196-kube-api-access-fjlqb\") pod \"aodh-0\" (UID: \"eef56310-c266-4fdc-b4c1-fb03319b5196\") " pod="openstack/aodh-0" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.819514 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef56310-c266-4fdc-b4c1-fb03319b5196-config-data\") pod \"aodh-0\" (UID: \"eef56310-c266-4fdc-b4c1-fb03319b5196\") " pod="openstack/aodh-0" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.819584 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef56310-c266-4fdc-b4c1-fb03319b5196-scripts\") pod \"aodh-0\" (UID: \"eef56310-c266-4fdc-b4c1-fb03319b5196\") " pod="openstack/aodh-0" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.824131 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef56310-c266-4fdc-b4c1-fb03319b5196-combined-ca-bundle\") pod \"aodh-0\" (UID: \"eef56310-c266-4fdc-b4c1-fb03319b5196\") " pod="openstack/aodh-0" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.825747 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef56310-c266-4fdc-b4c1-fb03319b5196-config-data\") pod \"aodh-0\" (UID: \"eef56310-c266-4fdc-b4c1-fb03319b5196\") " pod="openstack/aodh-0" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.844210 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef56310-c266-4fdc-b4c1-fb03319b5196-scripts\") pod \"aodh-0\" (UID: \"eef56310-c266-4fdc-b4c1-fb03319b5196\") " pod="openstack/aodh-0" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.850118 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjlqb\" (UniqueName: \"kubernetes.io/projected/eef56310-c266-4fdc-b4c1-fb03319b5196-kube-api-access-fjlqb\") pod \"aodh-0\" (UID: \"eef56310-c266-4fdc-b4c1-fb03319b5196\") " pod="openstack/aodh-0" Nov 21 11:26:39 crc kubenswrapper[4972]: I1121 11:26:39.958419 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 21 11:26:40 crc kubenswrapper[4972]: I1121 11:26:40.507408 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 21 11:26:41 crc kubenswrapper[4972]: I1121 11:26:41.361642 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eef56310-c266-4fdc-b4c1-fb03319b5196","Type":"ContainerStarted","Data":"c32f7215270f8d22bff6e0bc5640fca6d58dd555754620b07ef4a973eed653e4"} Nov 21 11:26:41 crc kubenswrapper[4972]: I1121 11:26:41.361924 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eef56310-c266-4fdc-b4c1-fb03319b5196","Type":"ContainerStarted","Data":"15f5d834ee97816ec40b02946e1bb42863793d41c69e842f23ddfe49d2e07f25"} Nov 21 11:26:42 crc kubenswrapper[4972]: I1121 11:26:42.041838 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 11:26:42 crc kubenswrapper[4972]: I1121 11:26:42.042365 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" containerName="ceilometer-central-agent" containerID="cri-o://8f358bf3cc6bbb9d3255aee47a03042bcf9ec80bc6af8e61a2bbb25d0bb566c9" gracePeriod=30 Nov 21 11:26:42 crc kubenswrapper[4972]: I1121 11:26:42.042504 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" containerName="proxy-httpd" containerID="cri-o://d52604038789171debf6615150e3644f90db952dd51b96a522f3dd48a454e543" gracePeriod=30 Nov 21 11:26:42 crc kubenswrapper[4972]: I1121 11:26:42.042565 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" containerName="sg-core" containerID="cri-o://167eb734f0c9a02179a6a578e2007923bb7759eb9452bffcd36bd91761c6ba8b" gracePeriod=30 Nov 21 11:26:42 crc kubenswrapper[4972]: I1121 11:26:42.042603 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" containerName="ceilometer-notification-agent" containerID="cri-o://9995bad969cabaf903cce9eb5cc376c138fd16700b75dac10528a221bb7306f4" gracePeriod=30 Nov 21 11:26:42 crc kubenswrapper[4972]: I1121 11:26:42.057910 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.1.132:3000/\": EOF" Nov 21 11:26:42 crc kubenswrapper[4972]: I1121 11:26:42.381548 4972 generic.go:334] "Generic (PLEG): container finished" podID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" containerID="d52604038789171debf6615150e3644f90db952dd51b96a522f3dd48a454e543" exitCode=0 Nov 21 11:26:42 crc kubenswrapper[4972]: I1121 11:26:42.381930 4972 generic.go:334] "Generic (PLEG): container finished" podID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" containerID="167eb734f0c9a02179a6a578e2007923bb7759eb9452bffcd36bd91761c6ba8b" exitCode=2 Nov 21 11:26:42 crc kubenswrapper[4972]: I1121 11:26:42.381971 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c5326d6-e3d9-4d8d-a839-78c68fe74851","Type":"ContainerDied","Data":"d52604038789171debf6615150e3644f90db952dd51b96a522f3dd48a454e543"} Nov 21 11:26:42 crc kubenswrapper[4972]: I1121 11:26:42.382007 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"7c5326d6-e3d9-4d8d-a839-78c68fe74851","Type":"ContainerDied","Data":"167eb734f0c9a02179a6a578e2007923bb7759eb9452bffcd36bd91761c6ba8b"} Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.033885 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.184250 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-combined-ca-bundle\") pod \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.184310 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c5326d6-e3d9-4d8d-a839-78c68fe74851-run-httpd\") pod \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.184342 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c5326d6-e3d9-4d8d-a839-78c68fe74851-log-httpd\") pod \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.184418 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-scripts\") pod \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.184446 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-config-data\") pod \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.184553 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsnrq\" (UniqueName: \"kubernetes.io/projected/7c5326d6-e3d9-4d8d-a839-78c68fe74851-kube-api-access-lsnrq\") pod \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.184636 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-sg-core-conf-yaml\") pod \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\" (UID: \"7c5326d6-e3d9-4d8d-a839-78c68fe74851\") " Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.185279 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c5326d6-e3d9-4d8d-a839-78c68fe74851-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7c5326d6-e3d9-4d8d-a839-78c68fe74851" (UID: "7c5326d6-e3d9-4d8d-a839-78c68fe74851"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.185370 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c5326d6-e3d9-4d8d-a839-78c68fe74851-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7c5326d6-e3d9-4d8d-a839-78c68fe74851" (UID: "7c5326d6-e3d9-4d8d-a839-78c68fe74851"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.188590 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-scripts" (OuterVolumeSpecName: "scripts") pod "7c5326d6-e3d9-4d8d-a839-78c68fe74851" (UID: "7c5326d6-e3d9-4d8d-a839-78c68fe74851"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.190416 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c5326d6-e3d9-4d8d-a839-78c68fe74851-kube-api-access-lsnrq" (OuterVolumeSpecName: "kube-api-access-lsnrq") pod "7c5326d6-e3d9-4d8d-a839-78c68fe74851" (UID: "7c5326d6-e3d9-4d8d-a839-78c68fe74851"). InnerVolumeSpecName "kube-api-access-lsnrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.239855 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7c5326d6-e3d9-4d8d-a839-78c68fe74851" (UID: "7c5326d6-e3d9-4d8d-a839-78c68fe74851"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.286592 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsnrq\" (UniqueName: \"kubernetes.io/projected/7c5326d6-e3d9-4d8d-a839-78c68fe74851-kube-api-access-lsnrq\") on node \"crc\" DevicePath \"\"" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.286625 4972 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.286634 4972 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c5326d6-e3d9-4d8d-a839-78c68fe74851-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.286644 4972 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c5326d6-e3d9-4d8d-a839-78c68fe74851-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.286654 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.348585 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c5326d6-e3d9-4d8d-a839-78c68fe74851" (UID: "7c5326d6-e3d9-4d8d-a839-78c68fe74851"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.363509 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-config-data" (OuterVolumeSpecName: "config-data") pod "7c5326d6-e3d9-4d8d-a839-78c68fe74851" (UID: "7c5326d6-e3d9-4d8d-a839-78c68fe74851"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.388725 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.388758 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c5326d6-e3d9-4d8d-a839-78c68fe74851-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.400799 4972 generic.go:334] "Generic (PLEG): container finished" podID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" containerID="9995bad969cabaf903cce9eb5cc376c138fd16700b75dac10528a221bb7306f4" exitCode=0 Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.401055 4972 generic.go:334] "Generic (PLEG): container finished" podID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" containerID="8f358bf3cc6bbb9d3255aee47a03042bcf9ec80bc6af8e61a2bbb25d0bb566c9" exitCode=0 Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.401190 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c5326d6-e3d9-4d8d-a839-78c68fe74851","Type":"ContainerDied","Data":"9995bad969cabaf903cce9eb5cc376c138fd16700b75dac10528a221bb7306f4"} Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.401348 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c5326d6-e3d9-4d8d-a839-78c68fe74851","Type":"ContainerDied","Data":"8f358bf3cc6bbb9d3255aee47a03042bcf9ec80bc6af8e61a2bbb25d0bb566c9"} Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.401505 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c5326d6-e3d9-4d8d-a839-78c68fe74851","Type":"ContainerDied","Data":"b6a6e9cab04e4dc9838cc1666818fef8f9b33473c0874a75e9fe1dae76d2ca33"} Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.401425 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.401413 4972 scope.go:117] "RemoveContainer" containerID="d52604038789171debf6615150e3644f90db952dd51b96a522f3dd48a454e543" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.403342 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eef56310-c266-4fdc-b4c1-fb03319b5196","Type":"ContainerStarted","Data":"ffd699207a7e2f38e9b806ae50fc9541dcbead591aa03aa6cdcf1cc766f8a320"} Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.433101 4972 scope.go:117] "RemoveContainer" containerID="167eb734f0c9a02179a6a578e2007923bb7759eb9452bffcd36bd91761c6ba8b" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.444382 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.471466 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.478708 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 11:26:43 crc kubenswrapper[4972]: E1121 11:26:43.479638 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" containerName="sg-core" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.479737 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" containerName="sg-core" Nov 21 11:26:43 crc kubenswrapper[4972]: E1121 11:26:43.479792 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" containerName="ceilometer-notification-agent" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.479808 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" containerName="ceilometer-notification-agent" Nov 21 11:26:43 crc kubenswrapper[4972]: E1121 11:26:43.479854 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" containerName="proxy-httpd" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.479867 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" containerName="proxy-httpd" Nov 21 11:26:43 crc kubenswrapper[4972]: E1121 11:26:43.479897 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" containerName="ceilometer-central-agent" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.479909 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" containerName="ceilometer-central-agent" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.480120 4972 scope.go:117] "RemoveContainer" containerID="9995bad969cabaf903cce9eb5cc376c138fd16700b75dac10528a221bb7306f4" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.480280 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" containerName="ceilometer-central-agent" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.480304 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" containerName="sg-core" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.480329 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" 
containerName="ceilometer-notification-agent" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.480378 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" containerName="proxy-httpd" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.483820 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.486125 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.486342 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.490965 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.491869 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqjpw\" (UniqueName: \"kubernetes.io/projected/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-kube-api-access-vqjpw\") pod \"ceilometer-0\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.491982 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.492124 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-config-data\") pod \"ceilometer-0\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.492281 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.492391 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-log-httpd\") pod \"ceilometer-0\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.492503 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-run-httpd\") pod \"ceilometer-0\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.492609 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-scripts\") pod \"ceilometer-0\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 
11:26:43.560439 4972 scope.go:117] "RemoveContainer" containerID="8f358bf3cc6bbb9d3255aee47a03042bcf9ec80bc6af8e61a2bbb25d0bb566c9" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.585084 4972 scope.go:117] "RemoveContainer" containerID="d52604038789171debf6615150e3644f90db952dd51b96a522f3dd48a454e543" Nov 21 11:26:43 crc kubenswrapper[4972]: E1121 11:26:43.586967 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d52604038789171debf6615150e3644f90db952dd51b96a522f3dd48a454e543\": container with ID starting with d52604038789171debf6615150e3644f90db952dd51b96a522f3dd48a454e543 not found: ID does not exist" containerID="d52604038789171debf6615150e3644f90db952dd51b96a522f3dd48a454e543" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.587014 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d52604038789171debf6615150e3644f90db952dd51b96a522f3dd48a454e543"} err="failed to get container status \"d52604038789171debf6615150e3644f90db952dd51b96a522f3dd48a454e543\": rpc error: code = NotFound desc = could not find container \"d52604038789171debf6615150e3644f90db952dd51b96a522f3dd48a454e543\": container with ID starting with d52604038789171debf6615150e3644f90db952dd51b96a522f3dd48a454e543 not found: ID does not exist" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.587037 4972 scope.go:117] "RemoveContainer" containerID="167eb734f0c9a02179a6a578e2007923bb7759eb9452bffcd36bd91761c6ba8b" Nov 21 11:26:43 crc kubenswrapper[4972]: E1121 11:26:43.587467 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"167eb734f0c9a02179a6a578e2007923bb7759eb9452bffcd36bd91761c6ba8b\": container with ID starting with 167eb734f0c9a02179a6a578e2007923bb7759eb9452bffcd36bd91761c6ba8b not found: ID does not exist" containerID="167eb734f0c9a02179a6a578e2007923bb7759eb9452bffcd36bd91761c6ba8b" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.587487 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"167eb734f0c9a02179a6a578e2007923bb7759eb9452bffcd36bd91761c6ba8b"} err="failed to get container status \"167eb734f0c9a02179a6a578e2007923bb7759eb9452bffcd36bd91761c6ba8b\": rpc error: code = NotFound desc = could not find container \"167eb734f0c9a02179a6a578e2007923bb7759eb9452bffcd36bd91761c6ba8b\": container with ID starting with 167eb734f0c9a02179a6a578e2007923bb7759eb9452bffcd36bd91761c6ba8b not found: ID does not exist" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.587502 4972 scope.go:117] "RemoveContainer" containerID="9995bad969cabaf903cce9eb5cc376c138fd16700b75dac10528a221bb7306f4" Nov 21 11:26:43 crc kubenswrapper[4972]: E1121 11:26:43.587976 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9995bad969cabaf903cce9eb5cc376c138fd16700b75dac10528a221bb7306f4\": container with ID starting with 9995bad969cabaf903cce9eb5cc376c138fd16700b75dac10528a221bb7306f4 not found: ID does not exist" containerID="9995bad969cabaf903cce9eb5cc376c138fd16700b75dac10528a221bb7306f4" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.587997 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9995bad969cabaf903cce9eb5cc376c138fd16700b75dac10528a221bb7306f4"} err="failed to get container status 
\"9995bad969cabaf903cce9eb5cc376c138fd16700b75dac10528a221bb7306f4\": rpc error: code = NotFound desc = could not find container \"9995bad969cabaf903cce9eb5cc376c138fd16700b75dac10528a221bb7306f4\": container with ID starting with 9995bad969cabaf903cce9eb5cc376c138fd16700b75dac10528a221bb7306f4 not found: ID does not exist" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.588011 4972 scope.go:117] "RemoveContainer" containerID="8f358bf3cc6bbb9d3255aee47a03042bcf9ec80bc6af8e61a2bbb25d0bb566c9" Nov 21 11:26:43 crc kubenswrapper[4972]: E1121 11:26:43.588210 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f358bf3cc6bbb9d3255aee47a03042bcf9ec80bc6af8e61a2bbb25d0bb566c9\": container with ID starting with 8f358bf3cc6bbb9d3255aee47a03042bcf9ec80bc6af8e61a2bbb25d0bb566c9 not found: ID does not exist" containerID="8f358bf3cc6bbb9d3255aee47a03042bcf9ec80bc6af8e61a2bbb25d0bb566c9" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.588249 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f358bf3cc6bbb9d3255aee47a03042bcf9ec80bc6af8e61a2bbb25d0bb566c9"} err="failed to get container status \"8f358bf3cc6bbb9d3255aee47a03042bcf9ec80bc6af8e61a2bbb25d0bb566c9\": rpc error: code = NotFound desc = could not find container \"8f358bf3cc6bbb9d3255aee47a03042bcf9ec80bc6af8e61a2bbb25d0bb566c9\": container with ID starting with 8f358bf3cc6bbb9d3255aee47a03042bcf9ec80bc6af8e61a2bbb25d0bb566c9 not found: ID does not exist" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.588263 4972 scope.go:117] "RemoveContainer" containerID="d52604038789171debf6615150e3644f90db952dd51b96a522f3dd48a454e543" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.588429 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d52604038789171debf6615150e3644f90db952dd51b96a522f3dd48a454e543"} err="failed to get container status \"d52604038789171debf6615150e3644f90db952dd51b96a522f3dd48a454e543\": rpc error: code = NotFound desc = could not find container \"d52604038789171debf6615150e3644f90db952dd51b96a522f3dd48a454e543\": container with ID starting with d52604038789171debf6615150e3644f90db952dd51b96a522f3dd48a454e543 not found: ID does not exist" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.588449 4972 scope.go:117] "RemoveContainer" containerID="167eb734f0c9a02179a6a578e2007923bb7759eb9452bffcd36bd91761c6ba8b" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.588611 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"167eb734f0c9a02179a6a578e2007923bb7759eb9452bffcd36bd91761c6ba8b"} err="failed to get container status \"167eb734f0c9a02179a6a578e2007923bb7759eb9452bffcd36bd91761c6ba8b\": rpc error: code = NotFound desc = could not find container \"167eb734f0c9a02179a6a578e2007923bb7759eb9452bffcd36bd91761c6ba8b\": container with ID starting with 167eb734f0c9a02179a6a578e2007923bb7759eb9452bffcd36bd91761c6ba8b not found: ID does not exist" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.588632 4972 scope.go:117] "RemoveContainer" containerID="9995bad969cabaf903cce9eb5cc376c138fd16700b75dac10528a221bb7306f4" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.588779 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9995bad969cabaf903cce9eb5cc376c138fd16700b75dac10528a221bb7306f4"} err="failed to get container status 
\"9995bad969cabaf903cce9eb5cc376c138fd16700b75dac10528a221bb7306f4\": rpc error: code = NotFound desc = could not find container \"9995bad969cabaf903cce9eb5cc376c138fd16700b75dac10528a221bb7306f4\": container with ID starting with 9995bad969cabaf903cce9eb5cc376c138fd16700b75dac10528a221bb7306f4 not found: ID does not exist" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.588800 4972 scope.go:117] "RemoveContainer" containerID="8f358bf3cc6bbb9d3255aee47a03042bcf9ec80bc6af8e61a2bbb25d0bb566c9" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.589027 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f358bf3cc6bbb9d3255aee47a03042bcf9ec80bc6af8e61a2bbb25d0bb566c9"} err="failed to get container status \"8f358bf3cc6bbb9d3255aee47a03042bcf9ec80bc6af8e61a2bbb25d0bb566c9\": rpc error: code = NotFound desc = could not find container \"8f358bf3cc6bbb9d3255aee47a03042bcf9ec80bc6af8e61a2bbb25d0bb566c9\": container with ID starting with 8f358bf3cc6bbb9d3255aee47a03042bcf9ec80bc6af8e61a2bbb25d0bb566c9 not found: ID does not exist" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.593348 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.593388 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-log-httpd\") pod \"ceilometer-0\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.593407 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-run-httpd\") pod \"ceilometer-0\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.593427 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-scripts\") pod \"ceilometer-0\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.593936 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-run-httpd\") pod \"ceilometer-0\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.594001 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqjpw\" (UniqueName: \"kubernetes.io/projected/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-kube-api-access-vqjpw\") pod \"ceilometer-0\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.594008 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-log-httpd\") pod \"ceilometer-0\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " pod="openstack/ceilometer-0" Nov 21 
11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.594866 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.594953 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-config-data\") pod \"ceilometer-0\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.599792 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.600957 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-scripts\") pod \"ceilometer-0\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.601009 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-config-data\") pod \"ceilometer-0\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.604891 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.617203 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqjpw\" (UniqueName: \"kubernetes.io/projected/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-kube-api-access-vqjpw\") pod \"ceilometer-0\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " pod="openstack/ceilometer-0" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.774495 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c5326d6-e3d9-4d8d-a839-78c68fe74851" path="/var/lib/kubelet/pods/7c5326d6-e3d9-4d8d-a839-78c68fe74851/volumes" Nov 21 11:26:43 crc kubenswrapper[4972]: I1121 11:26:43.837135 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 11:26:44 crc kubenswrapper[4972]: I1121 11:26:44.319508 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 11:26:44 crc kubenswrapper[4972]: I1121 11:26:44.420167 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4","Type":"ContainerStarted","Data":"01f9fd83a09607d74f0bc6f252e5dd53f246348cdc197280180be216f761c898"} Nov 21 11:26:45 crc kubenswrapper[4972]: I1121 11:26:45.453215 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4","Type":"ContainerStarted","Data":"6c4095f09d1e1e1552d0f4d2b38651f5aae2c7a34f72a564ddfb2d90ee605439"} Nov 21 11:26:45 crc kubenswrapper[4972]: I1121 11:26:45.460156 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eef56310-c266-4fdc-b4c1-fb03319b5196","Type":"ContainerStarted","Data":"f377b226d3993368459bfe3afe24af4e4746b9a8b4da9d65f265b579d9c37de7"} Nov 21 11:26:46 crc kubenswrapper[4972]: I1121 11:26:46.475593 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4","Type":"ContainerStarted","Data":"818e46e0a2bd33df5b1ecebcdde6fc3971cc9224573d212bb5198b7c85af1545"} Nov 21 11:26:46 crc kubenswrapper[4972]: I1121 11:26:46.479295 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"eef56310-c266-4fdc-b4c1-fb03319b5196","Type":"ContainerStarted","Data":"7d393f53245f987f21ad06635bef3e859964128379ce04c91702e16621da3ec7"} Nov 21 11:26:46 crc kubenswrapper[4972]: I1121 11:26:46.513804 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.980735239 podStartE2EDuration="7.513772239s" podCreationTimestamp="2025-11-21 11:26:39 +0000 UTC" firstStartedPulling="2025-11-21 11:26:40.465936637 +0000 UTC m=+6345.575079135" lastFinishedPulling="2025-11-21 11:26:45.998973637 +0000 UTC m=+6351.108116135" observedRunningTime="2025-11-21 11:26:46.498919218 +0000 UTC m=+6351.608061716" watchObservedRunningTime="2025-11-21 11:26:46.513772239 +0000 UTC m=+6351.622914737" Nov 21 11:26:47 crc kubenswrapper[4972]: I1121 11:26:47.493795 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4","Type":"ContainerStarted","Data":"ca2779d561b3e768c95e0f83428e7edf3f86ec97462e1660fe4effc5a0d2e973"} Nov 21 11:26:48 crc kubenswrapper[4972]: I1121 11:26:48.504273 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4","Type":"ContainerStarted","Data":"7cb244b7a2d2e17e268e13a453edfb0f88ea14da8d4ec7afe76ce1d3512d4dc9"} Nov 21 11:26:48 crc kubenswrapper[4972]: I1121 11:26:48.504566 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 11:26:48 crc kubenswrapper[4972]: I1121 11:26:48.534337 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.954222253 podStartE2EDuration="5.534310311s" podCreationTimestamp="2025-11-21 11:26:43 +0000 UTC" firstStartedPulling="2025-11-21 11:26:44.379898634 +0000 UTC m=+6349.489041132" lastFinishedPulling="2025-11-21 11:26:47.959986692 +0000 UTC m=+6353.069129190" observedRunningTime="2025-11-21 11:26:48.530368837 +0000 UTC 
m=+6353.639511345" watchObservedRunningTime="2025-11-21 11:26:48.534310311 +0000 UTC m=+6353.643452819" Nov 21 11:26:54 crc kubenswrapper[4972]: I1121 11:26:54.674144 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-b66hk"] Nov 21 11:26:54 crc kubenswrapper[4972]: I1121 11:26:54.679297 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-b66hk" Nov 21 11:26:54 crc kubenswrapper[4972]: I1121 11:26:54.689928 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-b66hk"] Nov 21 11:26:54 crc kubenswrapper[4972]: I1121 11:26:54.777192 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-0907-account-create-sr28f"] Nov 21 11:26:54 crc kubenswrapper[4972]: I1121 11:26:54.778953 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-0907-account-create-sr28f" Nov 21 11:26:54 crc kubenswrapper[4972]: I1121 11:26:54.782144 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Nov 21 11:26:54 crc kubenswrapper[4972]: I1121 11:26:54.785965 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drwgb\" (UniqueName: \"kubernetes.io/projected/c2a1e27a-2332-4812-8449-5dc621f868be-kube-api-access-drwgb\") pod \"manila-db-create-b66hk\" (UID: \"c2a1e27a-2332-4812-8449-5dc621f868be\") " pod="openstack/manila-db-create-b66hk" Nov 21 11:26:54 crc kubenswrapper[4972]: I1121 11:26:54.786044 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2a1e27a-2332-4812-8449-5dc621f868be-operator-scripts\") pod \"manila-db-create-b66hk\" (UID: \"c2a1e27a-2332-4812-8449-5dc621f868be\") " pod="openstack/manila-db-create-b66hk" Nov 21 11:26:54 crc kubenswrapper[4972]: I1121 11:26:54.803253 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-0907-account-create-sr28f"] Nov 21 11:26:54 crc kubenswrapper[4972]: I1121 11:26:54.889576 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2a1e27a-2332-4812-8449-5dc621f868be-operator-scripts\") pod \"manila-db-create-b66hk\" (UID: \"c2a1e27a-2332-4812-8449-5dc621f868be\") " pod="openstack/manila-db-create-b66hk" Nov 21 11:26:54 crc kubenswrapper[4972]: I1121 11:26:54.889752 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfpdw\" (UniqueName: \"kubernetes.io/projected/db4e2872-38ce-4a6d-95cb-a96009c830eb-kube-api-access-vfpdw\") pod \"manila-0907-account-create-sr28f\" (UID: \"db4e2872-38ce-4a6d-95cb-a96009c830eb\") " pod="openstack/manila-0907-account-create-sr28f" Nov 21 11:26:54 crc kubenswrapper[4972]: I1121 11:26:54.890211 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db4e2872-38ce-4a6d-95cb-a96009c830eb-operator-scripts\") pod \"manila-0907-account-create-sr28f\" (UID: \"db4e2872-38ce-4a6d-95cb-a96009c830eb\") " pod="openstack/manila-0907-account-create-sr28f" Nov 21 11:26:54 crc kubenswrapper[4972]: I1121 11:26:54.890517 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drwgb\" (UniqueName: 
\"kubernetes.io/projected/c2a1e27a-2332-4812-8449-5dc621f868be-kube-api-access-drwgb\") pod \"manila-db-create-b66hk\" (UID: \"c2a1e27a-2332-4812-8449-5dc621f868be\") " pod="openstack/manila-db-create-b66hk" Nov 21 11:26:54 crc kubenswrapper[4972]: I1121 11:26:54.890748 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2a1e27a-2332-4812-8449-5dc621f868be-operator-scripts\") pod \"manila-db-create-b66hk\" (UID: \"c2a1e27a-2332-4812-8449-5dc621f868be\") " pod="openstack/manila-db-create-b66hk" Nov 21 11:26:54 crc kubenswrapper[4972]: I1121 11:26:54.921526 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drwgb\" (UniqueName: \"kubernetes.io/projected/c2a1e27a-2332-4812-8449-5dc621f868be-kube-api-access-drwgb\") pod \"manila-db-create-b66hk\" (UID: \"c2a1e27a-2332-4812-8449-5dc621f868be\") " pod="openstack/manila-db-create-b66hk" Nov 21 11:26:54 crc kubenswrapper[4972]: I1121 11:26:54.992592 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db4e2872-38ce-4a6d-95cb-a96009c830eb-operator-scripts\") pod \"manila-0907-account-create-sr28f\" (UID: \"db4e2872-38ce-4a6d-95cb-a96009c830eb\") " pod="openstack/manila-0907-account-create-sr28f" Nov 21 11:26:54 crc kubenswrapper[4972]: I1121 11:26:54.993218 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfpdw\" (UniqueName: \"kubernetes.io/projected/db4e2872-38ce-4a6d-95cb-a96009c830eb-kube-api-access-vfpdw\") pod \"manila-0907-account-create-sr28f\" (UID: \"db4e2872-38ce-4a6d-95cb-a96009c830eb\") " pod="openstack/manila-0907-account-create-sr28f" Nov 21 11:26:54 crc kubenswrapper[4972]: I1121 11:26:54.994199 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db4e2872-38ce-4a6d-95cb-a96009c830eb-operator-scripts\") pod \"manila-0907-account-create-sr28f\" (UID: \"db4e2872-38ce-4a6d-95cb-a96009c830eb\") " pod="openstack/manila-0907-account-create-sr28f" Nov 21 11:26:55 crc kubenswrapper[4972]: I1121 11:26:55.002335 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-b66hk" Nov 21 11:26:55 crc kubenswrapper[4972]: I1121 11:26:55.025520 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfpdw\" (UniqueName: \"kubernetes.io/projected/db4e2872-38ce-4a6d-95cb-a96009c830eb-kube-api-access-vfpdw\") pod \"manila-0907-account-create-sr28f\" (UID: \"db4e2872-38ce-4a6d-95cb-a96009c830eb\") " pod="openstack/manila-0907-account-create-sr28f" Nov 21 11:26:55 crc kubenswrapper[4972]: I1121 11:26:55.099553 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-0907-account-create-sr28f" Nov 21 11:26:55 crc kubenswrapper[4972]: I1121 11:26:55.614678 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-b66hk"] Nov 21 11:26:55 crc kubenswrapper[4972]: W1121 11:26:55.629286 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2a1e27a_2332_4812_8449_5dc621f868be.slice/crio-1bd2c980dd9eb32b6acb12edf846a0e52cbe1e5c7396e59b37d516bcba822a82 WatchSource:0}: Error finding container 1bd2c980dd9eb32b6acb12edf846a0e52cbe1e5c7396e59b37d516bcba822a82: Status 404 returned error can't find the container with id 1bd2c980dd9eb32b6acb12edf846a0e52cbe1e5c7396e59b37d516bcba822a82 Nov 21 11:26:55 crc kubenswrapper[4972]: I1121 11:26:55.835462 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-0907-account-create-sr28f"] Nov 21 11:26:55 crc kubenswrapper[4972]: I1121 11:26:55.838423 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Nov 21 11:26:56 crc kubenswrapper[4972]: I1121 11:26:56.178763 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:26:56 crc kubenswrapper[4972]: I1121 11:26:56.179112 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:26:56 crc kubenswrapper[4972]: I1121 11:26:56.601403 4972 generic.go:334] "Generic (PLEG): container finished" podID="db4e2872-38ce-4a6d-95cb-a96009c830eb" containerID="f4cf196adb4448c010da6887decd6675960faeef07239438e4891e23f5112fbc" exitCode=0 Nov 21 11:26:56 crc kubenswrapper[4972]: I1121 11:26:56.601525 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-0907-account-create-sr28f" event={"ID":"db4e2872-38ce-4a6d-95cb-a96009c830eb","Type":"ContainerDied","Data":"f4cf196adb4448c010da6887decd6675960faeef07239438e4891e23f5112fbc"} Nov 21 11:26:56 crc kubenswrapper[4972]: I1121 11:26:56.601571 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-0907-account-create-sr28f" event={"ID":"db4e2872-38ce-4a6d-95cb-a96009c830eb","Type":"ContainerStarted","Data":"56802d1067e466c637cbbfc412030f9c5cacd10ea1a7748dcf72e7fb42807e70"} Nov 21 11:26:56 crc kubenswrapper[4972]: I1121 11:26:56.604292 4972 generic.go:334] "Generic (PLEG): container finished" podID="c2a1e27a-2332-4812-8449-5dc621f868be" containerID="ad7c4617fc6180c3575468abe5dfd86753c63ba53c51af061fdeb35e10443512" exitCode=0 Nov 21 11:26:56 crc kubenswrapper[4972]: I1121 11:26:56.604361 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-b66hk" event={"ID":"c2a1e27a-2332-4812-8449-5dc621f868be","Type":"ContainerDied","Data":"ad7c4617fc6180c3575468abe5dfd86753c63ba53c51af061fdeb35e10443512"} Nov 21 11:26:56 crc kubenswrapper[4972]: I1121 11:26:56.604402 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-b66hk" 
event={"ID":"c2a1e27a-2332-4812-8449-5dc621f868be","Type":"ContainerStarted","Data":"1bd2c980dd9eb32b6acb12edf846a0e52cbe1e5c7396e59b37d516bcba822a82"} Nov 21 11:26:58 crc kubenswrapper[4972]: I1121 11:26:58.176992 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-0907-account-create-sr28f" Nov 21 11:26:58 crc kubenswrapper[4972]: I1121 11:26:58.187365 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfpdw\" (UniqueName: \"kubernetes.io/projected/db4e2872-38ce-4a6d-95cb-a96009c830eb-kube-api-access-vfpdw\") pod \"db4e2872-38ce-4a6d-95cb-a96009c830eb\" (UID: \"db4e2872-38ce-4a6d-95cb-a96009c830eb\") " Nov 21 11:26:58 crc kubenswrapper[4972]: I1121 11:26:58.187456 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db4e2872-38ce-4a6d-95cb-a96009c830eb-operator-scripts\") pod \"db4e2872-38ce-4a6d-95cb-a96009c830eb\" (UID: \"db4e2872-38ce-4a6d-95cb-a96009c830eb\") " Nov 21 11:26:58 crc kubenswrapper[4972]: I1121 11:26:58.188374 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db4e2872-38ce-4a6d-95cb-a96009c830eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db4e2872-38ce-4a6d-95cb-a96009c830eb" (UID: "db4e2872-38ce-4a6d-95cb-a96009c830eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:26:58 crc kubenswrapper[4972]: I1121 11:26:58.199130 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db4e2872-38ce-4a6d-95cb-a96009c830eb-kube-api-access-vfpdw" (OuterVolumeSpecName: "kube-api-access-vfpdw") pod "db4e2872-38ce-4a6d-95cb-a96009c830eb" (UID: "db4e2872-38ce-4a6d-95cb-a96009c830eb"). InnerVolumeSpecName "kube-api-access-vfpdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:26:58 crc kubenswrapper[4972]: I1121 11:26:58.290283 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfpdw\" (UniqueName: \"kubernetes.io/projected/db4e2872-38ce-4a6d-95cb-a96009c830eb-kube-api-access-vfpdw\") on node \"crc\" DevicePath \"\"" Nov 21 11:26:58 crc kubenswrapper[4972]: I1121 11:26:58.290311 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db4e2872-38ce-4a6d-95cb-a96009c830eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:26:58 crc kubenswrapper[4972]: I1121 11:26:58.297273 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-b66hk" Nov 21 11:26:58 crc kubenswrapper[4972]: I1121 11:26:58.391436 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2a1e27a-2332-4812-8449-5dc621f868be-operator-scripts\") pod \"c2a1e27a-2332-4812-8449-5dc621f868be\" (UID: \"c2a1e27a-2332-4812-8449-5dc621f868be\") " Nov 21 11:26:58 crc kubenswrapper[4972]: I1121 11:26:58.391642 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drwgb\" (UniqueName: \"kubernetes.io/projected/c2a1e27a-2332-4812-8449-5dc621f868be-kube-api-access-drwgb\") pod \"c2a1e27a-2332-4812-8449-5dc621f868be\" (UID: \"c2a1e27a-2332-4812-8449-5dc621f868be\") " Nov 21 11:26:58 crc kubenswrapper[4972]: I1121 11:26:58.392044 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2a1e27a-2332-4812-8449-5dc621f868be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c2a1e27a-2332-4812-8449-5dc621f868be" (UID: "c2a1e27a-2332-4812-8449-5dc621f868be"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:26:58 crc kubenswrapper[4972]: I1121 11:26:58.392348 4972 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2a1e27a-2332-4812-8449-5dc621f868be-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:26:58 crc kubenswrapper[4972]: I1121 11:26:58.395482 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2a1e27a-2332-4812-8449-5dc621f868be-kube-api-access-drwgb" (OuterVolumeSpecName: "kube-api-access-drwgb") pod "c2a1e27a-2332-4812-8449-5dc621f868be" (UID: "c2a1e27a-2332-4812-8449-5dc621f868be"). InnerVolumeSpecName "kube-api-access-drwgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:26:58 crc kubenswrapper[4972]: I1121 11:26:58.493533 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drwgb\" (UniqueName: \"kubernetes.io/projected/c2a1e27a-2332-4812-8449-5dc621f868be-kube-api-access-drwgb\") on node \"crc\" DevicePath \"\"" Nov 21 11:26:58 crc kubenswrapper[4972]: I1121 11:26:58.625210 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-b66hk" event={"ID":"c2a1e27a-2332-4812-8449-5dc621f868be","Type":"ContainerDied","Data":"1bd2c980dd9eb32b6acb12edf846a0e52cbe1e5c7396e59b37d516bcba822a82"} Nov 21 11:26:58 crc kubenswrapper[4972]: I1121 11:26:58.625282 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bd2c980dd9eb32b6acb12edf846a0e52cbe1e5c7396e59b37d516bcba822a82" Nov 21 11:26:58 crc kubenswrapper[4972]: I1121 11:26:58.625231 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-b66hk" Nov 21 11:26:58 crc kubenswrapper[4972]: I1121 11:26:58.626372 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-0907-account-create-sr28f" event={"ID":"db4e2872-38ce-4a6d-95cb-a96009c830eb","Type":"ContainerDied","Data":"56802d1067e466c637cbbfc412030f9c5cacd10ea1a7748dcf72e7fb42807e70"} Nov 21 11:26:58 crc kubenswrapper[4972]: I1121 11:26:58.626408 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56802d1067e466c637cbbfc412030f9c5cacd10ea1a7748dcf72e7fb42807e70" Nov 21 11:26:58 crc kubenswrapper[4972]: I1121 11:26:58.626391 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-0907-account-create-sr28f" Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.087455 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-fzq6w"] Nov 21 11:27:00 crc kubenswrapper[4972]: E1121 11:27:00.088517 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4e2872-38ce-4a6d-95cb-a96009c830eb" containerName="mariadb-account-create" Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.088542 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4e2872-38ce-4a6d-95cb-a96009c830eb" containerName="mariadb-account-create" Nov 21 11:27:00 crc kubenswrapper[4972]: E1121 11:27:00.088566 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2a1e27a-2332-4812-8449-5dc621f868be" containerName="mariadb-database-create" Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.088577 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a1e27a-2332-4812-8449-5dc621f868be" containerName="mariadb-database-create" Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.089224 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4e2872-38ce-4a6d-95cb-a96009c830eb" containerName="mariadb-account-create" Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.089344 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2a1e27a-2332-4812-8449-5dc621f868be" containerName="mariadb-database-create" Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.090751 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-fzq6w" Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.094173 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-vqf4v" Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.096086 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.101872 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-fzq6w"] Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.137123 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqdtc\" (UniqueName: \"kubernetes.io/projected/fc89a5eb-45b7-4f6f-83f9-0e3930716033-kube-api-access-mqdtc\") pod \"manila-db-sync-fzq6w\" (UID: \"fc89a5eb-45b7-4f6f-83f9-0e3930716033\") " pod="openstack/manila-db-sync-fzq6w" Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.137285 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc89a5eb-45b7-4f6f-83f9-0e3930716033-combined-ca-bundle\") pod \"manila-db-sync-fzq6w\" (UID: \"fc89a5eb-45b7-4f6f-83f9-0e3930716033\") " pod="openstack/manila-db-sync-fzq6w" Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.137400 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc89a5eb-45b7-4f6f-83f9-0e3930716033-config-data\") pod \"manila-db-sync-fzq6w\" (UID: \"fc89a5eb-45b7-4f6f-83f9-0e3930716033\") " pod="openstack/manila-db-sync-fzq6w" Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.137641 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fc89a5eb-45b7-4f6f-83f9-0e3930716033-job-config-data\") pod \"manila-db-sync-fzq6w\" (UID: \"fc89a5eb-45b7-4f6f-83f9-0e3930716033\") " pod="openstack/manila-db-sync-fzq6w" Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.239960 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqdtc\" (UniqueName: \"kubernetes.io/projected/fc89a5eb-45b7-4f6f-83f9-0e3930716033-kube-api-access-mqdtc\") pod \"manila-db-sync-fzq6w\" (UID: \"fc89a5eb-45b7-4f6f-83f9-0e3930716033\") " pod="openstack/manila-db-sync-fzq6w" Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.240120 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc89a5eb-45b7-4f6f-83f9-0e3930716033-combined-ca-bundle\") pod \"manila-db-sync-fzq6w\" (UID: \"fc89a5eb-45b7-4f6f-83f9-0e3930716033\") " pod="openstack/manila-db-sync-fzq6w" Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.240238 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc89a5eb-45b7-4f6f-83f9-0e3930716033-config-data\") pod \"manila-db-sync-fzq6w\" (UID: \"fc89a5eb-45b7-4f6f-83f9-0e3930716033\") " pod="openstack/manila-db-sync-fzq6w" Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.240438 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fc89a5eb-45b7-4f6f-83f9-0e3930716033-job-config-data\") pod \"manila-db-sync-fzq6w\" (UID: 
\"fc89a5eb-45b7-4f6f-83f9-0e3930716033\") " pod="openstack/manila-db-sync-fzq6w" Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.251487 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fc89a5eb-45b7-4f6f-83f9-0e3930716033-job-config-data\") pod \"manila-db-sync-fzq6w\" (UID: \"fc89a5eb-45b7-4f6f-83f9-0e3930716033\") " pod="openstack/manila-db-sync-fzq6w" Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.251697 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc89a5eb-45b7-4f6f-83f9-0e3930716033-combined-ca-bundle\") pod \"manila-db-sync-fzq6w\" (UID: \"fc89a5eb-45b7-4f6f-83f9-0e3930716033\") " pod="openstack/manila-db-sync-fzq6w" Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.251861 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc89a5eb-45b7-4f6f-83f9-0e3930716033-config-data\") pod \"manila-db-sync-fzq6w\" (UID: \"fc89a5eb-45b7-4f6f-83f9-0e3930716033\") " pod="openstack/manila-db-sync-fzq6w" Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.270589 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqdtc\" (UniqueName: \"kubernetes.io/projected/fc89a5eb-45b7-4f6f-83f9-0e3930716033-kube-api-access-mqdtc\") pod \"manila-db-sync-fzq6w\" (UID: \"fc89a5eb-45b7-4f6f-83f9-0e3930716033\") " pod="openstack/manila-db-sync-fzq6w" Nov 21 11:27:00 crc kubenswrapper[4972]: I1121 11:27:00.422217 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-fzq6w" Nov 21 11:27:01 crc kubenswrapper[4972]: I1121 11:27:01.538174 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-fzq6w"] Nov 21 11:27:01 crc kubenswrapper[4972]: I1121 11:27:01.704389 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-fzq6w" event={"ID":"fc89a5eb-45b7-4f6f-83f9-0e3930716033","Type":"ContainerStarted","Data":"75083b9868c95ee7a19728bbb083b9b091d4d0be9887f343873b26e923a8e7c3"} Nov 21 11:27:08 crc kubenswrapper[4972]: I1121 11:27:08.792519 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-fzq6w" event={"ID":"fc89a5eb-45b7-4f6f-83f9-0e3930716033","Type":"ContainerStarted","Data":"782b7593fe90a9e77ed9f4af936d2a089484907834cdde791139d04a327117d0"} Nov 21 11:27:08 crc kubenswrapper[4972]: I1121 11:27:08.815521 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-fzq6w" podStartSLOduration=3.028916421 podStartE2EDuration="8.815492345s" podCreationTimestamp="2025-11-21 11:27:00 +0000 UTC" firstStartedPulling="2025-11-21 11:27:01.495442791 +0000 UTC m=+6366.604585309" lastFinishedPulling="2025-11-21 11:27:07.282018735 +0000 UTC m=+6372.391161233" observedRunningTime="2025-11-21 11:27:08.808760428 +0000 UTC m=+6373.917902956" watchObservedRunningTime="2025-11-21 11:27:08.815492345 +0000 UTC m=+6373.924634853" Nov 21 11:27:10 crc kubenswrapper[4972]: I1121 11:27:10.826552 4972 generic.go:334] "Generic (PLEG): container finished" podID="fc89a5eb-45b7-4f6f-83f9-0e3930716033" containerID="782b7593fe90a9e77ed9f4af936d2a089484907834cdde791139d04a327117d0" exitCode=0 Nov 21 11:27:10 crc kubenswrapper[4972]: I1121 11:27:10.827787 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-fzq6w" 
event={"ID":"fc89a5eb-45b7-4f6f-83f9-0e3930716033","Type":"ContainerDied","Data":"782b7593fe90a9e77ed9f4af936d2a089484907834cdde791139d04a327117d0"} Nov 21 11:27:12 crc kubenswrapper[4972]: I1121 11:27:12.600643 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-fzq6w" Nov 21 11:27:12 crc kubenswrapper[4972]: I1121 11:27:12.740778 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc89a5eb-45b7-4f6f-83f9-0e3930716033-combined-ca-bundle\") pod \"fc89a5eb-45b7-4f6f-83f9-0e3930716033\" (UID: \"fc89a5eb-45b7-4f6f-83f9-0e3930716033\") " Nov 21 11:27:12 crc kubenswrapper[4972]: I1121 11:27:12.740885 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc89a5eb-45b7-4f6f-83f9-0e3930716033-config-data\") pod \"fc89a5eb-45b7-4f6f-83f9-0e3930716033\" (UID: \"fc89a5eb-45b7-4f6f-83f9-0e3930716033\") " Nov 21 11:27:12 crc kubenswrapper[4972]: I1121 11:27:12.740923 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqdtc\" (UniqueName: \"kubernetes.io/projected/fc89a5eb-45b7-4f6f-83f9-0e3930716033-kube-api-access-mqdtc\") pod \"fc89a5eb-45b7-4f6f-83f9-0e3930716033\" (UID: \"fc89a5eb-45b7-4f6f-83f9-0e3930716033\") " Nov 21 11:27:12 crc kubenswrapper[4972]: I1121 11:27:12.741034 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fc89a5eb-45b7-4f6f-83f9-0e3930716033-job-config-data\") pod \"fc89a5eb-45b7-4f6f-83f9-0e3930716033\" (UID: \"fc89a5eb-45b7-4f6f-83f9-0e3930716033\") " Nov 21 11:27:12 crc kubenswrapper[4972]: I1121 11:27:12.749234 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc89a5eb-45b7-4f6f-83f9-0e3930716033-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "fc89a5eb-45b7-4f6f-83f9-0e3930716033" (UID: "fc89a5eb-45b7-4f6f-83f9-0e3930716033"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:27:12 crc kubenswrapper[4972]: I1121 11:27:12.750387 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc89a5eb-45b7-4f6f-83f9-0e3930716033-kube-api-access-mqdtc" (OuterVolumeSpecName: "kube-api-access-mqdtc") pod "fc89a5eb-45b7-4f6f-83f9-0e3930716033" (UID: "fc89a5eb-45b7-4f6f-83f9-0e3930716033"). InnerVolumeSpecName "kube-api-access-mqdtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:27:12 crc kubenswrapper[4972]: I1121 11:27:12.777212 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc89a5eb-45b7-4f6f-83f9-0e3930716033-config-data" (OuterVolumeSpecName: "config-data") pod "fc89a5eb-45b7-4f6f-83f9-0e3930716033" (UID: "fc89a5eb-45b7-4f6f-83f9-0e3930716033"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:27:12 crc kubenswrapper[4972]: I1121 11:27:12.786132 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc89a5eb-45b7-4f6f-83f9-0e3930716033-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc89a5eb-45b7-4f6f-83f9-0e3930716033" (UID: "fc89a5eb-45b7-4f6f-83f9-0e3930716033"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:27:12 crc kubenswrapper[4972]: I1121 11:27:12.843945 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc89a5eb-45b7-4f6f-83f9-0e3930716033-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:12 crc kubenswrapper[4972]: I1121 11:27:12.843996 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc89a5eb-45b7-4f6f-83f9-0e3930716033-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:12 crc kubenswrapper[4972]: I1121 11:27:12.844016 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqdtc\" (UniqueName: \"kubernetes.io/projected/fc89a5eb-45b7-4f6f-83f9-0e3930716033-kube-api-access-mqdtc\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:12 crc kubenswrapper[4972]: I1121 11:27:12.844035 4972 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fc89a5eb-45b7-4f6f-83f9-0e3930716033-job-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:12 crc kubenswrapper[4972]: I1121 11:27:12.857514 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-fzq6w" event={"ID":"fc89a5eb-45b7-4f6f-83f9-0e3930716033","Type":"ContainerDied","Data":"75083b9868c95ee7a19728bbb083b9b091d4d0be9887f343873b26e923a8e7c3"} Nov 21 11:27:12 crc kubenswrapper[4972]: I1121 11:27:12.857562 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75083b9868c95ee7a19728bbb083b9b091d4d0be9887f343873b26e923a8e7c3" Nov 21 11:27:12 crc kubenswrapper[4972]: I1121 11:27:12.857623 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-fzq6w" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.371858 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Nov 21 11:27:13 crc kubenswrapper[4972]: E1121 11:27:13.372505 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc89a5eb-45b7-4f6f-83f9-0e3930716033" containerName="manila-db-sync" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.372523 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc89a5eb-45b7-4f6f-83f9-0e3930716033" containerName="manila-db-sync" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.372733 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc89a5eb-45b7-4f6f-83f9-0e3930716033" containerName="manila-db-sync" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.373942 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.380906 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.381118 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.387985 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.388203 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-vqf4v" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.404257 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.407010 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.412449 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.414603 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.429935 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.481151 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566dd75fd7-4bkbp"] Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.504781 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.570346 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566dd75fd7-4bkbp"] Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.577771 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ed9e47f-280b-42d2-916c-ae5c437794ed-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.577883 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db826\" (UniqueName: \"kubernetes.io/projected/d1a24497-62c6-4d99-8b51-cab3f4dbcdf6-kube-api-access-db826\") pod \"manila-scheduler-0\" (UID: \"d1a24497-62c6-4d99-8b51-cab3f4dbcdf6\") " pod="openstack/manila-scheduler-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.577912 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed9e47f-280b-42d2-916c-ae5c437794ed-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.578103 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ed9e47f-280b-42d2-916c-ae5c437794ed-scripts\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.578137 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a24497-62c6-4d99-8b51-cab3f4dbcdf6-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"d1a24497-62c6-4d99-8b51-cab3f4dbcdf6\") " pod="openstack/manila-scheduler-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.578164 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1a24497-62c6-4d99-8b51-cab3f4dbcdf6-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"d1a24497-62c6-4d99-8b51-cab3f4dbcdf6\") " pod="openstack/manila-scheduler-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.578221 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ed9e47f-280b-42d2-916c-ae5c437794ed-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.578280 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1a24497-62c6-4d99-8b51-cab3f4dbcdf6-scripts\") pod \"manila-scheduler-0\" (UID: \"d1a24497-62c6-4d99-8b51-cab3f4dbcdf6\") " pod="openstack/manila-scheduler-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.578301 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: 
\"kubernetes.io/host-path/2ed9e47f-280b-42d2-916c-ae5c437794ed-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.578344 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a24497-62c6-4d99-8b51-cab3f4dbcdf6-config-data\") pod \"manila-scheduler-0\" (UID: \"d1a24497-62c6-4d99-8b51-cab3f4dbcdf6\") " pod="openstack/manila-scheduler-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.578423 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1a24497-62c6-4d99-8b51-cab3f4dbcdf6-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"d1a24497-62c6-4d99-8b51-cab3f4dbcdf6\") " pod="openstack/manila-scheduler-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.578460 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cbzs\" (UniqueName: \"kubernetes.io/projected/2ed9e47f-280b-42d2-916c-ae5c437794ed-kube-api-access-6cbzs\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.580169 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed9e47f-280b-42d2-916c-ae5c437794ed-config-data\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.582166 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2ed9e47f-280b-42d2-916c-ae5c437794ed-ceph\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.666149 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.668027 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.670554 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686213 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ed9e47f-280b-42d2-916c-ae5c437794ed-scripts\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686275 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a24497-62c6-4d99-8b51-cab3f4dbcdf6-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"d1a24497-62c6-4d99-8b51-cab3f4dbcdf6\") " pod="openstack/manila-scheduler-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686303 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1a24497-62c6-4d99-8b51-cab3f4dbcdf6-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"d1a24497-62c6-4d99-8b51-cab3f4dbcdf6\") " pod="openstack/manila-scheduler-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686379 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ed9e47f-280b-42d2-916c-ae5c437794ed-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686406 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-ovsdbserver-sb\") pod \"dnsmasq-dns-566dd75fd7-4bkbp\" (UID: \"1df96663-72a8-444d-af18-be73e7c8e955\") " pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686455 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqgtf\" (UniqueName: \"kubernetes.io/projected/1df96663-72a8-444d-af18-be73e7c8e955-kube-api-access-sqgtf\") pod \"dnsmasq-dns-566dd75fd7-4bkbp\" (UID: \"1df96663-72a8-444d-af18-be73e7c8e955\") " pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686487 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1a24497-62c6-4d99-8b51-cab3f4dbcdf6-scripts\") pod \"manila-scheduler-0\" (UID: \"d1a24497-62c6-4d99-8b51-cab3f4dbcdf6\") " pod="openstack/manila-scheduler-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686509 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/2ed9e47f-280b-42d2-916c-ae5c437794ed-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686532 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp5nh\" (UniqueName: \"kubernetes.io/projected/256a4f9c-eb68-4be2-b6a9-eb534faa149d-kube-api-access-vp5nh\") pod 
\"manila-api-0\" (UID: \"256a4f9c-eb68-4be2-b6a9-eb534faa149d\") " pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686556 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-dns-svc\") pod \"dnsmasq-dns-566dd75fd7-4bkbp\" (UID: \"1df96663-72a8-444d-af18-be73e7c8e955\") " pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686594 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a24497-62c6-4d99-8b51-cab3f4dbcdf6-config-data\") pod \"manila-scheduler-0\" (UID: \"d1a24497-62c6-4d99-8b51-cab3f4dbcdf6\") " pod="openstack/manila-scheduler-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686626 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-config\") pod \"dnsmasq-dns-566dd75fd7-4bkbp\" (UID: \"1df96663-72a8-444d-af18-be73e7c8e955\") " pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686652 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256a4f9c-eb68-4be2-b6a9-eb534faa149d-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"256a4f9c-eb68-4be2-b6a9-eb534faa149d\") " pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686688 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/256a4f9c-eb68-4be2-b6a9-eb534faa149d-logs\") pod \"manila-api-0\" (UID: \"256a4f9c-eb68-4be2-b6a9-eb534faa149d\") " pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686713 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1a24497-62c6-4d99-8b51-cab3f4dbcdf6-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"d1a24497-62c6-4d99-8b51-cab3f4dbcdf6\") " pod="openstack/manila-scheduler-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686749 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cbzs\" (UniqueName: \"kubernetes.io/projected/2ed9e47f-280b-42d2-916c-ae5c437794ed-kube-api-access-6cbzs\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686772 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed9e47f-280b-42d2-916c-ae5c437794ed-config-data\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686820 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/256a4f9c-eb68-4be2-b6a9-eb534faa149d-etc-machine-id\") pod \"manila-api-0\" (UID: \"256a4f9c-eb68-4be2-b6a9-eb534faa149d\") " pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686865 4972 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2ed9e47f-280b-42d2-916c-ae5c437794ed-ceph\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686892 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ed9e47f-280b-42d2-916c-ae5c437794ed-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686929 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/256a4f9c-eb68-4be2-b6a9-eb534faa149d-scripts\") pod \"manila-api-0\" (UID: \"256a4f9c-eb68-4be2-b6a9-eb534faa149d\") " pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686949 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-ovsdbserver-nb\") pod \"dnsmasq-dns-566dd75fd7-4bkbp\" (UID: \"1df96663-72a8-444d-af18-be73e7c8e955\") " pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.686983 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db826\" (UniqueName: \"kubernetes.io/projected/d1a24497-62c6-4d99-8b51-cab3f4dbcdf6-kube-api-access-db826\") pod \"manila-scheduler-0\" (UID: \"d1a24497-62c6-4d99-8b51-cab3f4dbcdf6\") " pod="openstack/manila-scheduler-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.687012 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed9e47f-280b-42d2-916c-ae5c437794ed-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.687042 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/256a4f9c-eb68-4be2-b6a9-eb534faa149d-config-data-custom\") pod \"manila-api-0\" (UID: \"256a4f9c-eb68-4be2-b6a9-eb534faa149d\") " pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.687071 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/256a4f9c-eb68-4be2-b6a9-eb534faa149d-config-data\") pod \"manila-api-0\" (UID: \"256a4f9c-eb68-4be2-b6a9-eb534faa149d\") " pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.688133 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.693823 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/2ed9e47f-280b-42d2-916c-ae5c437794ed-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.694718 4972 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ed9e47f-280b-42d2-916c-ae5c437794ed-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.694739 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1a24497-62c6-4d99-8b51-cab3f4dbcdf6-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"d1a24497-62c6-4d99-8b51-cab3f4dbcdf6\") " pod="openstack/manila-scheduler-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.696482 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1a24497-62c6-4d99-8b51-cab3f4dbcdf6-scripts\") pod \"manila-scheduler-0\" (UID: \"d1a24497-62c6-4d99-8b51-cab3f4dbcdf6\") " pod="openstack/manila-scheduler-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.696913 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ed9e47f-280b-42d2-916c-ae5c437794ed-scripts\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.698395 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a24497-62c6-4d99-8b51-cab3f4dbcdf6-config-data\") pod \"manila-scheduler-0\" (UID: \"d1a24497-62c6-4d99-8b51-cab3f4dbcdf6\") " pod="openstack/manila-scheduler-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.699856 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a24497-62c6-4d99-8b51-cab3f4dbcdf6-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"d1a24497-62c6-4d99-8b51-cab3f4dbcdf6\") " pod="openstack/manila-scheduler-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.700071 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2ed9e47f-280b-42d2-916c-ae5c437794ed-ceph\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.706247 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ed9e47f-280b-42d2-916c-ae5c437794ed-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.712399 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1a24497-62c6-4d99-8b51-cab3f4dbcdf6-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"d1a24497-62c6-4d99-8b51-cab3f4dbcdf6\") " pod="openstack/manila-scheduler-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.712631 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed9e47f-280b-42d2-916c-ae5c437794ed-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc 
kubenswrapper[4972]: I1121 11:27:13.715078 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed9e47f-280b-42d2-916c-ae5c437794ed-config-data\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.716146 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cbzs\" (UniqueName: \"kubernetes.io/projected/2ed9e47f-280b-42d2-916c-ae5c437794ed-kube-api-access-6cbzs\") pod \"manila-share-share1-0\" (UID: \"2ed9e47f-280b-42d2-916c-ae5c437794ed\") " pod="openstack/manila-share-share1-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.716814 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db826\" (UniqueName: \"kubernetes.io/projected/d1a24497-62c6-4d99-8b51-cab3f4dbcdf6-kube-api-access-db826\") pod \"manila-scheduler-0\" (UID: \"d1a24497-62c6-4d99-8b51-cab3f4dbcdf6\") " pod="openstack/manila-scheduler-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.737935 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.788971 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/256a4f9c-eb68-4be2-b6a9-eb534faa149d-etc-machine-id\") pod \"manila-api-0\" (UID: \"256a4f9c-eb68-4be2-b6a9-eb534faa149d\") " pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.789356 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/256a4f9c-eb68-4be2-b6a9-eb534faa149d-scripts\") pod \"manila-api-0\" (UID: \"256a4f9c-eb68-4be2-b6a9-eb534faa149d\") " pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.789378 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-ovsdbserver-nb\") pod \"dnsmasq-dns-566dd75fd7-4bkbp\" (UID: \"1df96663-72a8-444d-af18-be73e7c8e955\") " pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.789412 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/256a4f9c-eb68-4be2-b6a9-eb534faa149d-config-data-custom\") pod \"manila-api-0\" (UID: \"256a4f9c-eb68-4be2-b6a9-eb534faa149d\") " pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.789430 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/256a4f9c-eb68-4be2-b6a9-eb534faa149d-config-data\") pod \"manila-api-0\" (UID: \"256a4f9c-eb68-4be2-b6a9-eb534faa149d\") " pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.789519 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-ovsdbserver-sb\") pod \"dnsmasq-dns-566dd75fd7-4bkbp\" (UID: \"1df96663-72a8-444d-af18-be73e7c8e955\") " pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.789547 4972 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sqgtf\" (UniqueName: \"kubernetes.io/projected/1df96663-72a8-444d-af18-be73e7c8e955-kube-api-access-sqgtf\") pod \"dnsmasq-dns-566dd75fd7-4bkbp\" (UID: \"1df96663-72a8-444d-af18-be73e7c8e955\") " pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.789580 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp5nh\" (UniqueName: \"kubernetes.io/projected/256a4f9c-eb68-4be2-b6a9-eb534faa149d-kube-api-access-vp5nh\") pod \"manila-api-0\" (UID: \"256a4f9c-eb68-4be2-b6a9-eb534faa149d\") " pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.789601 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-dns-svc\") pod \"dnsmasq-dns-566dd75fd7-4bkbp\" (UID: \"1df96663-72a8-444d-af18-be73e7c8e955\") " pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.789634 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-config\") pod \"dnsmasq-dns-566dd75fd7-4bkbp\" (UID: \"1df96663-72a8-444d-af18-be73e7c8e955\") " pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.789663 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256a4f9c-eb68-4be2-b6a9-eb534faa149d-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"256a4f9c-eb68-4be2-b6a9-eb534faa149d\") " pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.789685 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/256a4f9c-eb68-4be2-b6a9-eb534faa149d-logs\") pod \"manila-api-0\" (UID: \"256a4f9c-eb68-4be2-b6a9-eb534faa149d\") " pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.790248 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/256a4f9c-eb68-4be2-b6a9-eb534faa149d-logs\") pod \"manila-api-0\" (UID: \"256a4f9c-eb68-4be2-b6a9-eb534faa149d\") " pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.791596 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-ovsdbserver-sb\") pod \"dnsmasq-dns-566dd75fd7-4bkbp\" (UID: \"1df96663-72a8-444d-af18-be73e7c8e955\") " pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.792887 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-dns-svc\") pod \"dnsmasq-dns-566dd75fd7-4bkbp\" (UID: \"1df96663-72a8-444d-af18-be73e7c8e955\") " pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.793441 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-config\") pod \"dnsmasq-dns-566dd75fd7-4bkbp\" (UID: \"1df96663-72a8-444d-af18-be73e7c8e955\") " 
pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.794047 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/256a4f9c-eb68-4be2-b6a9-eb534faa149d-etc-machine-id\") pod \"manila-api-0\" (UID: \"256a4f9c-eb68-4be2-b6a9-eb534faa149d\") " pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.794802 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-ovsdbserver-nb\") pod \"dnsmasq-dns-566dd75fd7-4bkbp\" (UID: \"1df96663-72a8-444d-af18-be73e7c8e955\") " pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.800135 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/256a4f9c-eb68-4be2-b6a9-eb534faa149d-config-data-custom\") pod \"manila-api-0\" (UID: \"256a4f9c-eb68-4be2-b6a9-eb534faa149d\") " pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.805779 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/256a4f9c-eb68-4be2-b6a9-eb534faa149d-scripts\") pod \"manila-api-0\" (UID: \"256a4f9c-eb68-4be2-b6a9-eb534faa149d\") " pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.806650 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/256a4f9c-eb68-4be2-b6a9-eb534faa149d-config-data\") pod \"manila-api-0\" (UID: \"256a4f9c-eb68-4be2-b6a9-eb534faa149d\") " pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.808426 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256a4f9c-eb68-4be2-b6a9-eb534faa149d-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"256a4f9c-eb68-4be2-b6a9-eb534faa149d\") " pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.809331 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp5nh\" (UniqueName: \"kubernetes.io/projected/256a4f9c-eb68-4be2-b6a9-eb534faa149d-kube-api-access-vp5nh\") pod \"manila-api-0\" (UID: \"256a4f9c-eb68-4be2-b6a9-eb534faa149d\") " pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.812881 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqgtf\" (UniqueName: \"kubernetes.io/projected/1df96663-72a8-444d-af18-be73e7c8e955-kube-api-access-sqgtf\") pod \"dnsmasq-dns-566dd75fd7-4bkbp\" (UID: \"1df96663-72a8-444d-af18-be73e7c8e955\") " pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.850664 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.877594 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.878040 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 21 11:27:13 crc kubenswrapper[4972]: I1121 11:27:13.992143 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 21 11:27:14 crc kubenswrapper[4972]: I1121 11:27:14.449244 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 21 11:27:14 crc kubenswrapper[4972]: I1121 11:27:14.570707 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566dd75fd7-4bkbp"] Nov 21 11:27:14 crc kubenswrapper[4972]: I1121 11:27:14.666547 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 21 11:27:14 crc kubenswrapper[4972]: W1121 11:27:14.678802 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod256a4f9c_eb68_4be2_b6a9_eb534faa149d.slice/crio-cfa1dc05c79e769b01ad7733af2948830baf5ff19a0bec47388bba5642a93083 WatchSource:0}: Error finding container cfa1dc05c79e769b01ad7733af2948830baf5ff19a0bec47388bba5642a93083: Status 404 returned error can't find the container with id cfa1dc05c79e769b01ad7733af2948830baf5ff19a0bec47388bba5642a93083 Nov 21 11:27:14 crc kubenswrapper[4972]: I1121 11:27:14.811971 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 21 11:27:14 crc kubenswrapper[4972]: I1121 11:27:14.891603 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"2ed9e47f-280b-42d2-916c-ae5c437794ed","Type":"ContainerStarted","Data":"d87cd8928ebadeb71cc87792202b1cf86a07dc72e25d2a0b6c06891fa31d59bd"} Nov 21 11:27:14 crc kubenswrapper[4972]: I1121 11:27:14.893203 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" event={"ID":"1df96663-72a8-444d-af18-be73e7c8e955","Type":"ContainerStarted","Data":"475a741c8203b766c98bdfe3afa886223d5b1378c0a000049fe798bce8066091"} Nov 21 11:27:14 crc kubenswrapper[4972]: I1121 11:27:14.901944 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d1a24497-62c6-4d99-8b51-cab3f4dbcdf6","Type":"ContainerStarted","Data":"d1de914cf54f216ebc5cdcd619834f56fd9a73769bba1946bc36cd6441729d4d"} Nov 21 11:27:14 crc kubenswrapper[4972]: I1121 11:27:14.907898 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"256a4f9c-eb68-4be2-b6a9-eb534faa149d","Type":"ContainerStarted","Data":"cfa1dc05c79e769b01ad7733af2948830baf5ff19a0bec47388bba5642a93083"} Nov 21 11:27:15 crc kubenswrapper[4972]: I1121 11:27:15.921770 4972 generic.go:334] "Generic (PLEG): container finished" podID="1df96663-72a8-444d-af18-be73e7c8e955" containerID="75b3112c94a1f4261d88b2d8449e1fac880b940f34d396b6c08cce2cb37f4890" exitCode=0 Nov 21 11:27:15 crc kubenswrapper[4972]: I1121 11:27:15.922265 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" event={"ID":"1df96663-72a8-444d-af18-be73e7c8e955","Type":"ContainerDied","Data":"75b3112c94a1f4261d88b2d8449e1fac880b940f34d396b6c08cce2cb37f4890"} Nov 21 11:27:15 crc kubenswrapper[4972]: I1121 11:27:15.926564 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d1a24497-62c6-4d99-8b51-cab3f4dbcdf6","Type":"ContainerStarted","Data":"f5cb9940c51d0d9730e11b91c6b861121449394f8013802ee4e2ea6f62c4b606"} Nov 21 11:27:15 crc kubenswrapper[4972]: I1121 11:27:15.930342 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" 
event={"ID":"256a4f9c-eb68-4be2-b6a9-eb534faa149d","Type":"ContainerStarted","Data":"bd949a6a68ec853888535da88a181f5114241a464a4b71b425ba461da53acfe4"} Nov 21 11:27:16 crc kubenswrapper[4972]: I1121 11:27:16.939985 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"256a4f9c-eb68-4be2-b6a9-eb534faa149d","Type":"ContainerStarted","Data":"1615208e0b5d2c8719d27b23b3dc073d9b71ab9edfdc50b2520bee0b4ed3dfaa"} Nov 21 11:27:16 crc kubenswrapper[4972]: I1121 11:27:16.940310 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 21 11:27:16 crc kubenswrapper[4972]: I1121 11:27:16.943158 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" event={"ID":"1df96663-72a8-444d-af18-be73e7c8e955","Type":"ContainerStarted","Data":"b8c5b03939b19c6a81bf6758df74974b3fa11ca7eb18ba48c9ae506da5d2a5e0"} Nov 21 11:27:16 crc kubenswrapper[4972]: I1121 11:27:16.943256 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" Nov 21 11:27:16 crc kubenswrapper[4972]: I1121 11:27:16.954203 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d1a24497-62c6-4d99-8b51-cab3f4dbcdf6","Type":"ContainerStarted","Data":"6f33b54360cb556d5e4c002f234fd4c6160e83290ec6412640cc938171e8e395"} Nov 21 11:27:16 crc kubenswrapper[4972]: I1121 11:27:16.962701 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.962690385 podStartE2EDuration="3.962690385s" podCreationTimestamp="2025-11-21 11:27:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:27:16.960562249 +0000 UTC m=+6382.069704777" watchObservedRunningTime="2025-11-21 11:27:16.962690385 +0000 UTC m=+6382.071832883" Nov 21 11:27:16 crc kubenswrapper[4972]: I1121 11:27:16.991751 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.205391906 podStartE2EDuration="3.991735768s" podCreationTimestamp="2025-11-21 11:27:13 +0000 UTC" firstStartedPulling="2025-11-21 11:27:14.463387458 +0000 UTC m=+6379.572529956" lastFinishedPulling="2025-11-21 11:27:15.24973132 +0000 UTC m=+6380.358873818" observedRunningTime="2025-11-21 11:27:16.982327282 +0000 UTC m=+6382.091469790" watchObservedRunningTime="2025-11-21 11:27:16.991735768 +0000 UTC m=+6382.100878266" Nov 21 11:27:17 crc kubenswrapper[4972]: I1121 11:27:17.003245 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" podStartSLOduration=4.003225711 podStartE2EDuration="4.003225711s" podCreationTimestamp="2025-11-21 11:27:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:27:16.998897787 +0000 UTC m=+6382.108040285" watchObservedRunningTime="2025-11-21 11:27:17.003225711 +0000 UTC m=+6382.112368209" Nov 21 11:27:17 crc kubenswrapper[4972]: I1121 11:27:17.886566 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 11:27:17 crc kubenswrapper[4972]: I1121 11:27:17.886850 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" containerName="ceilometer-central-agent" 
containerID="cri-o://6c4095f09d1e1e1552d0f4d2b38651f5aae2c7a34f72a564ddfb2d90ee605439" gracePeriod=30 Nov 21 11:27:17 crc kubenswrapper[4972]: I1121 11:27:17.887003 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" containerName="proxy-httpd" containerID="cri-o://7cb244b7a2d2e17e268e13a453edfb0f88ea14da8d4ec7afe76ce1d3512d4dc9" gracePeriod=30 Nov 21 11:27:17 crc kubenswrapper[4972]: I1121 11:27:17.887048 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" containerName="sg-core" containerID="cri-o://ca2779d561b3e768c95e0f83428e7edf3f86ec97462e1660fe4effc5a0d2e973" gracePeriod=30 Nov 21 11:27:17 crc kubenswrapper[4972]: I1121 11:27:17.887090 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" containerName="ceilometer-notification-agent" containerID="cri-o://818e46e0a2bd33df5b1ecebcdde6fc3971cc9224573d212bb5198b7c85af1545" gracePeriod=30 Nov 21 11:27:18 crc kubenswrapper[4972]: I1121 11:27:18.976650 4972 generic.go:334] "Generic (PLEG): container finished" podID="cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" containerID="7cb244b7a2d2e17e268e13a453edfb0f88ea14da8d4ec7afe76ce1d3512d4dc9" exitCode=0 Nov 21 11:27:18 crc kubenswrapper[4972]: I1121 11:27:18.976971 4972 generic.go:334] "Generic (PLEG): container finished" podID="cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" containerID="ca2779d561b3e768c95e0f83428e7edf3f86ec97462e1660fe4effc5a0d2e973" exitCode=2 Nov 21 11:27:18 crc kubenswrapper[4972]: I1121 11:27:18.976980 4972 generic.go:334] "Generic (PLEG): container finished" podID="cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" containerID="6c4095f09d1e1e1552d0f4d2b38651f5aae2c7a34f72a564ddfb2d90ee605439" exitCode=0 Nov 21 11:27:18 crc kubenswrapper[4972]: I1121 11:27:18.976736 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4","Type":"ContainerDied","Data":"7cb244b7a2d2e17e268e13a453edfb0f88ea14da8d4ec7afe76ce1d3512d4dc9"} Nov 21 11:27:18 crc kubenswrapper[4972]: I1121 11:27:18.977031 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4","Type":"ContainerDied","Data":"ca2779d561b3e768c95e0f83428e7edf3f86ec97462e1660fe4effc5a0d2e973"} Nov 21 11:27:18 crc kubenswrapper[4972]: I1121 11:27:18.977046 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4","Type":"ContainerDied","Data":"6c4095f09d1e1e1552d0f4d2b38651f5aae2c7a34f72a564ddfb2d90ee605439"} Nov 21 11:27:20 crc kubenswrapper[4972]: I1121 11:27:20.055092 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-g89pr"] Nov 21 11:27:20 crc kubenswrapper[4972]: I1121 11:27:20.071174 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e73e-account-create-xfjp5"] Nov 21 11:27:20 crc kubenswrapper[4972]: I1121 11:27:20.082941 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e73e-account-create-xfjp5"] Nov 21 11:27:20 crc kubenswrapper[4972]: I1121 11:27:20.091942 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-g89pr"] Nov 21 11:27:21 crc kubenswrapper[4972]: I1121 11:27:21.780612 4972 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="21211899-cf98-49a2-9711-a49cfcbafabd" path="/var/lib/kubelet/pods/21211899-cf98-49a2-9711-a49cfcbafabd/volumes" Nov 21 11:27:21 crc kubenswrapper[4972]: I1121 11:27:21.782692 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8976598c-43fd-4afe-8593-4c275d67f18a" path="/var/lib/kubelet/pods/8976598c-43fd-4afe-8593-4c275d67f18a/volumes" Nov 21 11:27:21 crc kubenswrapper[4972]: I1121 11:27:21.885915 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 11:27:21 crc kubenswrapper[4972]: I1121 11:27:21.992415 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-config-data\") pod \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " Nov 21 11:27:21 crc kubenswrapper[4972]: I1121 11:27:21.992761 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-scripts\") pod \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " Nov 21 11:27:21 crc kubenswrapper[4972]: I1121 11:27:21.992844 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-log-httpd\") pod \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " Nov 21 11:27:21 crc kubenswrapper[4972]: I1121 11:27:21.992874 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-combined-ca-bundle\") pod \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " Nov 21 11:27:21 crc kubenswrapper[4972]: I1121 11:27:21.992912 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-sg-core-conf-yaml\") pod \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " Nov 21 11:27:21 crc kubenswrapper[4972]: I1121 11:27:21.993033 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqjpw\" (UniqueName: \"kubernetes.io/projected/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-kube-api-access-vqjpw\") pod \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " Nov 21 11:27:21 crc kubenswrapper[4972]: I1121 11:27:21.993063 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-run-httpd\") pod \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\" (UID: \"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4\") " Nov 21 11:27:21 crc kubenswrapper[4972]: I1121 11:27:21.993486 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" (UID: "cbf0bfb9-cbaf-49d8-99a3-84ada73972d4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:27:21 crc kubenswrapper[4972]: I1121 11:27:21.993579 4972 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:21 crc kubenswrapper[4972]: I1121 11:27:21.993750 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" (UID: "cbf0bfb9-cbaf-49d8-99a3-84ada73972d4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.000477 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-kube-api-access-vqjpw" (OuterVolumeSpecName: "kube-api-access-vqjpw") pod "cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" (UID: "cbf0bfb9-cbaf-49d8-99a3-84ada73972d4"). InnerVolumeSpecName "kube-api-access-vqjpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.005042 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-scripts" (OuterVolumeSpecName: "scripts") pod "cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" (UID: "cbf0bfb9-cbaf-49d8-99a3-84ada73972d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.013862 4972 generic.go:334] "Generic (PLEG): container finished" podID="cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" containerID="818e46e0a2bd33df5b1ecebcdde6fc3971cc9224573d212bb5198b7c85af1545" exitCode=0 Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.013987 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4","Type":"ContainerDied","Data":"818e46e0a2bd33df5b1ecebcdde6fc3971cc9224573d212bb5198b7c85af1545"} Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.014085 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf0bfb9-cbaf-49d8-99a3-84ada73972d4","Type":"ContainerDied","Data":"01f9fd83a09607d74f0bc6f252e5dd53f246348cdc197280180be216f761c898"} Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.014150 4972 scope.go:117] "RemoveContainer" containerID="7cb244b7a2d2e17e268e13a453edfb0f88ea14da8d4ec7afe76ce1d3512d4dc9" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.014350 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.030356 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" (UID: "cbf0bfb9-cbaf-49d8-99a3-84ada73972d4"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.047793 4972 scope.go:117] "RemoveContainer" containerID="ca2779d561b3e768c95e0f83428e7edf3f86ec97462e1660fe4effc5a0d2e973" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.082001 4972 scope.go:117] "RemoveContainer" containerID="818e46e0a2bd33df5b1ecebcdde6fc3971cc9224573d212bb5198b7c85af1545" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.095897 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqjpw\" (UniqueName: \"kubernetes.io/projected/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-kube-api-access-vqjpw\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.095920 4972 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.095930 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.096413 4972 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.104094 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" (UID: "cbf0bfb9-cbaf-49d8-99a3-84ada73972d4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.105675 4972 scope.go:117] "RemoveContainer" containerID="6c4095f09d1e1e1552d0f4d2b38651f5aae2c7a34f72a564ddfb2d90ee605439" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.129641 4972 scope.go:117] "RemoveContainer" containerID="7cb244b7a2d2e17e268e13a453edfb0f88ea14da8d4ec7afe76ce1d3512d4dc9" Nov 21 11:27:22 crc kubenswrapper[4972]: E1121 11:27:22.130573 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cb244b7a2d2e17e268e13a453edfb0f88ea14da8d4ec7afe76ce1d3512d4dc9\": container with ID starting with 7cb244b7a2d2e17e268e13a453edfb0f88ea14da8d4ec7afe76ce1d3512d4dc9 not found: ID does not exist" containerID="7cb244b7a2d2e17e268e13a453edfb0f88ea14da8d4ec7afe76ce1d3512d4dc9" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.130604 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cb244b7a2d2e17e268e13a453edfb0f88ea14da8d4ec7afe76ce1d3512d4dc9"} err="failed to get container status \"7cb244b7a2d2e17e268e13a453edfb0f88ea14da8d4ec7afe76ce1d3512d4dc9\": rpc error: code = NotFound desc = could not find container \"7cb244b7a2d2e17e268e13a453edfb0f88ea14da8d4ec7afe76ce1d3512d4dc9\": container with ID starting with 7cb244b7a2d2e17e268e13a453edfb0f88ea14da8d4ec7afe76ce1d3512d4dc9 not found: ID does not exist" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.130623 4972 scope.go:117] "RemoveContainer" containerID="ca2779d561b3e768c95e0f83428e7edf3f86ec97462e1660fe4effc5a0d2e973" Nov 21 11:27:22 crc kubenswrapper[4972]: E1121 11:27:22.131261 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca2779d561b3e768c95e0f83428e7edf3f86ec97462e1660fe4effc5a0d2e973\": container with ID starting with ca2779d561b3e768c95e0f83428e7edf3f86ec97462e1660fe4effc5a0d2e973 not found: ID does not exist" containerID="ca2779d561b3e768c95e0f83428e7edf3f86ec97462e1660fe4effc5a0d2e973" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.131277 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca2779d561b3e768c95e0f83428e7edf3f86ec97462e1660fe4effc5a0d2e973"} err="failed to get container status \"ca2779d561b3e768c95e0f83428e7edf3f86ec97462e1660fe4effc5a0d2e973\": rpc error: code = NotFound desc = could not find container \"ca2779d561b3e768c95e0f83428e7edf3f86ec97462e1660fe4effc5a0d2e973\": container with ID starting with ca2779d561b3e768c95e0f83428e7edf3f86ec97462e1660fe4effc5a0d2e973 not found: ID does not exist" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.131291 4972 scope.go:117] "RemoveContainer" containerID="818e46e0a2bd33df5b1ecebcdde6fc3971cc9224573d212bb5198b7c85af1545" Nov 21 11:27:22 crc kubenswrapper[4972]: E1121 11:27:22.131525 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"818e46e0a2bd33df5b1ecebcdde6fc3971cc9224573d212bb5198b7c85af1545\": container with ID starting with 818e46e0a2bd33df5b1ecebcdde6fc3971cc9224573d212bb5198b7c85af1545 not found: ID does not exist" containerID="818e46e0a2bd33df5b1ecebcdde6fc3971cc9224573d212bb5198b7c85af1545" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.131540 4972 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"818e46e0a2bd33df5b1ecebcdde6fc3971cc9224573d212bb5198b7c85af1545"} err="failed to get container status \"818e46e0a2bd33df5b1ecebcdde6fc3971cc9224573d212bb5198b7c85af1545\": rpc error: code = NotFound desc = could not find container \"818e46e0a2bd33df5b1ecebcdde6fc3971cc9224573d212bb5198b7c85af1545\": container with ID starting with 818e46e0a2bd33df5b1ecebcdde6fc3971cc9224573d212bb5198b7c85af1545 not found: ID does not exist" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.131554 4972 scope.go:117] "RemoveContainer" containerID="6c4095f09d1e1e1552d0f4d2b38651f5aae2c7a34f72a564ddfb2d90ee605439" Nov 21 11:27:22 crc kubenswrapper[4972]: E1121 11:27:22.131710 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c4095f09d1e1e1552d0f4d2b38651f5aae2c7a34f72a564ddfb2d90ee605439\": container with ID starting with 6c4095f09d1e1e1552d0f4d2b38651f5aae2c7a34f72a564ddfb2d90ee605439 not found: ID does not exist" containerID="6c4095f09d1e1e1552d0f4d2b38651f5aae2c7a34f72a564ddfb2d90ee605439" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.132124 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c4095f09d1e1e1552d0f4d2b38651f5aae2c7a34f72a564ddfb2d90ee605439"} err="failed to get container status \"6c4095f09d1e1e1552d0f4d2b38651f5aae2c7a34f72a564ddfb2d90ee605439\": rpc error: code = NotFound desc = could not find container \"6c4095f09d1e1e1552d0f4d2b38651f5aae2c7a34f72a564ddfb2d90ee605439\": container with ID starting with 6c4095f09d1e1e1552d0f4d2b38651f5aae2c7a34f72a564ddfb2d90ee605439 not found: ID does not exist" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.139624 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-config-data" (OuterVolumeSpecName: "config-data") pod "cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" (UID: "cbf0bfb9-cbaf-49d8-99a3-84ada73972d4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.198409 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.198438 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.403021 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.424430 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.440568 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 11:27:22 crc kubenswrapper[4972]: E1121 11:27:22.441048 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" containerName="ceilometer-central-agent" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.441060 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" containerName="ceilometer-central-agent" Nov 21 11:27:22 crc kubenswrapper[4972]: E1121 11:27:22.441083 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" containerName="ceilometer-notification-agent" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.441105 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" containerName="ceilometer-notification-agent" Nov 21 11:27:22 crc kubenswrapper[4972]: E1121 11:27:22.441125 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" containerName="proxy-httpd" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.441134 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" containerName="proxy-httpd" Nov 21 11:27:22 crc kubenswrapper[4972]: E1121 11:27:22.441153 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" containerName="sg-core" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.441161 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" containerName="sg-core" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.441365 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" containerName="ceilometer-central-agent" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.441385 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" containerName="sg-core" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.441395 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" containerName="ceilometer-notification-agent" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.441414 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" containerName="proxy-httpd" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.443283 4972 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.445777 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.453677 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.453754 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.613772 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-run-httpd\") pod \"ceilometer-0\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.614206 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-scripts\") pod \"ceilometer-0\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.614265 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-log-httpd\") pod \"ceilometer-0\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.614326 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.614420 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktfks\" (UniqueName: \"kubernetes.io/projected/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-kube-api-access-ktfks\") pod \"ceilometer-0\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.614462 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.614492 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-config-data\") pod \"ceilometer-0\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.716209 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-run-httpd\") pod \"ceilometer-0\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 
11:27:22.716273 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-scripts\") pod \"ceilometer-0\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.716315 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-log-httpd\") pod \"ceilometer-0\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.716360 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.716493 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktfks\" (UniqueName: \"kubernetes.io/projected/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-kube-api-access-ktfks\") pod \"ceilometer-0\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.716523 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.716544 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-config-data\") pod \"ceilometer-0\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.717277 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-log-httpd\") pod \"ceilometer-0\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.717741 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-run-httpd\") pod \"ceilometer-0\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.730234 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-scripts\") pod \"ceilometer-0\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.730785 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.730952 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.731249 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-config-data\") pod \"ceilometer-0\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.732161 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktfks\" (UniqueName: \"kubernetes.io/projected/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-kube-api-access-ktfks\") pod \"ceilometer-0\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " pod="openstack/ceilometer-0" Nov 21 11:27:22 crc kubenswrapper[4972]: I1121 11:27:22.844007 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 11:27:23 crc kubenswrapper[4972]: I1121 11:27:23.044948 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"2ed9e47f-280b-42d2-916c-ae5c437794ed","Type":"ContainerStarted","Data":"be87d4e7c77798266e3c78104e4e148c14be2b1007a6a499135d6a80ac349f7d"} Nov 21 11:27:23 crc kubenswrapper[4972]: I1121 11:27:23.045267 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"2ed9e47f-280b-42d2-916c-ae5c437794ed","Type":"ContainerStarted","Data":"97744d7c58fdb46b98be00ae9c1750975456a6c84a169a8f837414f90e12c47f"} Nov 21 11:27:23 crc kubenswrapper[4972]: I1121 11:27:23.075739 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.29778383 podStartE2EDuration="10.075716173s" podCreationTimestamp="2025-11-21 11:27:13 +0000 UTC" firstStartedPulling="2025-11-21 11:27:14.842736605 +0000 UTC m=+6379.951879103" lastFinishedPulling="2025-11-21 11:27:21.620668948 +0000 UTC m=+6386.729811446" observedRunningTime="2025-11-21 11:27:23.066027818 +0000 UTC m=+6388.175170336" watchObservedRunningTime="2025-11-21 11:27:23.075716173 +0000 UTC m=+6388.184858671" Nov 21 11:27:23 crc kubenswrapper[4972]: I1121 11:27:23.414393 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 11:27:23 crc kubenswrapper[4972]: I1121 11:27:23.738482 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Nov 21 11:27:23 crc kubenswrapper[4972]: I1121 11:27:23.780688 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbf0bfb9-cbaf-49d8-99a3-84ada73972d4" path="/var/lib/kubelet/pods/cbf0bfb9-cbaf-49d8-99a3-84ada73972d4/volumes" Nov 21 11:27:23 crc kubenswrapper[4972]: I1121 11:27:23.852289 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" Nov 21 11:27:23 crc kubenswrapper[4972]: I1121 11:27:23.925789 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6467ff5dcf-hjvxk"] Nov 21 11:27:23 crc kubenswrapper[4972]: I1121 11:27:23.926041 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" podUID="79cf0afd-514a-49cf-9e07-13efd464b0b9" containerName="dnsmasq-dns" 
containerID="cri-o://c33da2a7fdc7617f95faecf199ace086e0d1a27b9a2c08abe92a41e7a3d39a30" gracePeriod=10 Nov 21 11:27:23 crc kubenswrapper[4972]: I1121 11:27:23.993964 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 21 11:27:24 crc kubenswrapper[4972]: I1121 11:27:24.074252 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cefafa4a-2806-4c2e-ac77-2ca20b135e0a","Type":"ContainerStarted","Data":"6d7a3f9a163ae292d2e4518a990bab868649739137fc24f74314862e4d8f3c5b"} Nov 21 11:27:24 crc kubenswrapper[4972]: I1121 11:27:24.982942 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.021421 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94vqf\" (UniqueName: \"kubernetes.io/projected/79cf0afd-514a-49cf-9e07-13efd464b0b9-kube-api-access-94vqf\") pod \"79cf0afd-514a-49cf-9e07-13efd464b0b9\" (UID: \"79cf0afd-514a-49cf-9e07-13efd464b0b9\") " Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.021487 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-config\") pod \"79cf0afd-514a-49cf-9e07-13efd464b0b9\" (UID: \"79cf0afd-514a-49cf-9e07-13efd464b0b9\") " Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.021549 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-ovsdbserver-sb\") pod \"79cf0afd-514a-49cf-9e07-13efd464b0b9\" (UID: \"79cf0afd-514a-49cf-9e07-13efd464b0b9\") " Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.021583 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-ovsdbserver-nb\") pod \"79cf0afd-514a-49cf-9e07-13efd464b0b9\" (UID: \"79cf0afd-514a-49cf-9e07-13efd464b0b9\") " Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.021620 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-dns-svc\") pod \"79cf0afd-514a-49cf-9e07-13efd464b0b9\" (UID: \"79cf0afd-514a-49cf-9e07-13efd464b0b9\") " Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.029216 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79cf0afd-514a-49cf-9e07-13efd464b0b9-kube-api-access-94vqf" (OuterVolumeSpecName: "kube-api-access-94vqf") pod "79cf0afd-514a-49cf-9e07-13efd464b0b9" (UID: "79cf0afd-514a-49cf-9e07-13efd464b0b9"). InnerVolumeSpecName "kube-api-access-94vqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.089988 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-config" (OuterVolumeSpecName: "config") pod "79cf0afd-514a-49cf-9e07-13efd464b0b9" (UID: "79cf0afd-514a-49cf-9e07-13efd464b0b9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.091401 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "79cf0afd-514a-49cf-9e07-13efd464b0b9" (UID: "79cf0afd-514a-49cf-9e07-13efd464b0b9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.091980 4972 generic.go:334] "Generic (PLEG): container finished" podID="79cf0afd-514a-49cf-9e07-13efd464b0b9" containerID="c33da2a7fdc7617f95faecf199ace086e0d1a27b9a2c08abe92a41e7a3d39a30" exitCode=0 Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.092049 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" event={"ID":"79cf0afd-514a-49cf-9e07-13efd464b0b9","Type":"ContainerDied","Data":"c33da2a7fdc7617f95faecf199ace086e0d1a27b9a2c08abe92a41e7a3d39a30"} Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.092078 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" event={"ID":"79cf0afd-514a-49cf-9e07-13efd464b0b9","Type":"ContainerDied","Data":"1719787277dabcc71d0b3c24b5ce22b58bf7453b4de53d8de9d32135dfa59643"} Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.092096 4972 scope.go:117] "RemoveContainer" containerID="c33da2a7fdc7617f95faecf199ace086e0d1a27b9a2c08abe92a41e7a3d39a30" Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.092119 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6467ff5dcf-hjvxk" Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.092670 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "79cf0afd-514a-49cf-9e07-13efd464b0b9" (UID: "79cf0afd-514a-49cf-9e07-13efd464b0b9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.096712 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cefafa4a-2806-4c2e-ac77-2ca20b135e0a","Type":"ContainerStarted","Data":"237b9b77e7a60bf30d523560c7e616808df05120c5dc0d035c5f041172ccd42d"} Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.107155 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "79cf0afd-514a-49cf-9e07-13efd464b0b9" (UID: "79cf0afd-514a-49cf-9e07-13efd464b0b9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.123795 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94vqf\" (UniqueName: \"kubernetes.io/projected/79cf0afd-514a-49cf-9e07-13efd464b0b9-kube-api-access-94vqf\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.123839 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-config\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.123850 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.123860 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.123867 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79cf0afd-514a-49cf-9e07-13efd464b0b9-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.192756 4972 scope.go:117] "RemoveContainer" containerID="dacf26a15e8d9a42eb01851ce85f076b5c1c412ab0928b88cb1b27faf8fe4934" Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.342397 4972 scope.go:117] "RemoveContainer" containerID="c33da2a7fdc7617f95faecf199ace086e0d1a27b9a2c08abe92a41e7a3d39a30" Nov 21 11:27:25 crc kubenswrapper[4972]: E1121 11:27:25.342848 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c33da2a7fdc7617f95faecf199ace086e0d1a27b9a2c08abe92a41e7a3d39a30\": container with ID starting with c33da2a7fdc7617f95faecf199ace086e0d1a27b9a2c08abe92a41e7a3d39a30 not found: ID does not exist" containerID="c33da2a7fdc7617f95faecf199ace086e0d1a27b9a2c08abe92a41e7a3d39a30" Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.342899 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c33da2a7fdc7617f95faecf199ace086e0d1a27b9a2c08abe92a41e7a3d39a30"} err="failed to get container status \"c33da2a7fdc7617f95faecf199ace086e0d1a27b9a2c08abe92a41e7a3d39a30\": rpc error: code = NotFound desc = could not find container \"c33da2a7fdc7617f95faecf199ace086e0d1a27b9a2c08abe92a41e7a3d39a30\": container with ID starting with c33da2a7fdc7617f95faecf199ace086e0d1a27b9a2c08abe92a41e7a3d39a30 not found: ID does not exist" Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.342935 4972 scope.go:117] "RemoveContainer" containerID="dacf26a15e8d9a42eb01851ce85f076b5c1c412ab0928b88cb1b27faf8fe4934" Nov 21 11:27:25 crc kubenswrapper[4972]: E1121 11:27:25.343370 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dacf26a15e8d9a42eb01851ce85f076b5c1c412ab0928b88cb1b27faf8fe4934\": container with ID starting with dacf26a15e8d9a42eb01851ce85f076b5c1c412ab0928b88cb1b27faf8fe4934 not found: ID does not exist" containerID="dacf26a15e8d9a42eb01851ce85f076b5c1c412ab0928b88cb1b27faf8fe4934" Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.343411 4972 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"dacf26a15e8d9a42eb01851ce85f076b5c1c412ab0928b88cb1b27faf8fe4934"} err="failed to get container status \"dacf26a15e8d9a42eb01851ce85f076b5c1c412ab0928b88cb1b27faf8fe4934\": rpc error: code = NotFound desc = could not find container \"dacf26a15e8d9a42eb01851ce85f076b5c1c412ab0928b88cb1b27faf8fe4934\": container with ID starting with dacf26a15e8d9a42eb01851ce85f076b5c1c412ab0928b88cb1b27faf8fe4934 not found: ID does not exist" Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.447219 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6467ff5dcf-hjvxk"] Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.457851 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6467ff5dcf-hjvxk"] Nov 21 11:27:25 crc kubenswrapper[4972]: I1121 11:27:25.773098 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79cf0afd-514a-49cf-9e07-13efd464b0b9" path="/var/lib/kubelet/pods/79cf0afd-514a-49cf-9e07-13efd464b0b9/volumes" Nov 21 11:27:26 crc kubenswrapper[4972]: I1121 11:27:26.106896 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cefafa4a-2806-4c2e-ac77-2ca20b135e0a","Type":"ContainerStarted","Data":"1fc34b7b6c83ee72f7c5bc05f649a9f864f20f2737667bddb446ac665d327e18"} Nov 21 11:27:26 crc kubenswrapper[4972]: I1121 11:27:26.179190 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:27:26 crc kubenswrapper[4972]: I1121 11:27:26.179249 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:27:26 crc kubenswrapper[4972]: I1121 11:27:26.179294 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 11:27:26 crc kubenswrapper[4972]: I1121 11:27:26.180068 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 11:27:26 crc kubenswrapper[4972]: I1121 11:27:26.180127 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" gracePeriod=600 Nov 21 11:27:26 crc kubenswrapper[4972]: E1121 11:27:26.410103 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:27:27 crc kubenswrapper[4972]: I1121 11:27:27.033129 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-m2fhd"] Nov 21 11:27:27 crc kubenswrapper[4972]: I1121 11:27:27.041579 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-m2fhd"] Nov 21 11:27:27 crc kubenswrapper[4972]: I1121 11:27:27.123063 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" exitCode=0 Nov 21 11:27:27 crc kubenswrapper[4972]: I1121 11:27:27.123127 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453"} Nov 21 11:27:27 crc kubenswrapper[4972]: I1121 11:27:27.123159 4972 scope.go:117] "RemoveContainer" containerID="8ec025f5e23fdc9483086d949001e8977fb85a6d1c335c571eb4db2f58dafa45" Nov 21 11:27:27 crc kubenswrapper[4972]: I1121 11:27:27.123821 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:27:27 crc kubenswrapper[4972]: E1121 11:27:27.124167 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:27:27 crc kubenswrapper[4972]: I1121 11:27:27.130193 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cefafa4a-2806-4c2e-ac77-2ca20b135e0a","Type":"ContainerStarted","Data":"832ae2ab0e6be532136f3d3a454ef7a6de26a8ae1afdff50999596243e738e10"} Nov 21 11:27:27 crc kubenswrapper[4972]: I1121 11:27:27.369817 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 11:27:27 crc kubenswrapper[4972]: I1121 11:27:27.779457 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7318c041-d37d-4001-9620-6ef043e28795" path="/var/lib/kubelet/pods/7318c041-d37d-4001-9620-6ef043e28795/volumes" Nov 21 11:27:29 crc kubenswrapper[4972]: I1121 11:27:29.164177 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cefafa4a-2806-4c2e-ac77-2ca20b135e0a","Type":"ContainerStarted","Data":"c08b29400417237d8b0ab717d18483736aa514ad5a325f67bf0e30e1171d375d"} Nov 21 11:27:29 crc kubenswrapper[4972]: I1121 11:27:29.164908 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 11:27:29 crc kubenswrapper[4972]: I1121 11:27:29.164362 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cefafa4a-2806-4c2e-ac77-2ca20b135e0a" containerName="ceilometer-central-agent" containerID="cri-o://237b9b77e7a60bf30d523560c7e616808df05120c5dc0d035c5f041172ccd42d" gracePeriod=30 Nov 21 11:27:29 crc kubenswrapper[4972]: I1121 11:27:29.164735 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="cefafa4a-2806-4c2e-ac77-2ca20b135e0a" containerName="proxy-httpd" containerID="cri-o://c08b29400417237d8b0ab717d18483736aa514ad5a325f67bf0e30e1171d375d" gracePeriod=30 Nov 21 11:27:29 crc kubenswrapper[4972]: I1121 11:27:29.164747 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cefafa4a-2806-4c2e-ac77-2ca20b135e0a" containerName="ceilometer-notification-agent" containerID="cri-o://1fc34b7b6c83ee72f7c5bc05f649a9f864f20f2737667bddb446ac665d327e18" gracePeriod=30 Nov 21 11:27:29 crc kubenswrapper[4972]: I1121 11:27:29.164770 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cefafa4a-2806-4c2e-ac77-2ca20b135e0a" containerName="sg-core" containerID="cri-o://832ae2ab0e6be532136f3d3a454ef7a6de26a8ae1afdff50999596243e738e10" gracePeriod=30 Nov 21 11:27:29 crc kubenswrapper[4972]: I1121 11:27:29.203426 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.694687954 podStartE2EDuration="7.203398448s" podCreationTimestamp="2025-11-21 11:27:22 +0000 UTC" firstStartedPulling="2025-11-21 11:27:23.419164525 +0000 UTC m=+6388.528307023" lastFinishedPulling="2025-11-21 11:27:27.927875009 +0000 UTC m=+6393.037017517" observedRunningTime="2025-11-21 11:27:29.189259766 +0000 UTC m=+6394.298402304" watchObservedRunningTime="2025-11-21 11:27:29.203398448 +0000 UTC m=+6394.312540986" Nov 21 11:27:30 crc kubenswrapper[4972]: I1121 11:27:30.183109 4972 generic.go:334] "Generic (PLEG): container finished" podID="cefafa4a-2806-4c2e-ac77-2ca20b135e0a" containerID="c08b29400417237d8b0ab717d18483736aa514ad5a325f67bf0e30e1171d375d" exitCode=0 Nov 21 11:27:30 crc kubenswrapper[4972]: I1121 11:27:30.183524 4972 generic.go:334] "Generic (PLEG): container finished" podID="cefafa4a-2806-4c2e-ac77-2ca20b135e0a" containerID="832ae2ab0e6be532136f3d3a454ef7a6de26a8ae1afdff50999596243e738e10" exitCode=2 Nov 21 11:27:30 crc kubenswrapper[4972]: I1121 11:27:30.183533 4972 generic.go:334] "Generic (PLEG): container finished" podID="cefafa4a-2806-4c2e-ac77-2ca20b135e0a" containerID="1fc34b7b6c83ee72f7c5bc05f649a9f864f20f2737667bddb446ac665d327e18" exitCode=0 Nov 21 11:27:30 crc kubenswrapper[4972]: I1121 11:27:30.183160 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cefafa4a-2806-4c2e-ac77-2ca20b135e0a","Type":"ContainerDied","Data":"c08b29400417237d8b0ab717d18483736aa514ad5a325f67bf0e30e1171d375d"} Nov 21 11:27:30 crc kubenswrapper[4972]: I1121 11:27:30.183572 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cefafa4a-2806-4c2e-ac77-2ca20b135e0a","Type":"ContainerDied","Data":"832ae2ab0e6be532136f3d3a454ef7a6de26a8ae1afdff50999596243e738e10"} Nov 21 11:27:30 crc kubenswrapper[4972]: I1121 11:27:30.183586 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cefafa4a-2806-4c2e-ac77-2ca20b135e0a","Type":"ContainerDied","Data":"1fc34b7b6c83ee72f7c5bc05f649a9f864f20f2737667bddb446ac665d327e18"} Nov 21 11:27:30 crc kubenswrapper[4972]: I1121 11:27:30.843941 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 11:27:30 crc kubenswrapper[4972]: I1121 11:27:30.986589 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-config-data\") pod \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " Nov 21 11:27:30 crc kubenswrapper[4972]: I1121 11:27:30.986649 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-sg-core-conf-yaml\") pod \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " Nov 21 11:27:30 crc kubenswrapper[4972]: I1121 11:27:30.986690 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-combined-ca-bundle\") pod \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " Nov 21 11:27:30 crc kubenswrapper[4972]: I1121 11:27:30.986741 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-log-httpd\") pod \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " Nov 21 11:27:30 crc kubenswrapper[4972]: I1121 11:27:30.986815 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-scripts\") pod \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " Nov 21 11:27:30 crc kubenswrapper[4972]: I1121 11:27:30.986903 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-run-httpd\") pod \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " Nov 21 11:27:30 crc kubenswrapper[4972]: I1121 11:27:30.986960 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktfks\" (UniqueName: \"kubernetes.io/projected/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-kube-api-access-ktfks\") pod \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\" (UID: \"cefafa4a-2806-4c2e-ac77-2ca20b135e0a\") " Nov 21 11:27:30 crc kubenswrapper[4972]: I1121 11:27:30.987477 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cefafa4a-2806-4c2e-ac77-2ca20b135e0a" (UID: "cefafa4a-2806-4c2e-ac77-2ca20b135e0a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:27:30 crc kubenswrapper[4972]: I1121 11:27:30.987669 4972 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:30 crc kubenswrapper[4972]: I1121 11:27:30.988130 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cefafa4a-2806-4c2e-ac77-2ca20b135e0a" (UID: "cefafa4a-2806-4c2e-ac77-2ca20b135e0a"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:27:30 crc kubenswrapper[4972]: I1121 11:27:30.997068 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-scripts" (OuterVolumeSpecName: "scripts") pod "cefafa4a-2806-4c2e-ac77-2ca20b135e0a" (UID: "cefafa4a-2806-4c2e-ac77-2ca20b135e0a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:27:30 crc kubenswrapper[4972]: I1121 11:27:30.997154 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-kube-api-access-ktfks" (OuterVolumeSpecName: "kube-api-access-ktfks") pod "cefafa4a-2806-4c2e-ac77-2ca20b135e0a" (UID: "cefafa4a-2806-4c2e-ac77-2ca20b135e0a"). InnerVolumeSpecName "kube-api-access-ktfks". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.029233 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cefafa4a-2806-4c2e-ac77-2ca20b135e0a" (UID: "cefafa4a-2806-4c2e-ac77-2ca20b135e0a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.091663 4972 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.092081 4972 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.092095 4972 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.092110 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktfks\" (UniqueName: \"kubernetes.io/projected/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-kube-api-access-ktfks\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.092460 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cefafa4a-2806-4c2e-ac77-2ca20b135e0a" (UID: "cefafa4a-2806-4c2e-ac77-2ca20b135e0a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.138309 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-config-data" (OuterVolumeSpecName: "config-data") pod "cefafa4a-2806-4c2e-ac77-2ca20b135e0a" (UID: "cefafa4a-2806-4c2e-ac77-2ca20b135e0a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.194280 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.194328 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cefafa4a-2806-4c2e-ac77-2ca20b135e0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.198808 4972 generic.go:334] "Generic (PLEG): container finished" podID="cefafa4a-2806-4c2e-ac77-2ca20b135e0a" containerID="237b9b77e7a60bf30d523560c7e616808df05120c5dc0d035c5f041172ccd42d" exitCode=0 Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.198896 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cefafa4a-2806-4c2e-ac77-2ca20b135e0a","Type":"ContainerDied","Data":"237b9b77e7a60bf30d523560c7e616808df05120c5dc0d035c5f041172ccd42d"} Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.198966 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cefafa4a-2806-4c2e-ac77-2ca20b135e0a","Type":"ContainerDied","Data":"6d7a3f9a163ae292d2e4518a990bab868649739137fc24f74314862e4d8f3c5b"} Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.199000 4972 scope.go:117] "RemoveContainer" containerID="c08b29400417237d8b0ab717d18483736aa514ad5a325f67bf0e30e1171d375d" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.198913 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.248123 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.249163 4972 scope.go:117] "RemoveContainer" containerID="832ae2ab0e6be532136f3d3a454ef7a6de26a8ae1afdff50999596243e738e10" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.271686 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.279319 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 11:27:31 crc kubenswrapper[4972]: E1121 11:27:31.279781 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cefafa4a-2806-4c2e-ac77-2ca20b135e0a" containerName="proxy-httpd" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.279801 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="cefafa4a-2806-4c2e-ac77-2ca20b135e0a" containerName="proxy-httpd" Nov 21 11:27:31 crc kubenswrapper[4972]: E1121 11:27:31.279843 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79cf0afd-514a-49cf-9e07-13efd464b0b9" containerName="dnsmasq-dns" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.279853 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="79cf0afd-514a-49cf-9e07-13efd464b0b9" containerName="dnsmasq-dns" Nov 21 11:27:31 crc kubenswrapper[4972]: E1121 11:27:31.279901 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cefafa4a-2806-4c2e-ac77-2ca20b135e0a" containerName="ceilometer-notification-agent" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.279910 4972 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cefafa4a-2806-4c2e-ac77-2ca20b135e0a" containerName="ceilometer-notification-agent" Nov 21 11:27:31 crc kubenswrapper[4972]: E1121 11:27:31.279920 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cefafa4a-2806-4c2e-ac77-2ca20b135e0a" containerName="sg-core" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.279928 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="cefafa4a-2806-4c2e-ac77-2ca20b135e0a" containerName="sg-core" Nov 21 11:27:31 crc kubenswrapper[4972]: E1121 11:27:31.279946 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cefafa4a-2806-4c2e-ac77-2ca20b135e0a" containerName="ceilometer-central-agent" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.279956 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="cefafa4a-2806-4c2e-ac77-2ca20b135e0a" containerName="ceilometer-central-agent" Nov 21 11:27:31 crc kubenswrapper[4972]: E1121 11:27:31.279967 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79cf0afd-514a-49cf-9e07-13efd464b0b9" containerName="init" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.279977 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="79cf0afd-514a-49cf-9e07-13efd464b0b9" containerName="init" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.280232 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="cefafa4a-2806-4c2e-ac77-2ca20b135e0a" containerName="ceilometer-notification-agent" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.280251 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="cefafa4a-2806-4c2e-ac77-2ca20b135e0a" containerName="proxy-httpd" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.280268 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="cefafa4a-2806-4c2e-ac77-2ca20b135e0a" containerName="sg-core" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.280282 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="cefafa4a-2806-4c2e-ac77-2ca20b135e0a" containerName="ceilometer-central-agent" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.280295 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="79cf0afd-514a-49cf-9e07-13efd464b0b9" containerName="dnsmasq-dns" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.282719 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.289214 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.289515 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.297917 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a94fb30-1130-45e4-8ce8-9b0cdf0401b4-scripts\") pod \"ceilometer-0\" (UID: \"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4\") " pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.298077 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a94fb30-1130-45e4-8ce8-9b0cdf0401b4-log-httpd\") pod \"ceilometer-0\" (UID: \"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4\") " pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.298119 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a94fb30-1130-45e4-8ce8-9b0cdf0401b4-config-data\") pod \"ceilometer-0\" (UID: \"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4\") " pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.298159 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a94fb30-1130-45e4-8ce8-9b0cdf0401b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4\") " pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.298212 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a94fb30-1130-45e4-8ce8-9b0cdf0401b4-run-httpd\") pod \"ceilometer-0\" (UID: \"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4\") " pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.298264 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a94fb30-1130-45e4-8ce8-9b0cdf0401b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4\") " pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.298317 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nwkl\" (UniqueName: \"kubernetes.io/projected/4a94fb30-1130-45e4-8ce8-9b0cdf0401b4-kube-api-access-9nwkl\") pod \"ceilometer-0\" (UID: \"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4\") " pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.308531 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.354727 4972 scope.go:117] "RemoveContainer" containerID="1fc34b7b6c83ee72f7c5bc05f649a9f864f20f2737667bddb446ac665d327e18" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.403229 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4a94fb30-1130-45e4-8ce8-9b0cdf0401b4-log-httpd\") pod \"ceilometer-0\" (UID: \"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4\") " pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.403306 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a94fb30-1130-45e4-8ce8-9b0cdf0401b4-config-data\") pod \"ceilometer-0\" (UID: \"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4\") " pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.403350 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a94fb30-1130-45e4-8ce8-9b0cdf0401b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4\") " pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.403400 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a94fb30-1130-45e4-8ce8-9b0cdf0401b4-run-httpd\") pod \"ceilometer-0\" (UID: \"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4\") " pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.403465 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a94fb30-1130-45e4-8ce8-9b0cdf0401b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4\") " pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.403511 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nwkl\" (UniqueName: \"kubernetes.io/projected/4a94fb30-1130-45e4-8ce8-9b0cdf0401b4-kube-api-access-9nwkl\") pod \"ceilometer-0\" (UID: \"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4\") " pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.403551 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a94fb30-1130-45e4-8ce8-9b0cdf0401b4-scripts\") pod \"ceilometer-0\" (UID: \"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4\") " pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.405102 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a94fb30-1130-45e4-8ce8-9b0cdf0401b4-log-httpd\") pod \"ceilometer-0\" (UID: \"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4\") " pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.412661 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a94fb30-1130-45e4-8ce8-9b0cdf0401b4-config-data\") pod \"ceilometer-0\" (UID: \"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4\") " pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.413026 4972 scope.go:117] "RemoveContainer" containerID="237b9b77e7a60bf30d523560c7e616808df05120c5dc0d035c5f041172ccd42d" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.415366 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a94fb30-1130-45e4-8ce8-9b0cdf0401b4-run-httpd\") pod \"ceilometer-0\" (UID: \"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4\") " pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 
11:27:31.422638 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a94fb30-1130-45e4-8ce8-9b0cdf0401b4-scripts\") pod \"ceilometer-0\" (UID: \"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4\") " pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.424759 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a94fb30-1130-45e4-8ce8-9b0cdf0401b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4\") " pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.438692 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a94fb30-1130-45e4-8ce8-9b0cdf0401b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4\") " pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.445394 4972 scope.go:117] "RemoveContainer" containerID="c08b29400417237d8b0ab717d18483736aa514ad5a325f67bf0e30e1171d375d" Nov 21 11:27:31 crc kubenswrapper[4972]: E1121 11:27:31.446154 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c08b29400417237d8b0ab717d18483736aa514ad5a325f67bf0e30e1171d375d\": container with ID starting with c08b29400417237d8b0ab717d18483736aa514ad5a325f67bf0e30e1171d375d not found: ID does not exist" containerID="c08b29400417237d8b0ab717d18483736aa514ad5a325f67bf0e30e1171d375d" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.446184 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c08b29400417237d8b0ab717d18483736aa514ad5a325f67bf0e30e1171d375d"} err="failed to get container status \"c08b29400417237d8b0ab717d18483736aa514ad5a325f67bf0e30e1171d375d\": rpc error: code = NotFound desc = could not find container \"c08b29400417237d8b0ab717d18483736aa514ad5a325f67bf0e30e1171d375d\": container with ID starting with c08b29400417237d8b0ab717d18483736aa514ad5a325f67bf0e30e1171d375d not found: ID does not exist" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.446204 4972 scope.go:117] "RemoveContainer" containerID="832ae2ab0e6be532136f3d3a454ef7a6de26a8ae1afdff50999596243e738e10" Nov 21 11:27:31 crc kubenswrapper[4972]: E1121 11:27:31.449498 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"832ae2ab0e6be532136f3d3a454ef7a6de26a8ae1afdff50999596243e738e10\": container with ID starting with 832ae2ab0e6be532136f3d3a454ef7a6de26a8ae1afdff50999596243e738e10 not found: ID does not exist" containerID="832ae2ab0e6be532136f3d3a454ef7a6de26a8ae1afdff50999596243e738e10" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.449525 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"832ae2ab0e6be532136f3d3a454ef7a6de26a8ae1afdff50999596243e738e10"} err="failed to get container status \"832ae2ab0e6be532136f3d3a454ef7a6de26a8ae1afdff50999596243e738e10\": rpc error: code = NotFound desc = could not find container \"832ae2ab0e6be532136f3d3a454ef7a6de26a8ae1afdff50999596243e738e10\": container with ID starting with 832ae2ab0e6be532136f3d3a454ef7a6de26a8ae1afdff50999596243e738e10 not found: ID does not exist" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.449539 4972 scope.go:117] 
"RemoveContainer" containerID="1fc34b7b6c83ee72f7c5bc05f649a9f864f20f2737667bddb446ac665d327e18" Nov 21 11:27:31 crc kubenswrapper[4972]: E1121 11:27:31.454699 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fc34b7b6c83ee72f7c5bc05f649a9f864f20f2737667bddb446ac665d327e18\": container with ID starting with 1fc34b7b6c83ee72f7c5bc05f649a9f864f20f2737667bddb446ac665d327e18 not found: ID does not exist" containerID="1fc34b7b6c83ee72f7c5bc05f649a9f864f20f2737667bddb446ac665d327e18" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.454723 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fc34b7b6c83ee72f7c5bc05f649a9f864f20f2737667bddb446ac665d327e18"} err="failed to get container status \"1fc34b7b6c83ee72f7c5bc05f649a9f864f20f2737667bddb446ac665d327e18\": rpc error: code = NotFound desc = could not find container \"1fc34b7b6c83ee72f7c5bc05f649a9f864f20f2737667bddb446ac665d327e18\": container with ID starting with 1fc34b7b6c83ee72f7c5bc05f649a9f864f20f2737667bddb446ac665d327e18 not found: ID does not exist" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.454739 4972 scope.go:117] "RemoveContainer" containerID="237b9b77e7a60bf30d523560c7e616808df05120c5dc0d035c5f041172ccd42d" Nov 21 11:27:31 crc kubenswrapper[4972]: E1121 11:27:31.462137 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"237b9b77e7a60bf30d523560c7e616808df05120c5dc0d035c5f041172ccd42d\": container with ID starting with 237b9b77e7a60bf30d523560c7e616808df05120c5dc0d035c5f041172ccd42d not found: ID does not exist" containerID="237b9b77e7a60bf30d523560c7e616808df05120c5dc0d035c5f041172ccd42d" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.462165 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"237b9b77e7a60bf30d523560c7e616808df05120c5dc0d035c5f041172ccd42d"} err="failed to get container status \"237b9b77e7a60bf30d523560c7e616808df05120c5dc0d035c5f041172ccd42d\": rpc error: code = NotFound desc = could not find container \"237b9b77e7a60bf30d523560c7e616808df05120c5dc0d035c5f041172ccd42d\": container with ID starting with 237b9b77e7a60bf30d523560c7e616808df05120c5dc0d035c5f041172ccd42d not found: ID does not exist" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.482637 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nwkl\" (UniqueName: \"kubernetes.io/projected/4a94fb30-1130-45e4-8ce8-9b0cdf0401b4-kube-api-access-9nwkl\") pod \"ceilometer-0\" (UID: \"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4\") " pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.648474 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 11:27:31 crc kubenswrapper[4972]: I1121 11:27:31.776672 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cefafa4a-2806-4c2e-ac77-2ca20b135e0a" path="/var/lib/kubelet/pods/cefafa4a-2806-4c2e-ac77-2ca20b135e0a/volumes" Nov 21 11:27:32 crc kubenswrapper[4972]: I1121 11:27:32.161985 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 11:27:32 crc kubenswrapper[4972]: I1121 11:27:32.220476 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4","Type":"ContainerStarted","Data":"d5f3863cb95447cda06449637e3c34e6d768299a21fe4a8e8898315fb08d2c0a"} Nov 21 11:27:35 crc kubenswrapper[4972]: I1121 11:27:35.204967 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Nov 21 11:27:35 crc kubenswrapper[4972]: I1121 11:27:35.233686 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Nov 21 11:27:35 crc kubenswrapper[4972]: I1121 11:27:35.257277 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4","Type":"ContainerStarted","Data":"ccfca665656e14a869e7b80e1f04aff650e248ef49b9e7980b64c53903583b8f"} Nov 21 11:27:35 crc kubenswrapper[4972]: I1121 11:27:35.257314 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4","Type":"ContainerStarted","Data":"9bc571abf938af6c9e17419f973f8f62ad695695f255c3a8835de0c08caf5210"} Nov 21 11:27:35 crc kubenswrapper[4972]: I1121 11:27:35.511558 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Nov 21 11:27:35 crc kubenswrapper[4972]: I1121 11:27:35.890294 4972 scope.go:117] "RemoveContainer" containerID="c84e744ed0f0e2062a254ad7cab31ddd0300dfdcd95d28a3f49f9af6042a06a1" Nov 21 11:27:35 crc kubenswrapper[4972]: I1121 11:27:35.959467 4972 scope.go:117] "RemoveContainer" containerID="5927faf389b0adc56c50c0100cc43aed2b97c75a4f8d4744cd0dbec48cb41983" Nov 21 11:27:35 crc kubenswrapper[4972]: I1121 11:27:35.995816 4972 scope.go:117] "RemoveContainer" containerID="9de914ad570d6b9a6029d344e70c1e284df15473cc4a37dcd322d10bb9148ebb" Nov 21 11:27:36 crc kubenswrapper[4972]: I1121 11:27:36.023260 4972 scope.go:117] "RemoveContainer" containerID="ee2664e2edd2c31a3afe2704bc402913c370ce9c3442540119419716cefa6c4e" Nov 21 11:27:36 crc kubenswrapper[4972]: I1121 11:27:36.051115 4972 scope.go:117] "RemoveContainer" containerID="bd8effcb34abbd2f6b94ff04c9e4cecadf6c3227fa51066ab0fb01cce4a89066" Nov 21 11:27:36 crc kubenswrapper[4972]: I1121 11:27:36.086667 4972 scope.go:117] "RemoveContainer" containerID="5c5a9fda0192ac8ca92b60a6bd9fdaa53ac09a9688b258629692c5593061b971" Nov 21 11:27:36 crc kubenswrapper[4972]: I1121 11:27:36.115449 4972 scope.go:117] "RemoveContainer" containerID="83ea61c56c6c92ec69b8409c787dec915511d5e725471676d6cd840c1572fbad" Nov 21 11:27:36 crc kubenswrapper[4972]: I1121 11:27:36.277039 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4","Type":"ContainerStarted","Data":"3fadc03225297b74203acf58c70694d0f3d12c547d18e6d6602283a09b6f7129"} Nov 21 11:27:38 crc kubenswrapper[4972]: I1121 11:27:38.311667 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"4a94fb30-1130-45e4-8ce8-9b0cdf0401b4","Type":"ContainerStarted","Data":"590b825a4c6526f741b0005ac6f1ce00ce66193fead9d62f11769aed5635c04d"} Nov 21 11:27:38 crc kubenswrapper[4972]: I1121 11:27:38.312211 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 11:27:38 crc kubenswrapper[4972]: I1121 11:27:38.328547 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.233071591 podStartE2EDuration="7.328532003s" podCreationTimestamp="2025-11-21 11:27:31 +0000 UTC" firstStartedPulling="2025-11-21 11:27:32.164003307 +0000 UTC m=+6397.273145815" lastFinishedPulling="2025-11-21 11:27:37.259463689 +0000 UTC m=+6402.368606227" observedRunningTime="2025-11-21 11:27:38.328121292 +0000 UTC m=+6403.437263790" watchObservedRunningTime="2025-11-21 11:27:38.328532003 +0000 UTC m=+6403.437674501" Nov 21 11:27:39 crc kubenswrapper[4972]: I1121 11:27:39.760525 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:27:39 crc kubenswrapper[4972]: E1121 11:27:39.761600 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:27:54 crc kubenswrapper[4972]: I1121 11:27:54.759780 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:27:54 crc kubenswrapper[4972]: E1121 11:27:54.761041 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:27:55 crc kubenswrapper[4972]: I1121 11:27:55.702237 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rqp4x"] Nov 21 11:27:55 crc kubenswrapper[4972]: I1121 11:27:55.707636 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rqp4x" Nov 21 11:27:55 crc kubenswrapper[4972]: I1121 11:27:55.718778 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rqp4x"] Nov 21 11:27:55 crc kubenswrapper[4972]: I1121 11:27:55.875663 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4fbfefc-9515-40a2-b1d4-94fb9f7c3845-catalog-content\") pod \"redhat-operators-rqp4x\" (UID: \"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845\") " pod="openshift-marketplace/redhat-operators-rqp4x" Nov 21 11:27:55 crc kubenswrapper[4972]: I1121 11:27:55.875856 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v2n6\" (UniqueName: \"kubernetes.io/projected/b4fbfefc-9515-40a2-b1d4-94fb9f7c3845-kube-api-access-4v2n6\") pod \"redhat-operators-rqp4x\" (UID: \"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845\") " pod="openshift-marketplace/redhat-operators-rqp4x" Nov 21 11:27:55 crc kubenswrapper[4972]: I1121 11:27:55.876040 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4fbfefc-9515-40a2-b1d4-94fb9f7c3845-utilities\") pod \"redhat-operators-rqp4x\" (UID: \"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845\") " pod="openshift-marketplace/redhat-operators-rqp4x" Nov 21 11:27:55 crc kubenswrapper[4972]: I1121 11:27:55.978189 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4fbfefc-9515-40a2-b1d4-94fb9f7c3845-catalog-content\") pod \"redhat-operators-rqp4x\" (UID: \"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845\") " pod="openshift-marketplace/redhat-operators-rqp4x" Nov 21 11:27:55 crc kubenswrapper[4972]: I1121 11:27:55.978326 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v2n6\" (UniqueName: \"kubernetes.io/projected/b4fbfefc-9515-40a2-b1d4-94fb9f7c3845-kube-api-access-4v2n6\") pod \"redhat-operators-rqp4x\" (UID: \"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845\") " pod="openshift-marketplace/redhat-operators-rqp4x" Nov 21 11:27:55 crc kubenswrapper[4972]: I1121 11:27:55.978449 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4fbfefc-9515-40a2-b1d4-94fb9f7c3845-utilities\") pod \"redhat-operators-rqp4x\" (UID: \"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845\") " pod="openshift-marketplace/redhat-operators-rqp4x" Nov 21 11:27:55 crc kubenswrapper[4972]: I1121 11:27:55.978659 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4fbfefc-9515-40a2-b1d4-94fb9f7c3845-catalog-content\") pod \"redhat-operators-rqp4x\" (UID: \"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845\") " pod="openshift-marketplace/redhat-operators-rqp4x" Nov 21 11:27:55 crc kubenswrapper[4972]: I1121 11:27:55.978926 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4fbfefc-9515-40a2-b1d4-94fb9f7c3845-utilities\") pod \"redhat-operators-rqp4x\" (UID: \"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845\") " pod="openshift-marketplace/redhat-operators-rqp4x" Nov 21 11:27:56 crc kubenswrapper[4972]: I1121 11:27:56.013750 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4v2n6\" (UniqueName: \"kubernetes.io/projected/b4fbfefc-9515-40a2-b1d4-94fb9f7c3845-kube-api-access-4v2n6\") pod \"redhat-operators-rqp4x\" (UID: \"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845\") " pod="openshift-marketplace/redhat-operators-rqp4x" Nov 21 11:27:56 crc kubenswrapper[4972]: I1121 11:27:56.045659 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rqp4x" Nov 21 11:27:56 crc kubenswrapper[4972]: I1121 11:27:56.619490 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rqp4x"] Nov 21 11:27:57 crc kubenswrapper[4972]: I1121 11:27:57.576550 4972 generic.go:334] "Generic (PLEG): container finished" podID="b4fbfefc-9515-40a2-b1d4-94fb9f7c3845" containerID="df703f833507f3382ccd85f3b2aa4dd1cdf60ec079a62d1fe50b38abb219f46f" exitCode=0 Nov 21 11:27:57 crc kubenswrapper[4972]: I1121 11:27:57.576674 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rqp4x" event={"ID":"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845","Type":"ContainerDied","Data":"df703f833507f3382ccd85f3b2aa4dd1cdf60ec079a62d1fe50b38abb219f46f"} Nov 21 11:27:57 crc kubenswrapper[4972]: I1121 11:27:57.576919 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rqp4x" event={"ID":"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845","Type":"ContainerStarted","Data":"01fa5802dfda9923dd6251d94201facdc2b264b316e48acfb4128c738499623e"} Nov 21 11:27:58 crc kubenswrapper[4972]: I1121 11:27:58.589320 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rqp4x" event={"ID":"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845","Type":"ContainerStarted","Data":"2c1cd1b4c33013ff36c57a9e32b978ce6f04ce7d159024d7a6757e9ba70bede3"} Nov 21 11:28:01 crc kubenswrapper[4972]: I1121 11:28:01.658467 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 21 11:28:04 crc kubenswrapper[4972]: I1121 11:28:04.661417 4972 generic.go:334] "Generic (PLEG): container finished" podID="b4fbfefc-9515-40a2-b1d4-94fb9f7c3845" containerID="2c1cd1b4c33013ff36c57a9e32b978ce6f04ce7d159024d7a6757e9ba70bede3" exitCode=0 Nov 21 11:28:04 crc kubenswrapper[4972]: I1121 11:28:04.661574 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rqp4x" event={"ID":"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845","Type":"ContainerDied","Data":"2c1cd1b4c33013ff36c57a9e32b978ce6f04ce7d159024d7a6757e9ba70bede3"} Nov 21 11:28:04 crc kubenswrapper[4972]: I1121 11:28:04.666244 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 11:28:05 crc kubenswrapper[4972]: I1121 11:28:05.676743 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rqp4x" event={"ID":"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845","Type":"ContainerStarted","Data":"66111cce36542a4b791c147fa9959d909ac9a0e49f855ee46d93bcc983b2d004"} Nov 21 11:28:05 crc kubenswrapper[4972]: I1121 11:28:05.708818 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rqp4x" podStartSLOduration=2.977226585 podStartE2EDuration="10.708785673s" podCreationTimestamp="2025-11-21 11:27:55 +0000 UTC" firstStartedPulling="2025-11-21 11:27:57.581358663 +0000 UTC m=+6422.690501151" lastFinishedPulling="2025-11-21 11:28:05.312917731 +0000 UTC m=+6430.422060239" 
observedRunningTime="2025-11-21 11:28:05.696320845 +0000 UTC m=+6430.805463363" watchObservedRunningTime="2025-11-21 11:28:05.708785673 +0000 UTC m=+6430.817928191" Nov 21 11:28:06 crc kubenswrapper[4972]: I1121 11:28:06.046884 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rqp4x" Nov 21 11:28:06 crc kubenswrapper[4972]: I1121 11:28:06.047114 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rqp4x" Nov 21 11:28:06 crc kubenswrapper[4972]: I1121 11:28:06.759525 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:28:06 crc kubenswrapper[4972]: E1121 11:28:06.760052 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:28:07 crc kubenswrapper[4972]: I1121 11:28:07.113105 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rqp4x" podUID="b4fbfefc-9515-40a2-b1d4-94fb9f7c3845" containerName="registry-server" probeResult="failure" output=< Nov 21 11:28:07 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 11:28:07 crc kubenswrapper[4972]: > Nov 21 11:28:17 crc kubenswrapper[4972]: I1121 11:28:17.128054 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rqp4x" podUID="b4fbfefc-9515-40a2-b1d4-94fb9f7c3845" containerName="registry-server" probeResult="failure" output=< Nov 21 11:28:17 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 11:28:17 crc kubenswrapper[4972]: > Nov 21 11:28:20 crc kubenswrapper[4972]: I1121 11:28:20.759635 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:28:20 crc kubenswrapper[4972]: E1121 11:28:20.762106 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.503425 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d9db6798c-7wmdw"] Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.505694 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.509737 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.519234 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d9db6798c-7wmdw"] Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.598037 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-config\") pod \"dnsmasq-dns-d9db6798c-7wmdw\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.598104 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-ovsdbserver-sb\") pod \"dnsmasq-dns-d9db6798c-7wmdw\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.598143 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-openstack-cell1\") pod \"dnsmasq-dns-d9db6798c-7wmdw\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.598582 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tqnk\" (UniqueName: \"kubernetes.io/projected/b57d4549-e519-4791-97ab-4470b527bb43-kube-api-access-4tqnk\") pod \"dnsmasq-dns-d9db6798c-7wmdw\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.598636 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-dns-svc\") pod \"dnsmasq-dns-d9db6798c-7wmdw\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.598712 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-ovsdbserver-nb\") pod \"dnsmasq-dns-d9db6798c-7wmdw\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.701603 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tqnk\" (UniqueName: \"kubernetes.io/projected/b57d4549-e519-4791-97ab-4470b527bb43-kube-api-access-4tqnk\") pod \"dnsmasq-dns-d9db6798c-7wmdw\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.701651 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-dns-svc\") pod \"dnsmasq-dns-d9db6798c-7wmdw\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " 
pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.701692 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-ovsdbserver-nb\") pod \"dnsmasq-dns-d9db6798c-7wmdw\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.701781 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-config\") pod \"dnsmasq-dns-d9db6798c-7wmdw\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.701807 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-ovsdbserver-sb\") pod \"dnsmasq-dns-d9db6798c-7wmdw\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.701890 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-openstack-cell1\") pod \"dnsmasq-dns-d9db6798c-7wmdw\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.703936 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-ovsdbserver-sb\") pod \"dnsmasq-dns-d9db6798c-7wmdw\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.704011 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-config\") pod \"dnsmasq-dns-d9db6798c-7wmdw\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.704144 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-ovsdbserver-nb\") pod \"dnsmasq-dns-d9db6798c-7wmdw\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.704262 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-dns-svc\") pod \"dnsmasq-dns-d9db6798c-7wmdw\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.704608 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-openstack-cell1\") pod \"dnsmasq-dns-d9db6798c-7wmdw\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.722591 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4tqnk\" (UniqueName: \"kubernetes.io/projected/b57d4549-e519-4791-97ab-4470b527bb43-kube-api-access-4tqnk\") pod \"dnsmasq-dns-d9db6798c-7wmdw\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:23 crc kubenswrapper[4972]: I1121 11:28:23.881626 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:24 crc kubenswrapper[4972]: I1121 11:28:24.393655 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d9db6798c-7wmdw"] Nov 21 11:28:24 crc kubenswrapper[4972]: W1121 11:28:24.404777 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb57d4549_e519_4791_97ab_4470b527bb43.slice/crio-ad5072d632f3309d27515f18ec47687ae9a43b3bec331eec74f5c4398efb6ee9 WatchSource:0}: Error finding container ad5072d632f3309d27515f18ec47687ae9a43b3bec331eec74f5c4398efb6ee9: Status 404 returned error can't find the container with id ad5072d632f3309d27515f18ec47687ae9a43b3bec331eec74f5c4398efb6ee9 Nov 21 11:28:24 crc kubenswrapper[4972]: I1121 11:28:24.917750 4972 generic.go:334] "Generic (PLEG): container finished" podID="b57d4549-e519-4791-97ab-4470b527bb43" containerID="80f067dd8b1bf0c222858618b464f3340e0a38e57d51d0bc9e16af4925553aee" exitCode=0 Nov 21 11:28:24 crc kubenswrapper[4972]: I1121 11:28:24.917884 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" event={"ID":"b57d4549-e519-4791-97ab-4470b527bb43","Type":"ContainerDied","Data":"80f067dd8b1bf0c222858618b464f3340e0a38e57d51d0bc9e16af4925553aee"} Nov 21 11:28:24 crc kubenswrapper[4972]: I1121 11:28:24.918071 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" event={"ID":"b57d4549-e519-4791-97ab-4470b527bb43","Type":"ContainerStarted","Data":"ad5072d632f3309d27515f18ec47687ae9a43b3bec331eec74f5c4398efb6ee9"} Nov 21 11:28:25 crc kubenswrapper[4972]: I1121 11:28:25.938955 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" event={"ID":"b57d4549-e519-4791-97ab-4470b527bb43","Type":"ContainerStarted","Data":"78090c0c8fa1cbe8e2652dd610650635d183a320eb48310cbe6785dcfed5bc28"} Nov 21 11:28:25 crc kubenswrapper[4972]: I1121 11:28:25.939465 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:25 crc kubenswrapper[4972]: I1121 11:28:25.978891 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" podStartSLOduration=2.978868483 podStartE2EDuration="2.978868483s" podCreationTimestamp="2025-11-21 11:28:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:28:25.970080481 +0000 UTC m=+6451.079223039" watchObservedRunningTime="2025-11-21 11:28:25.978868483 +0000 UTC m=+6451.088010991" Nov 21 11:28:26 crc kubenswrapper[4972]: I1121 11:28:26.125086 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rqp4x" Nov 21 11:28:26 crc kubenswrapper[4972]: I1121 11:28:26.183367 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rqp4x" Nov 21 11:28:26 crc kubenswrapper[4972]: I1121 
11:28:26.911575 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rqp4x"] Nov 21 11:28:27 crc kubenswrapper[4972]: I1121 11:28:27.966186 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rqp4x" podUID="b4fbfefc-9515-40a2-b1d4-94fb9f7c3845" containerName="registry-server" containerID="cri-o://66111cce36542a4b791c147fa9959d909ac9a0e49f855ee46d93bcc983b2d004" gracePeriod=2 Nov 21 11:28:28 crc kubenswrapper[4972]: I1121 11:28:28.491048 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rqp4x" Nov 21 11:28:28 crc kubenswrapper[4972]: I1121 11:28:28.616261 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v2n6\" (UniqueName: \"kubernetes.io/projected/b4fbfefc-9515-40a2-b1d4-94fb9f7c3845-kube-api-access-4v2n6\") pod \"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845\" (UID: \"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845\") " Nov 21 11:28:28 crc kubenswrapper[4972]: I1121 11:28:28.616316 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4fbfefc-9515-40a2-b1d4-94fb9f7c3845-catalog-content\") pod \"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845\" (UID: \"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845\") " Nov 21 11:28:28 crc kubenswrapper[4972]: I1121 11:28:28.616556 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4fbfefc-9515-40a2-b1d4-94fb9f7c3845-utilities\") pod \"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845\" (UID: \"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845\") " Nov 21 11:28:28 crc kubenswrapper[4972]: I1121 11:28:28.617524 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4fbfefc-9515-40a2-b1d4-94fb9f7c3845-utilities" (OuterVolumeSpecName: "utilities") pod "b4fbfefc-9515-40a2-b1d4-94fb9f7c3845" (UID: "b4fbfefc-9515-40a2-b1d4-94fb9f7c3845"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:28:28 crc kubenswrapper[4972]: I1121 11:28:28.622237 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4fbfefc-9515-40a2-b1d4-94fb9f7c3845-kube-api-access-4v2n6" (OuterVolumeSpecName: "kube-api-access-4v2n6") pod "b4fbfefc-9515-40a2-b1d4-94fb9f7c3845" (UID: "b4fbfefc-9515-40a2-b1d4-94fb9f7c3845"). InnerVolumeSpecName "kube-api-access-4v2n6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:28:28 crc kubenswrapper[4972]: I1121 11:28:28.708879 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4fbfefc-9515-40a2-b1d4-94fb9f7c3845-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4fbfefc-9515-40a2-b1d4-94fb9f7c3845" (UID: "b4fbfefc-9515-40a2-b1d4-94fb9f7c3845"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:28:28 crc kubenswrapper[4972]: I1121 11:28:28.719120 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4fbfefc-9515-40a2-b1d4-94fb9f7c3845-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 11:28:28 crc kubenswrapper[4972]: I1121 11:28:28.719148 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v2n6\" (UniqueName: \"kubernetes.io/projected/b4fbfefc-9515-40a2-b1d4-94fb9f7c3845-kube-api-access-4v2n6\") on node \"crc\" DevicePath \"\"" Nov 21 11:28:28 crc kubenswrapper[4972]: I1121 11:28:28.719158 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4fbfefc-9515-40a2-b1d4-94fb9f7c3845-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 11:28:28 crc kubenswrapper[4972]: I1121 11:28:28.977694 4972 generic.go:334] "Generic (PLEG): container finished" podID="b4fbfefc-9515-40a2-b1d4-94fb9f7c3845" containerID="66111cce36542a4b791c147fa9959d909ac9a0e49f855ee46d93bcc983b2d004" exitCode=0 Nov 21 11:28:28 crc kubenswrapper[4972]: I1121 11:28:28.977746 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rqp4x" Nov 21 11:28:28 crc kubenswrapper[4972]: I1121 11:28:28.977778 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rqp4x" event={"ID":"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845","Type":"ContainerDied","Data":"66111cce36542a4b791c147fa9959d909ac9a0e49f855ee46d93bcc983b2d004"} Nov 21 11:28:28 crc kubenswrapper[4972]: I1121 11:28:28.977891 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rqp4x" event={"ID":"b4fbfefc-9515-40a2-b1d4-94fb9f7c3845","Type":"ContainerDied","Data":"01fa5802dfda9923dd6251d94201facdc2b264b316e48acfb4128c738499623e"} Nov 21 11:28:28 crc kubenswrapper[4972]: I1121 11:28:28.977938 4972 scope.go:117] "RemoveContainer" containerID="66111cce36542a4b791c147fa9959d909ac9a0e49f855ee46d93bcc983b2d004" Nov 21 11:28:29 crc kubenswrapper[4972]: I1121 11:28:29.016854 4972 scope.go:117] "RemoveContainer" containerID="2c1cd1b4c33013ff36c57a9e32b978ce6f04ce7d159024d7a6757e9ba70bede3" Nov 21 11:28:29 crc kubenswrapper[4972]: I1121 11:28:29.021233 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rqp4x"] Nov 21 11:28:29 crc kubenswrapper[4972]: I1121 11:28:29.031537 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rqp4x"] Nov 21 11:28:29 crc kubenswrapper[4972]: I1121 11:28:29.042533 4972 scope.go:117] "RemoveContainer" containerID="df703f833507f3382ccd85f3b2aa4dd1cdf60ec079a62d1fe50b38abb219f46f" Nov 21 11:28:29 crc kubenswrapper[4972]: I1121 11:28:29.120522 4972 scope.go:117] "RemoveContainer" containerID="66111cce36542a4b791c147fa9959d909ac9a0e49f855ee46d93bcc983b2d004" Nov 21 11:28:29 crc kubenswrapper[4972]: E1121 11:28:29.121046 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66111cce36542a4b791c147fa9959d909ac9a0e49f855ee46d93bcc983b2d004\": container with ID starting with 66111cce36542a4b791c147fa9959d909ac9a0e49f855ee46d93bcc983b2d004 not found: ID does not exist" containerID="66111cce36542a4b791c147fa9959d909ac9a0e49f855ee46d93bcc983b2d004" Nov 21 11:28:29 crc kubenswrapper[4972]: I1121 11:28:29.121144 4972 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66111cce36542a4b791c147fa9959d909ac9a0e49f855ee46d93bcc983b2d004"} err="failed to get container status \"66111cce36542a4b791c147fa9959d909ac9a0e49f855ee46d93bcc983b2d004\": rpc error: code = NotFound desc = could not find container \"66111cce36542a4b791c147fa9959d909ac9a0e49f855ee46d93bcc983b2d004\": container with ID starting with 66111cce36542a4b791c147fa9959d909ac9a0e49f855ee46d93bcc983b2d004 not found: ID does not exist" Nov 21 11:28:29 crc kubenswrapper[4972]: I1121 11:28:29.121182 4972 scope.go:117] "RemoveContainer" containerID="2c1cd1b4c33013ff36c57a9e32b978ce6f04ce7d159024d7a6757e9ba70bede3" Nov 21 11:28:29 crc kubenswrapper[4972]: E1121 11:28:29.121627 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c1cd1b4c33013ff36c57a9e32b978ce6f04ce7d159024d7a6757e9ba70bede3\": container with ID starting with 2c1cd1b4c33013ff36c57a9e32b978ce6f04ce7d159024d7a6757e9ba70bede3 not found: ID does not exist" containerID="2c1cd1b4c33013ff36c57a9e32b978ce6f04ce7d159024d7a6757e9ba70bede3" Nov 21 11:28:29 crc kubenswrapper[4972]: I1121 11:28:29.121770 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c1cd1b4c33013ff36c57a9e32b978ce6f04ce7d159024d7a6757e9ba70bede3"} err="failed to get container status \"2c1cd1b4c33013ff36c57a9e32b978ce6f04ce7d159024d7a6757e9ba70bede3\": rpc error: code = NotFound desc = could not find container \"2c1cd1b4c33013ff36c57a9e32b978ce6f04ce7d159024d7a6757e9ba70bede3\": container with ID starting with 2c1cd1b4c33013ff36c57a9e32b978ce6f04ce7d159024d7a6757e9ba70bede3 not found: ID does not exist" Nov 21 11:28:29 crc kubenswrapper[4972]: I1121 11:28:29.121899 4972 scope.go:117] "RemoveContainer" containerID="df703f833507f3382ccd85f3b2aa4dd1cdf60ec079a62d1fe50b38abb219f46f" Nov 21 11:28:29 crc kubenswrapper[4972]: E1121 11:28:29.122358 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df703f833507f3382ccd85f3b2aa4dd1cdf60ec079a62d1fe50b38abb219f46f\": container with ID starting with df703f833507f3382ccd85f3b2aa4dd1cdf60ec079a62d1fe50b38abb219f46f not found: ID does not exist" containerID="df703f833507f3382ccd85f3b2aa4dd1cdf60ec079a62d1fe50b38abb219f46f" Nov 21 11:28:29 crc kubenswrapper[4972]: I1121 11:28:29.122436 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df703f833507f3382ccd85f3b2aa4dd1cdf60ec079a62d1fe50b38abb219f46f"} err="failed to get container status \"df703f833507f3382ccd85f3b2aa4dd1cdf60ec079a62d1fe50b38abb219f46f\": rpc error: code = NotFound desc = could not find container \"df703f833507f3382ccd85f3b2aa4dd1cdf60ec079a62d1fe50b38abb219f46f\": container with ID starting with df703f833507f3382ccd85f3b2aa4dd1cdf60ec079a62d1fe50b38abb219f46f not found: ID does not exist" Nov 21 11:28:29 crc kubenswrapper[4972]: I1121 11:28:29.774571 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4fbfefc-9515-40a2-b1d4-94fb9f7c3845" path="/var/lib/kubelet/pods/b4fbfefc-9515-40a2-b1d4-94fb9f7c3845/volumes" Nov 21 11:28:33 crc kubenswrapper[4972]: I1121 11:28:33.760427 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:28:33 crc kubenswrapper[4972]: E1121 11:28:33.761081 4972 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:28:33 crc kubenswrapper[4972]: I1121 11:28:33.883066 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:33 crc kubenswrapper[4972]: I1121 11:28:33.967015 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566dd75fd7-4bkbp"] Nov 21 11:28:33 crc kubenswrapper[4972]: I1121 11:28:33.967642 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" podUID="1df96663-72a8-444d-af18-be73e7c8e955" containerName="dnsmasq-dns" containerID="cri-o://b8c5b03939b19c6a81bf6758df74974b3fa11ca7eb18ba48c9ae506da5d2a5e0" gracePeriod=10 Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.163874 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fc98f54cc-v6tgv"] Nov 21 11:28:34 crc kubenswrapper[4972]: E1121 11:28:34.164917 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4fbfefc-9515-40a2-b1d4-94fb9f7c3845" containerName="registry-server" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.164935 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4fbfefc-9515-40a2-b1d4-94fb9f7c3845" containerName="registry-server" Nov 21 11:28:34 crc kubenswrapper[4972]: E1121 11:28:34.164964 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4fbfefc-9515-40a2-b1d4-94fb9f7c3845" containerName="extract-utilities" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.164971 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4fbfefc-9515-40a2-b1d4-94fb9f7c3845" containerName="extract-utilities" Nov 21 11:28:34 crc kubenswrapper[4972]: E1121 11:28:34.165013 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4fbfefc-9515-40a2-b1d4-94fb9f7c3845" containerName="extract-content" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.165020 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4fbfefc-9515-40a2-b1d4-94fb9f7c3845" containerName="extract-content" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.167321 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4fbfefc-9515-40a2-b1d4-94fb9f7c3845" containerName="registry-server" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.188165 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.197130 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fc98f54cc-v6tgv"] Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.362632 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fddcd19-6012-4953-a450-4230dc94d51c-ovsdbserver-nb\") pod \"dnsmasq-dns-5fc98f54cc-v6tgv\" (UID: \"0fddcd19-6012-4953-a450-4230dc94d51c\") " pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.362973 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/0fddcd19-6012-4953-a450-4230dc94d51c-openstack-cell1\") pod \"dnsmasq-dns-5fc98f54cc-v6tgv\" (UID: \"0fddcd19-6012-4953-a450-4230dc94d51c\") " pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.363040 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fddcd19-6012-4953-a450-4230dc94d51c-ovsdbserver-sb\") pod \"dnsmasq-dns-5fc98f54cc-v6tgv\" (UID: \"0fddcd19-6012-4953-a450-4230dc94d51c\") " pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.363061 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq7mc\" (UniqueName: \"kubernetes.io/projected/0fddcd19-6012-4953-a450-4230dc94d51c-kube-api-access-qq7mc\") pod \"dnsmasq-dns-5fc98f54cc-v6tgv\" (UID: \"0fddcd19-6012-4953-a450-4230dc94d51c\") " pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.363091 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fddcd19-6012-4953-a450-4230dc94d51c-dns-svc\") pod \"dnsmasq-dns-5fc98f54cc-v6tgv\" (UID: \"0fddcd19-6012-4953-a450-4230dc94d51c\") " pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.363113 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fddcd19-6012-4953-a450-4230dc94d51c-config\") pod \"dnsmasq-dns-5fc98f54cc-v6tgv\" (UID: \"0fddcd19-6012-4953-a450-4230dc94d51c\") " pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.465419 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/0fddcd19-6012-4953-a450-4230dc94d51c-openstack-cell1\") pod \"dnsmasq-dns-5fc98f54cc-v6tgv\" (UID: \"0fddcd19-6012-4953-a450-4230dc94d51c\") " pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.465527 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fddcd19-6012-4953-a450-4230dc94d51c-ovsdbserver-sb\") pod \"dnsmasq-dns-5fc98f54cc-v6tgv\" (UID: \"0fddcd19-6012-4953-a450-4230dc94d51c\") " pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.465552 4972 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qq7mc\" (UniqueName: \"kubernetes.io/projected/0fddcd19-6012-4953-a450-4230dc94d51c-kube-api-access-qq7mc\") pod \"dnsmasq-dns-5fc98f54cc-v6tgv\" (UID: \"0fddcd19-6012-4953-a450-4230dc94d51c\") " pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.465583 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fddcd19-6012-4953-a450-4230dc94d51c-dns-svc\") pod \"dnsmasq-dns-5fc98f54cc-v6tgv\" (UID: \"0fddcd19-6012-4953-a450-4230dc94d51c\") " pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.465602 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fddcd19-6012-4953-a450-4230dc94d51c-config\") pod \"dnsmasq-dns-5fc98f54cc-v6tgv\" (UID: \"0fddcd19-6012-4953-a450-4230dc94d51c\") " pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.465668 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fddcd19-6012-4953-a450-4230dc94d51c-ovsdbserver-nb\") pod \"dnsmasq-dns-5fc98f54cc-v6tgv\" (UID: \"0fddcd19-6012-4953-a450-4230dc94d51c\") " pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.466552 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/0fddcd19-6012-4953-a450-4230dc94d51c-openstack-cell1\") pod \"dnsmasq-dns-5fc98f54cc-v6tgv\" (UID: \"0fddcd19-6012-4953-a450-4230dc94d51c\") " pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.467098 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fddcd19-6012-4953-a450-4230dc94d51c-ovsdbserver-sb\") pod \"dnsmasq-dns-5fc98f54cc-v6tgv\" (UID: \"0fddcd19-6012-4953-a450-4230dc94d51c\") " pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.467101 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fddcd19-6012-4953-a450-4230dc94d51c-ovsdbserver-nb\") pod \"dnsmasq-dns-5fc98f54cc-v6tgv\" (UID: \"0fddcd19-6012-4953-a450-4230dc94d51c\") " pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.467329 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fddcd19-6012-4953-a450-4230dc94d51c-config\") pod \"dnsmasq-dns-5fc98f54cc-v6tgv\" (UID: \"0fddcd19-6012-4953-a450-4230dc94d51c\") " pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.467520 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fddcd19-6012-4953-a450-4230dc94d51c-dns-svc\") pod \"dnsmasq-dns-5fc98f54cc-v6tgv\" (UID: \"0fddcd19-6012-4953-a450-4230dc94d51c\") " pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.496790 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq7mc\" (UniqueName: 
\"kubernetes.io/projected/0fddcd19-6012-4953-a450-4230dc94d51c-kube-api-access-qq7mc\") pod \"dnsmasq-dns-5fc98f54cc-v6tgv\" (UID: \"0fddcd19-6012-4953-a450-4230dc94d51c\") " pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.513719 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.640420 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.775293 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqgtf\" (UniqueName: \"kubernetes.io/projected/1df96663-72a8-444d-af18-be73e7c8e955-kube-api-access-sqgtf\") pod \"1df96663-72a8-444d-af18-be73e7c8e955\" (UID: \"1df96663-72a8-444d-af18-be73e7c8e955\") " Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.775458 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-ovsdbserver-nb\") pod \"1df96663-72a8-444d-af18-be73e7c8e955\" (UID: \"1df96663-72a8-444d-af18-be73e7c8e955\") " Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.775508 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-dns-svc\") pod \"1df96663-72a8-444d-af18-be73e7c8e955\" (UID: \"1df96663-72a8-444d-af18-be73e7c8e955\") " Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.775636 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-ovsdbserver-sb\") pod \"1df96663-72a8-444d-af18-be73e7c8e955\" (UID: \"1df96663-72a8-444d-af18-be73e7c8e955\") " Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.775661 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-config\") pod \"1df96663-72a8-444d-af18-be73e7c8e955\" (UID: \"1df96663-72a8-444d-af18-be73e7c8e955\") " Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.783489 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1df96663-72a8-444d-af18-be73e7c8e955-kube-api-access-sqgtf" (OuterVolumeSpecName: "kube-api-access-sqgtf") pod "1df96663-72a8-444d-af18-be73e7c8e955" (UID: "1df96663-72a8-444d-af18-be73e7c8e955"). InnerVolumeSpecName "kube-api-access-sqgtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.858610 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1df96663-72a8-444d-af18-be73e7c8e955" (UID: "1df96663-72a8-444d-af18-be73e7c8e955"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.860745 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1df96663-72a8-444d-af18-be73e7c8e955" (UID: "1df96663-72a8-444d-af18-be73e7c8e955"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.877784 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1df96663-72a8-444d-af18-be73e7c8e955" (UID: "1df96663-72a8-444d-af18-be73e7c8e955"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.877942 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqgtf\" (UniqueName: \"kubernetes.io/projected/1df96663-72a8-444d-af18-be73e7c8e955-kube-api-access-sqgtf\") on node \"crc\" DevicePath \"\"" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.877972 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.877981 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.877989 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.892702 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-config" (OuterVolumeSpecName: "config") pod "1df96663-72a8-444d-af18-be73e7c8e955" (UID: "1df96663-72a8-444d-af18-be73e7c8e955"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:28:34 crc kubenswrapper[4972]: I1121 11:28:34.979736 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df96663-72a8-444d-af18-be73e7c8e955-config\") on node \"crc\" DevicePath \"\"" Nov 21 11:28:35 crc kubenswrapper[4972]: I1121 11:28:35.012643 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fc98f54cc-v6tgv"] Nov 21 11:28:35 crc kubenswrapper[4972]: I1121 11:28:35.048170 4972 generic.go:334] "Generic (PLEG): container finished" podID="1df96663-72a8-444d-af18-be73e7c8e955" containerID="b8c5b03939b19c6a81bf6758df74974b3fa11ca7eb18ba48c9ae506da5d2a5e0" exitCode=0 Nov 21 11:28:35 crc kubenswrapper[4972]: I1121 11:28:35.048247 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" event={"ID":"1df96663-72a8-444d-af18-be73e7c8e955","Type":"ContainerDied","Data":"b8c5b03939b19c6a81bf6758df74974b3fa11ca7eb18ba48c9ae506da5d2a5e0"} Nov 21 11:28:35 crc kubenswrapper[4972]: I1121 11:28:35.048277 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" event={"ID":"1df96663-72a8-444d-af18-be73e7c8e955","Type":"ContainerDied","Data":"475a741c8203b766c98bdfe3afa886223d5b1378c0a000049fe798bce8066091"} Nov 21 11:28:35 crc kubenswrapper[4972]: I1121 11:28:35.048298 4972 scope.go:117] "RemoveContainer" containerID="b8c5b03939b19c6a81bf6758df74974b3fa11ca7eb18ba48c9ae506da5d2a5e0" Nov 21 11:28:35 crc kubenswrapper[4972]: I1121 11:28:35.048304 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566dd75fd7-4bkbp" Nov 21 11:28:35 crc kubenswrapper[4972]: I1121 11:28:35.053879 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" event={"ID":"0fddcd19-6012-4953-a450-4230dc94d51c","Type":"ContainerStarted","Data":"2d7737ee0af77fc8dc8c99818e27ad9ac883b2d18c6932548e54660837620d2e"} Nov 21 11:28:35 crc kubenswrapper[4972]: I1121 11:28:35.083942 4972 scope.go:117] "RemoveContainer" containerID="75b3112c94a1f4261d88b2d8449e1fac880b940f34d396b6c08cce2cb37f4890" Nov 21 11:28:35 crc kubenswrapper[4972]: I1121 11:28:35.095990 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566dd75fd7-4bkbp"] Nov 21 11:28:35 crc kubenswrapper[4972]: I1121 11:28:35.106023 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566dd75fd7-4bkbp"] Nov 21 11:28:35 crc kubenswrapper[4972]: I1121 11:28:35.156856 4972 scope.go:117] "RemoveContainer" containerID="b8c5b03939b19c6a81bf6758df74974b3fa11ca7eb18ba48c9ae506da5d2a5e0" Nov 21 11:28:35 crc kubenswrapper[4972]: E1121 11:28:35.157421 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8c5b03939b19c6a81bf6758df74974b3fa11ca7eb18ba48c9ae506da5d2a5e0\": container with ID starting with b8c5b03939b19c6a81bf6758df74974b3fa11ca7eb18ba48c9ae506da5d2a5e0 not found: ID does not exist" containerID="b8c5b03939b19c6a81bf6758df74974b3fa11ca7eb18ba48c9ae506da5d2a5e0" Nov 21 11:28:35 crc kubenswrapper[4972]: I1121 11:28:35.157466 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8c5b03939b19c6a81bf6758df74974b3fa11ca7eb18ba48c9ae506da5d2a5e0"} err="failed to get container status \"b8c5b03939b19c6a81bf6758df74974b3fa11ca7eb18ba48c9ae506da5d2a5e0\": rpc error: code = NotFound 
desc = could not find container \"b8c5b03939b19c6a81bf6758df74974b3fa11ca7eb18ba48c9ae506da5d2a5e0\": container with ID starting with b8c5b03939b19c6a81bf6758df74974b3fa11ca7eb18ba48c9ae506da5d2a5e0 not found: ID does not exist" Nov 21 11:28:35 crc kubenswrapper[4972]: I1121 11:28:35.157488 4972 scope.go:117] "RemoveContainer" containerID="75b3112c94a1f4261d88b2d8449e1fac880b940f34d396b6c08cce2cb37f4890" Nov 21 11:28:35 crc kubenswrapper[4972]: E1121 11:28:35.158364 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75b3112c94a1f4261d88b2d8449e1fac880b940f34d396b6c08cce2cb37f4890\": container with ID starting with 75b3112c94a1f4261d88b2d8449e1fac880b940f34d396b6c08cce2cb37f4890 not found: ID does not exist" containerID="75b3112c94a1f4261d88b2d8449e1fac880b940f34d396b6c08cce2cb37f4890" Nov 21 11:28:35 crc kubenswrapper[4972]: I1121 11:28:35.158409 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75b3112c94a1f4261d88b2d8449e1fac880b940f34d396b6c08cce2cb37f4890"} err="failed to get container status \"75b3112c94a1f4261d88b2d8449e1fac880b940f34d396b6c08cce2cb37f4890\": rpc error: code = NotFound desc = could not find container \"75b3112c94a1f4261d88b2d8449e1fac880b940f34d396b6c08cce2cb37f4890\": container with ID starting with 75b3112c94a1f4261d88b2d8449e1fac880b940f34d396b6c08cce2cb37f4890 not found: ID does not exist" Nov 21 11:28:35 crc kubenswrapper[4972]: I1121 11:28:35.776232 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1df96663-72a8-444d-af18-be73e7c8e955" path="/var/lib/kubelet/pods/1df96663-72a8-444d-af18-be73e7c8e955/volumes" Nov 21 11:28:36 crc kubenswrapper[4972]: I1121 11:28:36.069010 4972 generic.go:334] "Generic (PLEG): container finished" podID="0fddcd19-6012-4953-a450-4230dc94d51c" containerID="4fd8d0aa4e9e9e23b904b0106beb5994e0ef8906202ace054ad150e2172095ac" exitCode=0 Nov 21 11:28:36 crc kubenswrapper[4972]: I1121 11:28:36.069124 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" event={"ID":"0fddcd19-6012-4953-a450-4230dc94d51c","Type":"ContainerDied","Data":"4fd8d0aa4e9e9e23b904b0106beb5994e0ef8906202ace054ad150e2172095ac"} Nov 21 11:28:37 crc kubenswrapper[4972]: I1121 11:28:37.085426 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" event={"ID":"0fddcd19-6012-4953-a450-4230dc94d51c","Type":"ContainerStarted","Data":"1862a34eb322366e7aaf27e0bb1f67143c2b44a09bec80a71ee329169994d9ce"} Nov 21 11:28:37 crc kubenswrapper[4972]: I1121 11:28:37.085978 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:37 crc kubenswrapper[4972]: I1121 11:28:37.112185 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" podStartSLOduration=3.112170383 podStartE2EDuration="3.112170383s" podCreationTimestamp="2025-11-21 11:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:28:37.111355832 +0000 UTC m=+6462.220498410" watchObservedRunningTime="2025-11-21 11:28:37.112170383 +0000 UTC m=+6462.221312871" Nov 21 11:28:44 crc kubenswrapper[4972]: I1121 11:28:44.515116 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fc98f54cc-v6tgv" Nov 21 11:28:44 crc 
kubenswrapper[4972]: I1121 11:28:44.617771 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d9db6798c-7wmdw"] Nov 21 11:28:44 crc kubenswrapper[4972]: I1121 11:28:44.618205 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" podUID="b57d4549-e519-4791-97ab-4470b527bb43" containerName="dnsmasq-dns" containerID="cri-o://78090c0c8fa1cbe8e2652dd610650635d183a320eb48310cbe6785dcfed5bc28" gracePeriod=10 Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.187039 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.191536 4972 generic.go:334] "Generic (PLEG): container finished" podID="b57d4549-e519-4791-97ab-4470b527bb43" containerID="78090c0c8fa1cbe8e2652dd610650635d183a320eb48310cbe6785dcfed5bc28" exitCode=0 Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.191565 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" event={"ID":"b57d4549-e519-4791-97ab-4470b527bb43","Type":"ContainerDied","Data":"78090c0c8fa1cbe8e2652dd610650635d183a320eb48310cbe6785dcfed5bc28"} Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.191586 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" event={"ID":"b57d4549-e519-4791-97ab-4470b527bb43","Type":"ContainerDied","Data":"ad5072d632f3309d27515f18ec47687ae9a43b3bec331eec74f5c4398efb6ee9"} Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.191602 4972 scope.go:117] "RemoveContainer" containerID="78090c0c8fa1cbe8e2652dd610650635d183a320eb48310cbe6785dcfed5bc28" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.191670 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d9db6798c-7wmdw" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.231129 4972 scope.go:117] "RemoveContainer" containerID="80f067dd8b1bf0c222858618b464f3340e0a38e57d51d0bc9e16af4925553aee" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.254411 4972 scope.go:117] "RemoveContainer" containerID="78090c0c8fa1cbe8e2652dd610650635d183a320eb48310cbe6785dcfed5bc28" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.255216 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-openstack-cell1\") pod \"b57d4549-e519-4791-97ab-4470b527bb43\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.255305 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-ovsdbserver-nb\") pod \"b57d4549-e519-4791-97ab-4470b527bb43\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.255356 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-config\") pod \"b57d4549-e519-4791-97ab-4470b527bb43\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.255478 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tqnk\" (UniqueName: \"kubernetes.io/projected/b57d4549-e519-4791-97ab-4470b527bb43-kube-api-access-4tqnk\") pod \"b57d4549-e519-4791-97ab-4470b527bb43\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.255562 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-dns-svc\") pod \"b57d4549-e519-4791-97ab-4470b527bb43\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.255582 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-ovsdbserver-sb\") pod \"b57d4549-e519-4791-97ab-4470b527bb43\" (UID: \"b57d4549-e519-4791-97ab-4470b527bb43\") " Nov 21 11:28:45 crc kubenswrapper[4972]: E1121 11:28:45.256339 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78090c0c8fa1cbe8e2652dd610650635d183a320eb48310cbe6785dcfed5bc28\": container with ID starting with 78090c0c8fa1cbe8e2652dd610650635d183a320eb48310cbe6785dcfed5bc28 not found: ID does not exist" containerID="78090c0c8fa1cbe8e2652dd610650635d183a320eb48310cbe6785dcfed5bc28" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.256373 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78090c0c8fa1cbe8e2652dd610650635d183a320eb48310cbe6785dcfed5bc28"} err="failed to get container status \"78090c0c8fa1cbe8e2652dd610650635d183a320eb48310cbe6785dcfed5bc28\": rpc error: code = NotFound desc = could not find container \"78090c0c8fa1cbe8e2652dd610650635d183a320eb48310cbe6785dcfed5bc28\": container with ID starting with 
78090c0c8fa1cbe8e2652dd610650635d183a320eb48310cbe6785dcfed5bc28 not found: ID does not exist" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.256399 4972 scope.go:117] "RemoveContainer" containerID="80f067dd8b1bf0c222858618b464f3340e0a38e57d51d0bc9e16af4925553aee" Nov 21 11:28:45 crc kubenswrapper[4972]: E1121 11:28:45.257104 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80f067dd8b1bf0c222858618b464f3340e0a38e57d51d0bc9e16af4925553aee\": container with ID starting with 80f067dd8b1bf0c222858618b464f3340e0a38e57d51d0bc9e16af4925553aee not found: ID does not exist" containerID="80f067dd8b1bf0c222858618b464f3340e0a38e57d51d0bc9e16af4925553aee" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.257154 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f067dd8b1bf0c222858618b464f3340e0a38e57d51d0bc9e16af4925553aee"} err="failed to get container status \"80f067dd8b1bf0c222858618b464f3340e0a38e57d51d0bc9e16af4925553aee\": rpc error: code = NotFound desc = could not find container \"80f067dd8b1bf0c222858618b464f3340e0a38e57d51d0bc9e16af4925553aee\": container with ID starting with 80f067dd8b1bf0c222858618b464f3340e0a38e57d51d0bc9e16af4925553aee not found: ID does not exist" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.278070 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b57d4549-e519-4791-97ab-4470b527bb43-kube-api-access-4tqnk" (OuterVolumeSpecName: "kube-api-access-4tqnk") pod "b57d4549-e519-4791-97ab-4470b527bb43" (UID: "b57d4549-e519-4791-97ab-4470b527bb43"). InnerVolumeSpecName "kube-api-access-4tqnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.320572 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b57d4549-e519-4791-97ab-4470b527bb43" (UID: "b57d4549-e519-4791-97ab-4470b527bb43"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.332151 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b57d4549-e519-4791-97ab-4470b527bb43" (UID: "b57d4549-e519-4791-97ab-4470b527bb43"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.356242 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "b57d4549-e519-4791-97ab-4470b527bb43" (UID: "b57d4549-e519-4791-97ab-4470b527bb43"). InnerVolumeSpecName "openstack-cell1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.357627 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tqnk\" (UniqueName: \"kubernetes.io/projected/b57d4549-e519-4791-97ab-4470b527bb43-kube-api-access-4tqnk\") on node \"crc\" DevicePath \"\"" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.357649 4972 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.357659 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.357671 4972 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-openstack-cell1\") on node \"crc\" DevicePath \"\"" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.358041 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-config" (OuterVolumeSpecName: "config") pod "b57d4549-e519-4791-97ab-4470b527bb43" (UID: "b57d4549-e519-4791-97ab-4470b527bb43"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.363320 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b57d4549-e519-4791-97ab-4470b527bb43" (UID: "b57d4549-e519-4791-97ab-4470b527bb43"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.460458 4972 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.460516 4972 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b57d4549-e519-4791-97ab-4470b527bb43-config\") on node \"crc\" DevicePath \"\"" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.543130 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d9db6798c-7wmdw"] Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.558227 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d9db6798c-7wmdw"] Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.773742 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:28:45 crc kubenswrapper[4972]: E1121 11:28:45.774377 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:28:45 crc kubenswrapper[4972]: I1121 11:28:45.778710 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b57d4549-e519-4791-97ab-4470b527bb43" path="/var/lib/kubelet/pods/b57d4549-e519-4791-97ab-4470b527bb43/volumes" Nov 21 11:28:54 crc kubenswrapper[4972]: I1121 11:28:54.903553 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9"] Nov 21 11:28:54 crc kubenswrapper[4972]: E1121 11:28:54.904670 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b57d4549-e519-4791-97ab-4470b527bb43" containerName="init" Nov 21 11:28:54 crc kubenswrapper[4972]: I1121 11:28:54.904687 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57d4549-e519-4791-97ab-4470b527bb43" containerName="init" Nov 21 11:28:54 crc kubenswrapper[4972]: E1121 11:28:54.904705 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b57d4549-e519-4791-97ab-4470b527bb43" containerName="dnsmasq-dns" Nov 21 11:28:54 crc kubenswrapper[4972]: I1121 11:28:54.904716 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57d4549-e519-4791-97ab-4470b527bb43" containerName="dnsmasq-dns" Nov 21 11:28:54 crc kubenswrapper[4972]: E1121 11:28:54.904735 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1df96663-72a8-444d-af18-be73e7c8e955" containerName="init" Nov 21 11:28:54 crc kubenswrapper[4972]: I1121 11:28:54.904743 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1df96663-72a8-444d-af18-be73e7c8e955" containerName="init" Nov 21 11:28:54 crc kubenswrapper[4972]: E1121 11:28:54.904772 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1df96663-72a8-444d-af18-be73e7c8e955" containerName="dnsmasq-dns" Nov 21 11:28:54 crc kubenswrapper[4972]: I1121 11:28:54.904780 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1df96663-72a8-444d-af18-be73e7c8e955" containerName="dnsmasq-dns" Nov 21 
11:28:54 crc kubenswrapper[4972]: I1121 11:28:54.905072 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="b57d4549-e519-4791-97ab-4470b527bb43" containerName="dnsmasq-dns" Nov 21 11:28:54 crc kubenswrapper[4972]: I1121 11:28:54.905104 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="1df96663-72a8-444d-af18-be73e7c8e955" containerName="dnsmasq-dns" Nov 21 11:28:54 crc kubenswrapper[4972]: I1121 11:28:54.906103 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" Nov 21 11:28:54 crc kubenswrapper[4972]: I1121 11:28:54.910056 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 11:28:54 crc kubenswrapper[4972]: I1121 11:28:54.910154 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 21 11:28:54 crc kubenswrapper[4972]: I1121 11:28:54.910633 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 21 11:28:54 crc kubenswrapper[4972]: I1121 11:28:54.910909 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g4l5l" Nov 21 11:28:54 crc kubenswrapper[4972]: I1121 11:28:54.951879 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9"] Nov 21 11:28:55 crc kubenswrapper[4972]: I1121 11:28:55.013502 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9\" (UID: \"ec6ae156-2743-48fa-afda-8fcce08e9588\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" Nov 21 11:28:55 crc kubenswrapper[4972]: I1121 11:28:55.013618 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9\" (UID: \"ec6ae156-2743-48fa-afda-8fcce08e9588\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" Nov 21 11:28:55 crc kubenswrapper[4972]: I1121 11:28:55.013716 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9\" (UID: \"ec6ae156-2743-48fa-afda-8fcce08e9588\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" Nov 21 11:28:55 crc kubenswrapper[4972]: I1121 11:28:55.013884 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9\" (UID: \"ec6ae156-2743-48fa-afda-8fcce08e9588\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" Nov 21 11:28:55 crc kubenswrapper[4972]: I1121 11:28:55.013926 4972 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6w92\" (UniqueName: \"kubernetes.io/projected/ec6ae156-2743-48fa-afda-8fcce08e9588-kube-api-access-h6w92\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9\" (UID: \"ec6ae156-2743-48fa-afda-8fcce08e9588\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" Nov 21 11:28:55 crc kubenswrapper[4972]: I1121 11:28:55.116555 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9\" (UID: \"ec6ae156-2743-48fa-afda-8fcce08e9588\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" Nov 21 11:28:55 crc kubenswrapper[4972]: I1121 11:28:55.116916 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9\" (UID: \"ec6ae156-2743-48fa-afda-8fcce08e9588\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" Nov 21 11:28:55 crc kubenswrapper[4972]: I1121 11:28:55.116983 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6w92\" (UniqueName: \"kubernetes.io/projected/ec6ae156-2743-48fa-afda-8fcce08e9588-kube-api-access-h6w92\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9\" (UID: \"ec6ae156-2743-48fa-afda-8fcce08e9588\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" Nov 21 11:28:55 crc kubenswrapper[4972]: I1121 11:28:55.117109 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9\" (UID: \"ec6ae156-2743-48fa-afda-8fcce08e9588\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" Nov 21 11:28:55 crc kubenswrapper[4972]: I1121 11:28:55.117228 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9\" (UID: \"ec6ae156-2743-48fa-afda-8fcce08e9588\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" Nov 21 11:28:55 crc kubenswrapper[4972]: I1121 11:28:55.125821 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9\" (UID: \"ec6ae156-2743-48fa-afda-8fcce08e9588\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" Nov 21 11:28:55 crc kubenswrapper[4972]: I1121 11:28:55.126443 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9\" (UID: 
\"ec6ae156-2743-48fa-afda-8fcce08e9588\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" Nov 21 11:28:55 crc kubenswrapper[4972]: I1121 11:28:55.128239 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9\" (UID: \"ec6ae156-2743-48fa-afda-8fcce08e9588\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" Nov 21 11:28:55 crc kubenswrapper[4972]: I1121 11:28:55.129176 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9\" (UID: \"ec6ae156-2743-48fa-afda-8fcce08e9588\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" Nov 21 11:28:55 crc kubenswrapper[4972]: I1121 11:28:55.144341 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6w92\" (UniqueName: \"kubernetes.io/projected/ec6ae156-2743-48fa-afda-8fcce08e9588-kube-api-access-h6w92\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9\" (UID: \"ec6ae156-2743-48fa-afda-8fcce08e9588\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" Nov 21 11:28:55 crc kubenswrapper[4972]: I1121 11:28:55.232697 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" Nov 21 11:28:55 crc kubenswrapper[4972]: I1121 11:28:55.895478 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9"] Nov 21 11:28:56 crc kubenswrapper[4972]: I1121 11:28:56.371558 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" event={"ID":"ec6ae156-2743-48fa-afda-8fcce08e9588","Type":"ContainerStarted","Data":"c0f839836afa5cfde7eee40de2972d42abc911a25909e890bb73d3eda85467ba"} Nov 21 11:28:56 crc kubenswrapper[4972]: I1121 11:28:56.759410 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:28:56 crc kubenswrapper[4972]: E1121 11:28:56.759708 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:29:04 crc kubenswrapper[4972]: I1121 11:29:04.290781 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ctxfc"] Nov 21 11:29:04 crc kubenswrapper[4972]: I1121 11:29:04.326695 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctxfc"] Nov 21 11:29:04 crc kubenswrapper[4972]: I1121 11:29:04.326891 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ctxfc" Nov 21 11:29:04 crc kubenswrapper[4972]: I1121 11:29:04.384327 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4517ce1a-539c-4dcd-a541-021c6caa8b39-utilities\") pod \"redhat-marketplace-ctxfc\" (UID: \"4517ce1a-539c-4dcd-a541-021c6caa8b39\") " pod="openshift-marketplace/redhat-marketplace-ctxfc" Nov 21 11:29:04 crc kubenswrapper[4972]: I1121 11:29:04.384677 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4517ce1a-539c-4dcd-a541-021c6caa8b39-catalog-content\") pod \"redhat-marketplace-ctxfc\" (UID: \"4517ce1a-539c-4dcd-a541-021c6caa8b39\") " pod="openshift-marketplace/redhat-marketplace-ctxfc" Nov 21 11:29:04 crc kubenswrapper[4972]: I1121 11:29:04.384846 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzj5j\" (UniqueName: \"kubernetes.io/projected/4517ce1a-539c-4dcd-a541-021c6caa8b39-kube-api-access-fzj5j\") pod \"redhat-marketplace-ctxfc\" (UID: \"4517ce1a-539c-4dcd-a541-021c6caa8b39\") " pod="openshift-marketplace/redhat-marketplace-ctxfc" Nov 21 11:29:04 crc kubenswrapper[4972]: I1121 11:29:04.489066 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4517ce1a-539c-4dcd-a541-021c6caa8b39-utilities\") pod \"redhat-marketplace-ctxfc\" (UID: \"4517ce1a-539c-4dcd-a541-021c6caa8b39\") " pod="openshift-marketplace/redhat-marketplace-ctxfc" Nov 21 11:29:04 crc kubenswrapper[4972]: I1121 11:29:04.489218 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4517ce1a-539c-4dcd-a541-021c6caa8b39-catalog-content\") pod \"redhat-marketplace-ctxfc\" (UID: \"4517ce1a-539c-4dcd-a541-021c6caa8b39\") " pod="openshift-marketplace/redhat-marketplace-ctxfc" Nov 21 11:29:04 crc kubenswrapper[4972]: I1121 11:29:04.489239 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzj5j\" (UniqueName: \"kubernetes.io/projected/4517ce1a-539c-4dcd-a541-021c6caa8b39-kube-api-access-fzj5j\") pod \"redhat-marketplace-ctxfc\" (UID: \"4517ce1a-539c-4dcd-a541-021c6caa8b39\") " pod="openshift-marketplace/redhat-marketplace-ctxfc" Nov 21 11:29:04 crc kubenswrapper[4972]: I1121 11:29:04.489668 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4517ce1a-539c-4dcd-a541-021c6caa8b39-utilities\") pod \"redhat-marketplace-ctxfc\" (UID: \"4517ce1a-539c-4dcd-a541-021c6caa8b39\") " pod="openshift-marketplace/redhat-marketplace-ctxfc" Nov 21 11:29:04 crc kubenswrapper[4972]: I1121 11:29:04.489995 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4517ce1a-539c-4dcd-a541-021c6caa8b39-catalog-content\") pod \"redhat-marketplace-ctxfc\" (UID: \"4517ce1a-539c-4dcd-a541-021c6caa8b39\") " pod="openshift-marketplace/redhat-marketplace-ctxfc" Nov 21 11:29:04 crc kubenswrapper[4972]: I1121 11:29:04.510095 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzj5j\" (UniqueName: \"kubernetes.io/projected/4517ce1a-539c-4dcd-a541-021c6caa8b39-kube-api-access-fzj5j\") pod 
\"redhat-marketplace-ctxfc\" (UID: \"4517ce1a-539c-4dcd-a541-021c6caa8b39\") " pod="openshift-marketplace/redhat-marketplace-ctxfc" Nov 21 11:29:04 crc kubenswrapper[4972]: I1121 11:29:04.660793 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ctxfc" Nov 21 11:29:05 crc kubenswrapper[4972]: I1121 11:29:05.304139 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 11:29:05 crc kubenswrapper[4972]: I1121 11:29:05.695268 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctxfc"] Nov 21 11:29:06 crc kubenswrapper[4972]: I1121 11:29:06.482121 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" event={"ID":"ec6ae156-2743-48fa-afda-8fcce08e9588","Type":"ContainerStarted","Data":"51f2830085460332ea8dfb0692dfe3892041d246ee18f1bdb2ea21ba875e6028"} Nov 21 11:29:06 crc kubenswrapper[4972]: I1121 11:29:06.486344 4972 generic.go:334] "Generic (PLEG): container finished" podID="4517ce1a-539c-4dcd-a541-021c6caa8b39" containerID="18469f9fcc36b505f5c800802943409102e51f9e5214d669a57ee49159dc6872" exitCode=0 Nov 21 11:29:06 crc kubenswrapper[4972]: I1121 11:29:06.486391 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctxfc" event={"ID":"4517ce1a-539c-4dcd-a541-021c6caa8b39","Type":"ContainerDied","Data":"18469f9fcc36b505f5c800802943409102e51f9e5214d669a57ee49159dc6872"} Nov 21 11:29:06 crc kubenswrapper[4972]: I1121 11:29:06.486418 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctxfc" event={"ID":"4517ce1a-539c-4dcd-a541-021c6caa8b39","Type":"ContainerStarted","Data":"a668bd202500e74e8ba372b0455c28ab5a458de159941b43dfddeb189931e0ee"} Nov 21 11:29:06 crc kubenswrapper[4972]: I1121 11:29:06.518678 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" podStartSLOduration=3.116119539 podStartE2EDuration="12.518660656s" podCreationTimestamp="2025-11-21 11:28:54 +0000 UTC" firstStartedPulling="2025-11-21 11:28:55.897381775 +0000 UTC m=+6481.006524273" lastFinishedPulling="2025-11-21 11:29:05.299922892 +0000 UTC m=+6490.409065390" observedRunningTime="2025-11-21 11:29:06.511730214 +0000 UTC m=+6491.620872712" watchObservedRunningTime="2025-11-21 11:29:06.518660656 +0000 UTC m=+6491.627803154" Nov 21 11:29:08 crc kubenswrapper[4972]: I1121 11:29:08.512796 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctxfc" event={"ID":"4517ce1a-539c-4dcd-a541-021c6caa8b39","Type":"ContainerStarted","Data":"ffc5f35eb895272f2f8de5f1a35ce02a7ddb20c99ed76a57b5cafd195c964fb6"} Nov 21 11:29:09 crc kubenswrapper[4972]: I1121 11:29:09.759934 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:29:09 crc kubenswrapper[4972]: E1121 11:29:09.760504 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" 
podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:29:10 crc kubenswrapper[4972]: I1121 11:29:10.545114 4972 generic.go:334] "Generic (PLEG): container finished" podID="4517ce1a-539c-4dcd-a541-021c6caa8b39" containerID="ffc5f35eb895272f2f8de5f1a35ce02a7ddb20c99ed76a57b5cafd195c964fb6" exitCode=0 Nov 21 11:29:10 crc kubenswrapper[4972]: I1121 11:29:10.545155 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctxfc" event={"ID":"4517ce1a-539c-4dcd-a541-021c6caa8b39","Type":"ContainerDied","Data":"ffc5f35eb895272f2f8de5f1a35ce02a7ddb20c99ed76a57b5cafd195c964fb6"} Nov 21 11:29:11 crc kubenswrapper[4972]: I1121 11:29:11.561292 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctxfc" event={"ID":"4517ce1a-539c-4dcd-a541-021c6caa8b39","Type":"ContainerStarted","Data":"2ff5ab256dc57d80559a264d992aa1502a51a5770fdba7e3d3a8cd0adcb0e4fb"} Nov 21 11:29:11 crc kubenswrapper[4972]: I1121 11:29:11.596695 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ctxfc" podStartSLOduration=2.990576209 podStartE2EDuration="7.596673587s" podCreationTimestamp="2025-11-21 11:29:04 +0000 UTC" firstStartedPulling="2025-11-21 11:29:06.489466138 +0000 UTC m=+6491.598608636" lastFinishedPulling="2025-11-21 11:29:11.095563506 +0000 UTC m=+6496.204706014" observedRunningTime="2025-11-21 11:29:11.584967579 +0000 UTC m=+6496.694110127" watchObservedRunningTime="2025-11-21 11:29:11.596673587 +0000 UTC m=+6496.705816095" Nov 21 11:29:14 crc kubenswrapper[4972]: I1121 11:29:14.661749 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ctxfc" Nov 21 11:29:14 crc kubenswrapper[4972]: I1121 11:29:14.662443 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ctxfc" Nov 21 11:29:14 crc kubenswrapper[4972]: I1121 11:29:14.733224 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ctxfc" Nov 21 11:29:19 crc kubenswrapper[4972]: I1121 11:29:19.665308 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec6ae156-2743-48fa-afda-8fcce08e9588" containerID="51f2830085460332ea8dfb0692dfe3892041d246ee18f1bdb2ea21ba875e6028" exitCode=0 Nov 21 11:29:19 crc kubenswrapper[4972]: I1121 11:29:19.665550 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" event={"ID":"ec6ae156-2743-48fa-afda-8fcce08e9588","Type":"ContainerDied","Data":"51f2830085460332ea8dfb0692dfe3892041d246ee18f1bdb2ea21ba875e6028"} Nov 21 11:29:21 crc kubenswrapper[4972]: I1121 11:29:21.301077 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" Nov 21 11:29:21 crc kubenswrapper[4972]: I1121 11:29:21.396677 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6w92\" (UniqueName: \"kubernetes.io/projected/ec6ae156-2743-48fa-afda-8fcce08e9588-kube-api-access-h6w92\") pod \"ec6ae156-2743-48fa-afda-8fcce08e9588\" (UID: \"ec6ae156-2743-48fa-afda-8fcce08e9588\") " Nov 21 11:29:21 crc kubenswrapper[4972]: I1121 11:29:21.396750 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-inventory\") pod \"ec6ae156-2743-48fa-afda-8fcce08e9588\" (UID: \"ec6ae156-2743-48fa-afda-8fcce08e9588\") " Nov 21 11:29:21 crc kubenswrapper[4972]: I1121 11:29:21.396925 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-ssh-key\") pod \"ec6ae156-2743-48fa-afda-8fcce08e9588\" (UID: \"ec6ae156-2743-48fa-afda-8fcce08e9588\") " Nov 21 11:29:21 crc kubenswrapper[4972]: I1121 11:29:21.396983 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-ceph\") pod \"ec6ae156-2743-48fa-afda-8fcce08e9588\" (UID: \"ec6ae156-2743-48fa-afda-8fcce08e9588\") " Nov 21 11:29:21 crc kubenswrapper[4972]: I1121 11:29:21.397009 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-pre-adoption-validation-combined-ca-bundle\") pod \"ec6ae156-2743-48fa-afda-8fcce08e9588\" (UID: \"ec6ae156-2743-48fa-afda-8fcce08e9588\") " Nov 21 11:29:21 crc kubenswrapper[4972]: I1121 11:29:21.404441 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-ceph" (OuterVolumeSpecName: "ceph") pod "ec6ae156-2743-48fa-afda-8fcce08e9588" (UID: "ec6ae156-2743-48fa-afda-8fcce08e9588"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:29:21 crc kubenswrapper[4972]: I1121 11:29:21.404674 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "ec6ae156-2743-48fa-afda-8fcce08e9588" (UID: "ec6ae156-2743-48fa-afda-8fcce08e9588"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:29:21 crc kubenswrapper[4972]: I1121 11:29:21.414325 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec6ae156-2743-48fa-afda-8fcce08e9588-kube-api-access-h6w92" (OuterVolumeSpecName: "kube-api-access-h6w92") pod "ec6ae156-2743-48fa-afda-8fcce08e9588" (UID: "ec6ae156-2743-48fa-afda-8fcce08e9588"). InnerVolumeSpecName "kube-api-access-h6w92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:29:21 crc kubenswrapper[4972]: I1121 11:29:21.440026 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-inventory" (OuterVolumeSpecName: "inventory") pod "ec6ae156-2743-48fa-afda-8fcce08e9588" (UID: "ec6ae156-2743-48fa-afda-8fcce08e9588"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:29:21 crc kubenswrapper[4972]: I1121 11:29:21.451299 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ec6ae156-2743-48fa-afda-8fcce08e9588" (UID: "ec6ae156-2743-48fa-afda-8fcce08e9588"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:29:21 crc kubenswrapper[4972]: I1121 11:29:21.500523 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6w92\" (UniqueName: \"kubernetes.io/projected/ec6ae156-2743-48fa-afda-8fcce08e9588-kube-api-access-h6w92\") on node \"crc\" DevicePath \"\"" Nov 21 11:29:21 crc kubenswrapper[4972]: I1121 11:29:21.500557 4972 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 11:29:21 crc kubenswrapper[4972]: I1121 11:29:21.500566 4972 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 11:29:21 crc kubenswrapper[4972]: I1121 11:29:21.500577 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 11:29:21 crc kubenswrapper[4972]: I1121 11:29:21.500585 4972 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6ae156-2743-48fa-afda-8fcce08e9588-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:29:21 crc kubenswrapper[4972]: I1121 11:29:21.699728 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" event={"ID":"ec6ae156-2743-48fa-afda-8fcce08e9588","Type":"ContainerDied","Data":"c0f839836afa5cfde7eee40de2972d42abc911a25909e890bb73d3eda85467ba"} Nov 21 11:29:21 crc kubenswrapper[4972]: I1121 11:29:21.699770 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0f839836afa5cfde7eee40de2972d42abc911a25909e890bb73d3eda85467ba" Nov 21 11:29:21 crc kubenswrapper[4972]: I1121 11:29:21.699777 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9" Nov 21 11:29:22 crc kubenswrapper[4972]: I1121 11:29:22.760160 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:29:22 crc kubenswrapper[4972]: E1121 11:29:22.760685 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:29:24 crc kubenswrapper[4972]: I1121 11:29:24.730260 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ctxfc" Nov 21 11:29:24 crc kubenswrapper[4972]: I1121 11:29:24.800467 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctxfc"] Nov 21 11:29:25 crc kubenswrapper[4972]: I1121 11:29:25.752949 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ctxfc" podUID="4517ce1a-539c-4dcd-a541-021c6caa8b39" containerName="registry-server" containerID="cri-o://2ff5ab256dc57d80559a264d992aa1502a51a5770fdba7e3d3a8cd0adcb0e4fb" gracePeriod=2 Nov 21 11:29:26 crc kubenswrapper[4972]: I1121 11:29:26.772376 4972 generic.go:334] "Generic (PLEG): container finished" podID="4517ce1a-539c-4dcd-a541-021c6caa8b39" containerID="2ff5ab256dc57d80559a264d992aa1502a51a5770fdba7e3d3a8cd0adcb0e4fb" exitCode=0 Nov 21 11:29:26 crc kubenswrapper[4972]: I1121 11:29:26.772461 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctxfc" event={"ID":"4517ce1a-539c-4dcd-a541-021c6caa8b39","Type":"ContainerDied","Data":"2ff5ab256dc57d80559a264d992aa1502a51a5770fdba7e3d3a8cd0adcb0e4fb"} Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.434597 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ctxfc" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.556850 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4517ce1a-539c-4dcd-a541-021c6caa8b39-catalog-content\") pod \"4517ce1a-539c-4dcd-a541-021c6caa8b39\" (UID: \"4517ce1a-539c-4dcd-a541-021c6caa8b39\") " Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.557094 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzj5j\" (UniqueName: \"kubernetes.io/projected/4517ce1a-539c-4dcd-a541-021c6caa8b39-kube-api-access-fzj5j\") pod \"4517ce1a-539c-4dcd-a541-021c6caa8b39\" (UID: \"4517ce1a-539c-4dcd-a541-021c6caa8b39\") " Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.557281 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4517ce1a-539c-4dcd-a541-021c6caa8b39-utilities\") pod \"4517ce1a-539c-4dcd-a541-021c6caa8b39\" (UID: \"4517ce1a-539c-4dcd-a541-021c6caa8b39\") " Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.557993 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4517ce1a-539c-4dcd-a541-021c6caa8b39-utilities" (OuterVolumeSpecName: "utilities") pod "4517ce1a-539c-4dcd-a541-021c6caa8b39" (UID: "4517ce1a-539c-4dcd-a541-021c6caa8b39"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.558516 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4517ce1a-539c-4dcd-a541-021c6caa8b39-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.563067 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4517ce1a-539c-4dcd-a541-021c6caa8b39-kube-api-access-fzj5j" (OuterVolumeSpecName: "kube-api-access-fzj5j") pod "4517ce1a-539c-4dcd-a541-021c6caa8b39" (UID: "4517ce1a-539c-4dcd-a541-021c6caa8b39"). InnerVolumeSpecName "kube-api-access-fzj5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.576400 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4517ce1a-539c-4dcd-a541-021c6caa8b39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4517ce1a-539c-4dcd-a541-021c6caa8b39" (UID: "4517ce1a-539c-4dcd-a541-021c6caa8b39"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.660158 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4517ce1a-539c-4dcd-a541-021c6caa8b39-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.660192 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzj5j\" (UniqueName: \"kubernetes.io/projected/4517ce1a-539c-4dcd-a541-021c6caa8b39-kube-api-access-fzj5j\") on node \"crc\" DevicePath \"\"" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.774105 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54"] Nov 21 11:29:27 crc kubenswrapper[4972]: E1121 11:29:27.774538 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4517ce1a-539c-4dcd-a541-021c6caa8b39" containerName="registry-server" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.774553 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="4517ce1a-539c-4dcd-a541-021c6caa8b39" containerName="registry-server" Nov 21 11:29:27 crc kubenswrapper[4972]: E1121 11:29:27.774572 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6ae156-2743-48fa-afda-8fcce08e9588" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.774582 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6ae156-2743-48fa-afda-8fcce08e9588" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Nov 21 11:29:27 crc kubenswrapper[4972]: E1121 11:29:27.774604 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4517ce1a-539c-4dcd-a541-021c6caa8b39" containerName="extract-content" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.774612 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="4517ce1a-539c-4dcd-a541-021c6caa8b39" containerName="extract-content" Nov 21 11:29:27 crc kubenswrapper[4972]: E1121 11:29:27.774637 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4517ce1a-539c-4dcd-a541-021c6caa8b39" containerName="extract-utilities" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.774644 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="4517ce1a-539c-4dcd-a541-021c6caa8b39" containerName="extract-utilities" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.774876 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec6ae156-2743-48fa-afda-8fcce08e9588" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.774913 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="4517ce1a-539c-4dcd-a541-021c6caa8b39" containerName="registry-server" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.775796 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.779298 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.779421 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.779542 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g4l5l" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.779904 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.787261 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctxfc" event={"ID":"4517ce1a-539c-4dcd-a541-021c6caa8b39","Type":"ContainerDied","Data":"a668bd202500e74e8ba372b0455c28ab5a458de159941b43dfddeb189931e0ee"} Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.787316 4972 scope.go:117] "RemoveContainer" containerID="2ff5ab256dc57d80559a264d992aa1502a51a5770fdba7e3d3a8cd0adcb0e4fb" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.787477 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ctxfc" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.788893 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54"] Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.828422 4972 scope.go:117] "RemoveContainer" containerID="ffc5f35eb895272f2f8de5f1a35ce02a7ddb20c99ed76a57b5cafd195c964fb6" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.848167 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctxfc"] Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.855218 4972 scope.go:117] "RemoveContainer" containerID="18469f9fcc36b505f5c800802943409102e51f9e5214d669a57ee49159dc6872" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.861065 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctxfc"] Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.964957 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54\" (UID: \"0b686359-266d-4e3b-b383-2ed81fb826ed\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.965008 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54\" (UID: \"0b686359-266d-4e3b-b383-2ed81fb826ed\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.965095 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt59p\" (UniqueName: 
\"kubernetes.io/projected/0b686359-266d-4e3b-b383-2ed81fb826ed-kube-api-access-jt59p\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54\" (UID: \"0b686359-266d-4e3b-b383-2ed81fb826ed\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.965198 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54\" (UID: \"0b686359-266d-4e3b-b383-2ed81fb826ed\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" Nov 21 11:29:27 crc kubenswrapper[4972]: I1121 11:29:27.965232 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54\" (UID: \"0b686359-266d-4e3b-b383-2ed81fb826ed\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" Nov 21 11:29:28 crc kubenswrapper[4972]: I1121 11:29:28.066894 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54\" (UID: \"0b686359-266d-4e3b-b383-2ed81fb826ed\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" Nov 21 11:29:28 crc kubenswrapper[4972]: I1121 11:29:28.066930 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54\" (UID: \"0b686359-266d-4e3b-b383-2ed81fb826ed\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" Nov 21 11:29:28 crc kubenswrapper[4972]: I1121 11:29:28.067074 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54\" (UID: \"0b686359-266d-4e3b-b383-2ed81fb826ed\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" Nov 21 11:29:28 crc kubenswrapper[4972]: I1121 11:29:28.067117 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54\" (UID: \"0b686359-266d-4e3b-b383-2ed81fb826ed\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" Nov 21 11:29:28 crc kubenswrapper[4972]: I1121 11:29:28.067718 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt59p\" (UniqueName: \"kubernetes.io/projected/0b686359-266d-4e3b-b383-2ed81fb826ed-kube-api-access-jt59p\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54\" (UID: \"0b686359-266d-4e3b-b383-2ed81fb826ed\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" Nov 21 11:29:28 crc kubenswrapper[4972]: I1121 11:29:28.073304 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54\" (UID: \"0b686359-266d-4e3b-b383-2ed81fb826ed\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" Nov 21 11:29:28 crc kubenswrapper[4972]: I1121 11:29:28.073533 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54\" (UID: \"0b686359-266d-4e3b-b383-2ed81fb826ed\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" Nov 21 11:29:28 crc kubenswrapper[4972]: I1121 11:29:28.073648 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54\" (UID: \"0b686359-266d-4e3b-b383-2ed81fb826ed\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" Nov 21 11:29:28 crc kubenswrapper[4972]: I1121 11:29:28.074297 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54\" (UID: \"0b686359-266d-4e3b-b383-2ed81fb826ed\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" Nov 21 11:29:28 crc kubenswrapper[4972]: I1121 11:29:28.096669 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt59p\" (UniqueName: \"kubernetes.io/projected/0b686359-266d-4e3b-b383-2ed81fb826ed-kube-api-access-jt59p\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54\" (UID: \"0b686359-266d-4e3b-b383-2ed81fb826ed\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" Nov 21 11:29:28 crc kubenswrapper[4972]: I1121 11:29:28.119652 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" Nov 21 11:29:28 crc kubenswrapper[4972]: W1121 11:29:28.736076 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b686359_266d_4e3b_b383_2ed81fb826ed.slice/crio-93b889d8ad66769e4f6ca2908a30681e00f179a3e3297dfc4ee5472a816f8607 WatchSource:0}: Error finding container 93b889d8ad66769e4f6ca2908a30681e00f179a3e3297dfc4ee5472a816f8607: Status 404 returned error can't find the container with id 93b889d8ad66769e4f6ca2908a30681e00f179a3e3297dfc4ee5472a816f8607 Nov 21 11:29:28 crc kubenswrapper[4972]: I1121 11:29:28.746147 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54"] Nov 21 11:29:28 crc kubenswrapper[4972]: I1121 11:29:28.800373 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" event={"ID":"0b686359-266d-4e3b-b383-2ed81fb826ed","Type":"ContainerStarted","Data":"93b889d8ad66769e4f6ca2908a30681e00f179a3e3297dfc4ee5472a816f8607"} Nov 21 11:29:29 crc kubenswrapper[4972]: I1121 11:29:29.770305 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4517ce1a-539c-4dcd-a541-021c6caa8b39" path="/var/lib/kubelet/pods/4517ce1a-539c-4dcd-a541-021c6caa8b39/volumes" Nov 21 11:29:29 crc kubenswrapper[4972]: I1121 11:29:29.810286 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" event={"ID":"0b686359-266d-4e3b-b383-2ed81fb826ed","Type":"ContainerStarted","Data":"27c76edd8498d7937b88ae5c54996105dc976b0a1a4298d79c2ec442aa284ecd"} Nov 21 11:29:29 crc kubenswrapper[4972]: I1121 11:29:29.825555 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" podStartSLOduration=2.379171132 podStartE2EDuration="2.825539573s" podCreationTimestamp="2025-11-21 11:29:27 +0000 UTC" firstStartedPulling="2025-11-21 11:29:28.73901702 +0000 UTC m=+6513.848159528" lastFinishedPulling="2025-11-21 11:29:29.185385431 +0000 UTC m=+6514.294527969" observedRunningTime="2025-11-21 11:29:29.824458684 +0000 UTC m=+6514.933601202" watchObservedRunningTime="2025-11-21 11:29:29.825539573 +0000 UTC m=+6514.934682071" Nov 21 11:29:36 crc kubenswrapper[4972]: I1121 11:29:36.497689 4972 scope.go:117] "RemoveContainer" containerID="6a4eb0d1ce4fb0b2650a784bec023cca5715c57332d22c638d01de231f4b5aea" Nov 21 11:29:36 crc kubenswrapper[4972]: I1121 11:29:36.873319 4972 scope.go:117] "RemoveContainer" containerID="7feca0e0abe6aa39112e535ff525bb6ddcc2a6dd363d5402ce1c9a95b986194e" Nov 21 11:29:37 crc kubenswrapper[4972]: I1121 11:29:37.760125 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:29:37 crc kubenswrapper[4972]: E1121 11:29:37.761350 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:29:48 crc kubenswrapper[4972]: I1121 11:29:48.760380 4972 scope.go:117] "RemoveContainer" 
containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:29:48 crc kubenswrapper[4972]: E1121 11:29:48.761512 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:30:00 crc kubenswrapper[4972]: I1121 11:30:00.182876 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c"] Nov 21 11:30:00 crc kubenswrapper[4972]: I1121 11:30:00.185930 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c" Nov 21 11:30:00 crc kubenswrapper[4972]: I1121 11:30:00.189497 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 11:30:00 crc kubenswrapper[4972]: I1121 11:30:00.190293 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 11:30:00 crc kubenswrapper[4972]: I1121 11:30:00.204452 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c"] Nov 21 11:30:00 crc kubenswrapper[4972]: I1121 11:30:00.334671 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e54c8a44-a05f-4a74-8d3c-b8e74844033e-secret-volume\") pod \"collect-profiles-29395410-z8h5c\" (UID: \"e54c8a44-a05f-4a74-8d3c-b8e74844033e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c" Nov 21 11:30:00 crc kubenswrapper[4972]: I1121 11:30:00.335031 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e54c8a44-a05f-4a74-8d3c-b8e74844033e-config-volume\") pod \"collect-profiles-29395410-z8h5c\" (UID: \"e54c8a44-a05f-4a74-8d3c-b8e74844033e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c" Nov 21 11:30:00 crc kubenswrapper[4972]: I1121 11:30:00.335161 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzjp4\" (UniqueName: \"kubernetes.io/projected/e54c8a44-a05f-4a74-8d3c-b8e74844033e-kube-api-access-hzjp4\") pod \"collect-profiles-29395410-z8h5c\" (UID: \"e54c8a44-a05f-4a74-8d3c-b8e74844033e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c" Nov 21 11:30:00 crc kubenswrapper[4972]: I1121 11:30:00.437461 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e54c8a44-a05f-4a74-8d3c-b8e74844033e-config-volume\") pod \"collect-profiles-29395410-z8h5c\" (UID: \"e54c8a44-a05f-4a74-8d3c-b8e74844033e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c" Nov 21 11:30:00 crc kubenswrapper[4972]: I1121 11:30:00.437532 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzjp4\" (UniqueName: 
\"kubernetes.io/projected/e54c8a44-a05f-4a74-8d3c-b8e74844033e-kube-api-access-hzjp4\") pod \"collect-profiles-29395410-z8h5c\" (UID: \"e54c8a44-a05f-4a74-8d3c-b8e74844033e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c" Nov 21 11:30:00 crc kubenswrapper[4972]: I1121 11:30:00.437736 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e54c8a44-a05f-4a74-8d3c-b8e74844033e-secret-volume\") pod \"collect-profiles-29395410-z8h5c\" (UID: \"e54c8a44-a05f-4a74-8d3c-b8e74844033e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c" Nov 21 11:30:00 crc kubenswrapper[4972]: I1121 11:30:00.440020 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e54c8a44-a05f-4a74-8d3c-b8e74844033e-config-volume\") pod \"collect-profiles-29395410-z8h5c\" (UID: \"e54c8a44-a05f-4a74-8d3c-b8e74844033e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c" Nov 21 11:30:00 crc kubenswrapper[4972]: I1121 11:30:00.446062 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e54c8a44-a05f-4a74-8d3c-b8e74844033e-secret-volume\") pod \"collect-profiles-29395410-z8h5c\" (UID: \"e54c8a44-a05f-4a74-8d3c-b8e74844033e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c" Nov 21 11:30:00 crc kubenswrapper[4972]: I1121 11:30:00.455593 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzjp4\" (UniqueName: \"kubernetes.io/projected/e54c8a44-a05f-4a74-8d3c-b8e74844033e-kube-api-access-hzjp4\") pod \"collect-profiles-29395410-z8h5c\" (UID: \"e54c8a44-a05f-4a74-8d3c-b8e74844033e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c" Nov 21 11:30:00 crc kubenswrapper[4972]: I1121 11:30:00.524496 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c" Nov 21 11:30:01 crc kubenswrapper[4972]: I1121 11:30:01.034695 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c"] Nov 21 11:30:01 crc kubenswrapper[4972]: I1121 11:30:01.233076 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c" event={"ID":"e54c8a44-a05f-4a74-8d3c-b8e74844033e","Type":"ContainerStarted","Data":"c6ea72b63249f2c4cb2d3be4b86e41f50a17c592adca487d0b65a2e76fde0299"} Nov 21 11:30:02 crc kubenswrapper[4972]: I1121 11:30:02.248050 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c" event={"ID":"e54c8a44-a05f-4a74-8d3c-b8e74844033e","Type":"ContainerStarted","Data":"d0588c395de5b47c6d16f8907b229dc00a9c7388b590fbc33b3d2420db1e3a29"} Nov 21 11:30:02 crc kubenswrapper[4972]: I1121 11:30:02.270414 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c" podStartSLOduration=2.2703967609999998 podStartE2EDuration="2.270396761s" podCreationTimestamp="2025-11-21 11:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 11:30:02.264045134 +0000 UTC m=+6547.373187682" watchObservedRunningTime="2025-11-21 11:30:02.270396761 +0000 UTC m=+6547.379539249" Nov 21 11:30:03 crc kubenswrapper[4972]: I1121 11:30:03.264605 4972 generic.go:334] "Generic (PLEG): container finished" podID="e54c8a44-a05f-4a74-8d3c-b8e74844033e" containerID="d0588c395de5b47c6d16f8907b229dc00a9c7388b590fbc33b3d2420db1e3a29" exitCode=0 Nov 21 11:30:03 crc kubenswrapper[4972]: I1121 11:30:03.264908 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c" event={"ID":"e54c8a44-a05f-4a74-8d3c-b8e74844033e","Type":"ContainerDied","Data":"d0588c395de5b47c6d16f8907b229dc00a9c7388b590fbc33b3d2420db1e3a29"} Nov 21 11:30:03 crc kubenswrapper[4972]: I1121 11:30:03.760437 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:30:03 crc kubenswrapper[4972]: E1121 11:30:03.761027 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:30:04 crc kubenswrapper[4972]: I1121 11:30:04.692923 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c" Nov 21 11:30:04 crc kubenswrapper[4972]: I1121 11:30:04.850467 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e54c8a44-a05f-4a74-8d3c-b8e74844033e-config-volume\") pod \"e54c8a44-a05f-4a74-8d3c-b8e74844033e\" (UID: \"e54c8a44-a05f-4a74-8d3c-b8e74844033e\") " Nov 21 11:30:04 crc kubenswrapper[4972]: I1121 11:30:04.850536 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzjp4\" (UniqueName: \"kubernetes.io/projected/e54c8a44-a05f-4a74-8d3c-b8e74844033e-kube-api-access-hzjp4\") pod \"e54c8a44-a05f-4a74-8d3c-b8e74844033e\" (UID: \"e54c8a44-a05f-4a74-8d3c-b8e74844033e\") " Nov 21 11:30:04 crc kubenswrapper[4972]: I1121 11:30:04.850918 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e54c8a44-a05f-4a74-8d3c-b8e74844033e-secret-volume\") pod \"e54c8a44-a05f-4a74-8d3c-b8e74844033e\" (UID: \"e54c8a44-a05f-4a74-8d3c-b8e74844033e\") " Nov 21 11:30:04 crc kubenswrapper[4972]: I1121 11:30:04.851589 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54c8a44-a05f-4a74-8d3c-b8e74844033e-config-volume" (OuterVolumeSpecName: "config-volume") pod "e54c8a44-a05f-4a74-8d3c-b8e74844033e" (UID: "e54c8a44-a05f-4a74-8d3c-b8e74844033e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:30:04 crc kubenswrapper[4972]: I1121 11:30:04.851708 4972 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e54c8a44-a05f-4a74-8d3c-b8e74844033e-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 11:30:04 crc kubenswrapper[4972]: I1121 11:30:04.858522 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e54c8a44-a05f-4a74-8d3c-b8e74844033e-kube-api-access-hzjp4" (OuterVolumeSpecName: "kube-api-access-hzjp4") pod "e54c8a44-a05f-4a74-8d3c-b8e74844033e" (UID: "e54c8a44-a05f-4a74-8d3c-b8e74844033e"). InnerVolumeSpecName "kube-api-access-hzjp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:30:04 crc kubenswrapper[4972]: I1121 11:30:04.861052 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e54c8a44-a05f-4a74-8d3c-b8e74844033e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e54c8a44-a05f-4a74-8d3c-b8e74844033e" (UID: "e54c8a44-a05f-4a74-8d3c-b8e74844033e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:30:04 crc kubenswrapper[4972]: I1121 11:30:04.954239 4972 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e54c8a44-a05f-4a74-8d3c-b8e74844033e-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 11:30:04 crc kubenswrapper[4972]: I1121 11:30:04.954529 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzjp4\" (UniqueName: \"kubernetes.io/projected/e54c8a44-a05f-4a74-8d3c-b8e74844033e-kube-api-access-hzjp4\") on node \"crc\" DevicePath \"\"" Nov 21 11:30:05 crc kubenswrapper[4972]: I1121 11:30:05.295575 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c" event={"ID":"e54c8a44-a05f-4a74-8d3c-b8e74844033e","Type":"ContainerDied","Data":"c6ea72b63249f2c4cb2d3be4b86e41f50a17c592adca487d0b65a2e76fde0299"} Nov 21 11:30:05 crc kubenswrapper[4972]: I1121 11:30:05.295634 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6ea72b63249f2c4cb2d3be4b86e41f50a17c592adca487d0b65a2e76fde0299" Nov 21 11:30:05 crc kubenswrapper[4972]: I1121 11:30:05.295710 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c" Nov 21 11:30:05 crc kubenswrapper[4972]: I1121 11:30:05.356996 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395365-nnj4q"] Nov 21 11:30:05 crc kubenswrapper[4972]: I1121 11:30:05.367385 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395365-nnj4q"] Nov 21 11:30:05 crc kubenswrapper[4972]: I1121 11:30:05.779677 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df320d5f-4313-4611-a7e6-4ec305b881d4" path="/var/lib/kubelet/pods/df320d5f-4313-4611-a7e6-4ec305b881d4/volumes" Nov 21 11:30:08 crc kubenswrapper[4972]: I1121 11:30:08.033802 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-b5hrg"] Nov 21 11:30:08 crc kubenswrapper[4972]: I1121 11:30:08.043426 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-b5hrg"] Nov 21 11:30:09 crc kubenswrapper[4972]: I1121 11:30:09.045904 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-9cd6-account-create-8fgxq"] Nov 21 11:30:09 crc kubenswrapper[4972]: I1121 11:30:09.066431 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-9cd6-account-create-8fgxq"] Nov 21 11:30:09 crc kubenswrapper[4972]: I1121 11:30:09.778994 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f7ab426-aeb2-4659-8bd9-d9322fa3e63a" path="/var/lib/kubelet/pods/2f7ab426-aeb2-4659-8bd9-d9322fa3e63a/volumes" Nov 21 11:30:09 crc kubenswrapper[4972]: I1121 11:30:09.782137 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efe47932-69ba-4784-9884-0938691eeef0" path="/var/lib/kubelet/pods/efe47932-69ba-4784-9884-0938691eeef0/volumes" Nov 21 11:30:14 crc kubenswrapper[4972]: I1121 11:30:14.039279 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-5cgr2"] Nov 21 11:30:14 crc kubenswrapper[4972]: I1121 11:30:14.053627 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-5cgr2"] Nov 21 11:30:14 crc kubenswrapper[4972]: 
I1121 11:30:14.107517 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nrzxz"] Nov 21 11:30:14 crc kubenswrapper[4972]: E1121 11:30:14.108561 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54c8a44-a05f-4a74-8d3c-b8e74844033e" containerName="collect-profiles" Nov 21 11:30:14 crc kubenswrapper[4972]: I1121 11:30:14.108591 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54c8a44-a05f-4a74-8d3c-b8e74844033e" containerName="collect-profiles" Nov 21 11:30:14 crc kubenswrapper[4972]: I1121 11:30:14.108978 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="e54c8a44-a05f-4a74-8d3c-b8e74844033e" containerName="collect-profiles" Nov 21 11:30:14 crc kubenswrapper[4972]: I1121 11:30:14.111675 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nrzxz" Nov 21 11:30:14 crc kubenswrapper[4972]: I1121 11:30:14.120271 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nrzxz"] Nov 21 11:30:14 crc kubenswrapper[4972]: I1121 11:30:14.175358 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb9hd\" (UniqueName: \"kubernetes.io/projected/2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c-kube-api-access-pb9hd\") pod \"community-operators-nrzxz\" (UID: \"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c\") " pod="openshift-marketplace/community-operators-nrzxz" Nov 21 11:30:14 crc kubenswrapper[4972]: I1121 11:30:14.175419 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c-utilities\") pod \"community-operators-nrzxz\" (UID: \"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c\") " pod="openshift-marketplace/community-operators-nrzxz" Nov 21 11:30:14 crc kubenswrapper[4972]: I1121 11:30:14.175754 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c-catalog-content\") pod \"community-operators-nrzxz\" (UID: \"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c\") " pod="openshift-marketplace/community-operators-nrzxz" Nov 21 11:30:14 crc kubenswrapper[4972]: I1121 11:30:14.278978 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb9hd\" (UniqueName: \"kubernetes.io/projected/2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c-kube-api-access-pb9hd\") pod \"community-operators-nrzxz\" (UID: \"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c\") " pod="openshift-marketplace/community-operators-nrzxz" Nov 21 11:30:14 crc kubenswrapper[4972]: I1121 11:30:14.279093 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c-utilities\") pod \"community-operators-nrzxz\" (UID: \"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c\") " pod="openshift-marketplace/community-operators-nrzxz" Nov 21 11:30:14 crc kubenswrapper[4972]: I1121 11:30:14.279340 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c-catalog-content\") pod \"community-operators-nrzxz\" (UID: \"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c\") " pod="openshift-marketplace/community-operators-nrzxz" Nov 21 11:30:14 
crc kubenswrapper[4972]: I1121 11:30:14.279899 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c-utilities\") pod \"community-operators-nrzxz\" (UID: \"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c\") " pod="openshift-marketplace/community-operators-nrzxz" Nov 21 11:30:14 crc kubenswrapper[4972]: I1121 11:30:14.279970 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c-catalog-content\") pod \"community-operators-nrzxz\" (UID: \"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c\") " pod="openshift-marketplace/community-operators-nrzxz" Nov 21 11:30:14 crc kubenswrapper[4972]: I1121 11:30:14.302355 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb9hd\" (UniqueName: \"kubernetes.io/projected/2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c-kube-api-access-pb9hd\") pod \"community-operators-nrzxz\" (UID: \"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c\") " pod="openshift-marketplace/community-operators-nrzxz" Nov 21 11:30:14 crc kubenswrapper[4972]: I1121 11:30:14.443028 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nrzxz" Nov 21 11:30:15 crc kubenswrapper[4972]: I1121 11:30:15.023654 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nrzxz"] Nov 21 11:30:15 crc kubenswrapper[4972]: I1121 11:30:15.056321 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-c9cc-account-create-wpbmf"] Nov 21 11:30:15 crc kubenswrapper[4972]: I1121 11:30:15.068530 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-c9cc-account-create-wpbmf"] Nov 21 11:30:15 crc kubenswrapper[4972]: I1121 11:30:15.433258 4972 generic.go:334] "Generic (PLEG): container finished" podID="2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c" containerID="b40362488e79881941b67933a38870d4c59549ded9e74fd2dfe2ed454184d79e" exitCode=0 Nov 21 11:30:15 crc kubenswrapper[4972]: I1121 11:30:15.433312 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrzxz" event={"ID":"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c","Type":"ContainerDied","Data":"b40362488e79881941b67933a38870d4c59549ded9e74fd2dfe2ed454184d79e"} Nov 21 11:30:15 crc kubenswrapper[4972]: I1121 11:30:15.433341 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrzxz" event={"ID":"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c","Type":"ContainerStarted","Data":"cf495fd826f33da006dbb39c9113ac608b48c1866f34f5412ac940fb7fd68ddc"} Nov 21 11:30:15 crc kubenswrapper[4972]: I1121 11:30:15.779159 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b26e1de3-52fc-4a82-abcb-e54ffa0b95a3" path="/var/lib/kubelet/pods/b26e1de3-52fc-4a82-abcb-e54ffa0b95a3/volumes" Nov 21 11:30:15 crc kubenswrapper[4972]: I1121 11:30:15.780740 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e29f549f-1f3b-43ad-973a-0adf21de58f0" path="/var/lib/kubelet/pods/e29f549f-1f3b-43ad-973a-0adf21de58f0/volumes" Nov 21 11:30:17 crc kubenswrapper[4972]: I1121 11:30:17.472010 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrzxz" 
event={"ID":"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c","Type":"ContainerStarted","Data":"1f2975203c70dc9a57ef5f24e9f4ae43427818daf087d67c7e17708914c399fc"} Nov 21 11:30:17 crc kubenswrapper[4972]: I1121 11:30:17.760339 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:30:17 crc kubenswrapper[4972]: E1121 11:30:17.760728 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:30:20 crc kubenswrapper[4972]: I1121 11:30:20.516250 4972 generic.go:334] "Generic (PLEG): container finished" podID="2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c" containerID="1f2975203c70dc9a57ef5f24e9f4ae43427818daf087d67c7e17708914c399fc" exitCode=0 Nov 21 11:30:20 crc kubenswrapper[4972]: I1121 11:30:20.516339 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrzxz" event={"ID":"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c","Type":"ContainerDied","Data":"1f2975203c70dc9a57ef5f24e9f4ae43427818daf087d67c7e17708914c399fc"} Nov 21 11:30:21 crc kubenswrapper[4972]: I1121 11:30:21.528487 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrzxz" event={"ID":"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c","Type":"ContainerStarted","Data":"3d8258e48f3d897d1e17a60e74efa19abdaf5a7052a7ff6058e9294f23eca5a0"} Nov 21 11:30:21 crc kubenswrapper[4972]: I1121 11:30:21.551033 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nrzxz" podStartSLOduration=1.902489404 podStartE2EDuration="7.551009573s" podCreationTimestamp="2025-11-21 11:30:14 +0000 UTC" firstStartedPulling="2025-11-21 11:30:15.434819553 +0000 UTC m=+6560.543962051" lastFinishedPulling="2025-11-21 11:30:21.083339722 +0000 UTC m=+6566.192482220" observedRunningTime="2025-11-21 11:30:21.544363118 +0000 UTC m=+6566.653505616" watchObservedRunningTime="2025-11-21 11:30:21.551009573 +0000 UTC m=+6566.660152071" Nov 21 11:30:24 crc kubenswrapper[4972]: I1121 11:30:24.444030 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nrzxz" Nov 21 11:30:24 crc kubenswrapper[4972]: I1121 11:30:24.444612 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nrzxz" Nov 21 11:30:24 crc kubenswrapper[4972]: I1121 11:30:24.530611 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nrzxz" Nov 21 11:30:30 crc kubenswrapper[4972]: I1121 11:30:30.760565 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:30:30 crc kubenswrapper[4972]: E1121 11:30:30.761962 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:30:34 crc kubenswrapper[4972]: I1121 11:30:34.502651 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nrzxz" Nov 21 11:30:34 crc kubenswrapper[4972]: I1121 11:30:34.562073 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nrzxz"] Nov 21 11:30:34 crc kubenswrapper[4972]: I1121 11:30:34.706541 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nrzxz" podUID="2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c" containerName="registry-server" containerID="cri-o://3d8258e48f3d897d1e17a60e74efa19abdaf5a7052a7ff6058e9294f23eca5a0" gracePeriod=2 Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.267894 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nrzxz" Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.301406 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb9hd\" (UniqueName: \"kubernetes.io/projected/2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c-kube-api-access-pb9hd\") pod \"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c\" (UID: \"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c\") " Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.301925 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c-catalog-content\") pod \"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c\" (UID: \"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c\") " Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.312337 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c-kube-api-access-pb9hd" (OuterVolumeSpecName: "kube-api-access-pb9hd") pod "2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c" (UID: "2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c"). InnerVolumeSpecName "kube-api-access-pb9hd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.368619 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c" (UID: "2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.403808 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c-utilities\") pod \"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c\" (UID: \"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c\") " Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.404498 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c-utilities" (OuterVolumeSpecName: "utilities") pod "2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c" (UID: "2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.405103 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.405191 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.405253 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb9hd\" (UniqueName: \"kubernetes.io/projected/2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c-kube-api-access-pb9hd\") on node \"crc\" DevicePath \"\"" Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.721282 4972 generic.go:334] "Generic (PLEG): container finished" podID="2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c" containerID="3d8258e48f3d897d1e17a60e74efa19abdaf5a7052a7ff6058e9294f23eca5a0" exitCode=0 Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.721327 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrzxz" event={"ID":"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c","Type":"ContainerDied","Data":"3d8258e48f3d897d1e17a60e74efa19abdaf5a7052a7ff6058e9294f23eca5a0"} Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.721358 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrzxz" event={"ID":"2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c","Type":"ContainerDied","Data":"cf495fd826f33da006dbb39c9113ac608b48c1866f34f5412ac940fb7fd68ddc"} Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.721375 4972 scope.go:117] "RemoveContainer" containerID="3d8258e48f3d897d1e17a60e74efa19abdaf5a7052a7ff6058e9294f23eca5a0" Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.721412 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nrzxz" Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.753091 4972 scope.go:117] "RemoveContainer" containerID="1f2975203c70dc9a57ef5f24e9f4ae43427818daf087d67c7e17708914c399fc" Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.782386 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nrzxz"] Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.796942 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nrzxz"] Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.799250 4972 scope.go:117] "RemoveContainer" containerID="b40362488e79881941b67933a38870d4c59549ded9e74fd2dfe2ed454184d79e" Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.821730 4972 scope.go:117] "RemoveContainer" containerID="3d8258e48f3d897d1e17a60e74efa19abdaf5a7052a7ff6058e9294f23eca5a0" Nov 21 11:30:35 crc kubenswrapper[4972]: E1121 11:30:35.822224 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d8258e48f3d897d1e17a60e74efa19abdaf5a7052a7ff6058e9294f23eca5a0\": container with ID starting with 3d8258e48f3d897d1e17a60e74efa19abdaf5a7052a7ff6058e9294f23eca5a0 not found: ID does not exist" containerID="3d8258e48f3d897d1e17a60e74efa19abdaf5a7052a7ff6058e9294f23eca5a0" Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.822264 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d8258e48f3d897d1e17a60e74efa19abdaf5a7052a7ff6058e9294f23eca5a0"} err="failed to get container status \"3d8258e48f3d897d1e17a60e74efa19abdaf5a7052a7ff6058e9294f23eca5a0\": rpc error: code = NotFound desc = could not find container \"3d8258e48f3d897d1e17a60e74efa19abdaf5a7052a7ff6058e9294f23eca5a0\": container with ID starting with 3d8258e48f3d897d1e17a60e74efa19abdaf5a7052a7ff6058e9294f23eca5a0 not found: ID does not exist" Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.822292 4972 scope.go:117] "RemoveContainer" containerID="1f2975203c70dc9a57ef5f24e9f4ae43427818daf087d67c7e17708914c399fc" Nov 21 11:30:35 crc kubenswrapper[4972]: E1121 11:30:35.822603 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f2975203c70dc9a57ef5f24e9f4ae43427818daf087d67c7e17708914c399fc\": container with ID starting with 1f2975203c70dc9a57ef5f24e9f4ae43427818daf087d67c7e17708914c399fc not found: ID does not exist" containerID="1f2975203c70dc9a57ef5f24e9f4ae43427818daf087d67c7e17708914c399fc" Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.822633 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f2975203c70dc9a57ef5f24e9f4ae43427818daf087d67c7e17708914c399fc"} err="failed to get container status \"1f2975203c70dc9a57ef5f24e9f4ae43427818daf087d67c7e17708914c399fc\": rpc error: code = NotFound desc = could not find container \"1f2975203c70dc9a57ef5f24e9f4ae43427818daf087d67c7e17708914c399fc\": container with ID starting with 1f2975203c70dc9a57ef5f24e9f4ae43427818daf087d67c7e17708914c399fc not found: ID does not exist" Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.822652 4972 scope.go:117] "RemoveContainer" containerID="b40362488e79881941b67933a38870d4c59549ded9e74fd2dfe2ed454184d79e" Nov 21 11:30:35 crc kubenswrapper[4972]: E1121 11:30:35.823089 4972 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b40362488e79881941b67933a38870d4c59549ded9e74fd2dfe2ed454184d79e\": container with ID starting with b40362488e79881941b67933a38870d4c59549ded9e74fd2dfe2ed454184d79e not found: ID does not exist" containerID="b40362488e79881941b67933a38870d4c59549ded9e74fd2dfe2ed454184d79e" Nov 21 11:30:35 crc kubenswrapper[4972]: I1121 11:30:35.823116 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b40362488e79881941b67933a38870d4c59549ded9e74fd2dfe2ed454184d79e"} err="failed to get container status \"b40362488e79881941b67933a38870d4c59549ded9e74fd2dfe2ed454184d79e\": rpc error: code = NotFound desc = could not find container \"b40362488e79881941b67933a38870d4c59549ded9e74fd2dfe2ed454184d79e\": container with ID starting with b40362488e79881941b67933a38870d4c59549ded9e74fd2dfe2ed454184d79e not found: ID does not exist" Nov 21 11:30:37 crc kubenswrapper[4972]: I1121 11:30:37.028467 4972 scope.go:117] "RemoveContainer" containerID="b221eac010938bdd2f0353f243100715b651459b45199fb2d4f37b43b4f8fc87" Nov 21 11:30:37 crc kubenswrapper[4972]: I1121 11:30:37.058374 4972 scope.go:117] "RemoveContainer" containerID="5b877c344544496ab31c15d7c7648fedd4f867113b79eca457f207908cf93842" Nov 21 11:30:37 crc kubenswrapper[4972]: I1121 11:30:37.142392 4972 scope.go:117] "RemoveContainer" containerID="e2ecbdfbb326d59324bb1e229a997b9bab7a7cd85bfe5a820fa0449f8774d782" Nov 21 11:30:37 crc kubenswrapper[4972]: I1121 11:30:37.194881 4972 scope.go:117] "RemoveContainer" containerID="76f8dfd8146b6a2e3133564a3c025b73600b86a4f1a0d5c2b606907dfc5fc15f" Nov 21 11:30:37 crc kubenswrapper[4972]: I1121 11:30:37.254127 4972 scope.go:117] "RemoveContainer" containerID="fa006d2ceb4a459f0e2726f6cb495056ad7c8ed0e2f3baf5e28478a885d27508" Nov 21 11:30:37 crc kubenswrapper[4972]: I1121 11:30:37.772919 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c" path="/var/lib/kubelet/pods/2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c/volumes" Nov 21 11:30:44 crc kubenswrapper[4972]: I1121 11:30:44.760727 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:30:44 crc kubenswrapper[4972]: E1121 11:30:44.761988 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:30:50 crc kubenswrapper[4972]: I1121 11:30:50.774863 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5kxpt"] Nov 21 11:30:50 crc kubenswrapper[4972]: E1121 11:30:50.776540 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c" containerName="extract-utilities" Nov 21 11:30:50 crc kubenswrapper[4972]: I1121 11:30:50.776641 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c" containerName="extract-utilities" Nov 21 11:30:50 crc kubenswrapper[4972]: E1121 11:30:50.776685 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c" containerName="extract-content" Nov 21 11:30:50 crc 
kubenswrapper[4972]: I1121 11:30:50.776706 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c" containerName="extract-content" Nov 21 11:30:50 crc kubenswrapper[4972]: E1121 11:30:50.776734 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c" containerName="registry-server" Nov 21 11:30:50 crc kubenswrapper[4972]: I1121 11:30:50.776747 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c" containerName="registry-server" Nov 21 11:30:50 crc kubenswrapper[4972]: I1121 11:30:50.777360 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="2249b83c-e21a-490e-9b0a-b0b3cc4d0c1c" containerName="registry-server" Nov 21 11:30:50 crc kubenswrapper[4972]: I1121 11:30:50.780492 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5kxpt" Nov 21 11:30:50 crc kubenswrapper[4972]: I1121 11:30:50.786019 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5kxpt"] Nov 21 11:30:50 crc kubenswrapper[4972]: I1121 11:30:50.898334 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d4b6d48-65e1-46ef-b330-b059b679cc66-catalog-content\") pod \"certified-operators-5kxpt\" (UID: \"6d4b6d48-65e1-46ef-b330-b059b679cc66\") " pod="openshift-marketplace/certified-operators-5kxpt" Nov 21 11:30:50 crc kubenswrapper[4972]: I1121 11:30:50.898432 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd8l5\" (UniqueName: \"kubernetes.io/projected/6d4b6d48-65e1-46ef-b330-b059b679cc66-kube-api-access-xd8l5\") pod \"certified-operators-5kxpt\" (UID: \"6d4b6d48-65e1-46ef-b330-b059b679cc66\") " pod="openshift-marketplace/certified-operators-5kxpt" Nov 21 11:30:50 crc kubenswrapper[4972]: I1121 11:30:50.898568 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d4b6d48-65e1-46ef-b330-b059b679cc66-utilities\") pod \"certified-operators-5kxpt\" (UID: \"6d4b6d48-65e1-46ef-b330-b059b679cc66\") " pod="openshift-marketplace/certified-operators-5kxpt" Nov 21 11:30:51 crc kubenswrapper[4972]: I1121 11:30:51.001126 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d4b6d48-65e1-46ef-b330-b059b679cc66-catalog-content\") pod \"certified-operators-5kxpt\" (UID: \"6d4b6d48-65e1-46ef-b330-b059b679cc66\") " pod="openshift-marketplace/certified-operators-5kxpt" Nov 21 11:30:51 crc kubenswrapper[4972]: I1121 11:30:51.001216 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd8l5\" (UniqueName: \"kubernetes.io/projected/6d4b6d48-65e1-46ef-b330-b059b679cc66-kube-api-access-xd8l5\") pod \"certified-operators-5kxpt\" (UID: \"6d4b6d48-65e1-46ef-b330-b059b679cc66\") " pod="openshift-marketplace/certified-operators-5kxpt" Nov 21 11:30:51 crc kubenswrapper[4972]: I1121 11:30:51.001391 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d4b6d48-65e1-46ef-b330-b059b679cc66-utilities\") pod \"certified-operators-5kxpt\" (UID: \"6d4b6d48-65e1-46ef-b330-b059b679cc66\") " 
pod="openshift-marketplace/certified-operators-5kxpt" Nov 21 11:30:51 crc kubenswrapper[4972]: I1121 11:30:51.001611 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d4b6d48-65e1-46ef-b330-b059b679cc66-catalog-content\") pod \"certified-operators-5kxpt\" (UID: \"6d4b6d48-65e1-46ef-b330-b059b679cc66\") " pod="openshift-marketplace/certified-operators-5kxpt" Nov 21 11:30:51 crc kubenswrapper[4972]: I1121 11:30:51.002003 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d4b6d48-65e1-46ef-b330-b059b679cc66-utilities\") pod \"certified-operators-5kxpt\" (UID: \"6d4b6d48-65e1-46ef-b330-b059b679cc66\") " pod="openshift-marketplace/certified-operators-5kxpt" Nov 21 11:30:51 crc kubenswrapper[4972]: I1121 11:30:51.026559 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd8l5\" (UniqueName: \"kubernetes.io/projected/6d4b6d48-65e1-46ef-b330-b059b679cc66-kube-api-access-xd8l5\") pod \"certified-operators-5kxpt\" (UID: \"6d4b6d48-65e1-46ef-b330-b059b679cc66\") " pod="openshift-marketplace/certified-operators-5kxpt" Nov 21 11:30:51 crc kubenswrapper[4972]: I1121 11:30:51.117173 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5kxpt" Nov 21 11:30:51 crc kubenswrapper[4972]: I1121 11:30:51.690886 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5kxpt"] Nov 21 11:30:51 crc kubenswrapper[4972]: I1121 11:30:51.922126 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kxpt" event={"ID":"6d4b6d48-65e1-46ef-b330-b059b679cc66","Type":"ContainerStarted","Data":"8f7385dc48e215cd4e3adc6ee1ca6d6c08f7eb692b23f3a29da88a6b96f192fd"} Nov 21 11:30:52 crc kubenswrapper[4972]: I1121 11:30:52.937647 4972 generic.go:334] "Generic (PLEG): container finished" podID="6d4b6d48-65e1-46ef-b330-b059b679cc66" containerID="5e14c16d691be877c58fafc99eae5e071e6286982573cbb52dce5023e7dc4d3b" exitCode=0 Nov 21 11:30:52 crc kubenswrapper[4972]: I1121 11:30:52.938131 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kxpt" event={"ID":"6d4b6d48-65e1-46ef-b330-b059b679cc66","Type":"ContainerDied","Data":"5e14c16d691be877c58fafc99eae5e071e6286982573cbb52dce5023e7dc4d3b"} Nov 21 11:30:53 crc kubenswrapper[4972]: I1121 11:30:53.949792 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kxpt" event={"ID":"6d4b6d48-65e1-46ef-b330-b059b679cc66","Type":"ContainerStarted","Data":"192711cdd67cb00afd9986be4515d5c5201dbc715e88096e805bc2a1747613fd"} Nov 21 11:30:56 crc kubenswrapper[4972]: I1121 11:30:56.761286 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:30:56 crc kubenswrapper[4972]: E1121 11:30:56.762637 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:30:56 crc kubenswrapper[4972]: I1121 11:30:56.987601 4972 
generic.go:334] "Generic (PLEG): container finished" podID="6d4b6d48-65e1-46ef-b330-b059b679cc66" containerID="192711cdd67cb00afd9986be4515d5c5201dbc715e88096e805bc2a1747613fd" exitCode=0 Nov 21 11:30:56 crc kubenswrapper[4972]: I1121 11:30:56.987677 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kxpt" event={"ID":"6d4b6d48-65e1-46ef-b330-b059b679cc66","Type":"ContainerDied","Data":"192711cdd67cb00afd9986be4515d5c5201dbc715e88096e805bc2a1747613fd"} Nov 21 11:30:58 crc kubenswrapper[4972]: I1121 11:30:58.003177 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kxpt" event={"ID":"6d4b6d48-65e1-46ef-b330-b059b679cc66","Type":"ContainerStarted","Data":"cb57ce2b7b2f7db7e5b0c87741c0cb1a84a8a321656ed39cf104f0f1da878833"} Nov 21 11:30:58 crc kubenswrapper[4972]: I1121 11:30:58.047175 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5kxpt" podStartSLOduration=3.535090323 podStartE2EDuration="8.047150966s" podCreationTimestamp="2025-11-21 11:30:50 +0000 UTC" firstStartedPulling="2025-11-21 11:30:52.941117006 +0000 UTC m=+6598.050259504" lastFinishedPulling="2025-11-21 11:30:57.453177649 +0000 UTC m=+6602.562320147" observedRunningTime="2025-11-21 11:30:58.034498612 +0000 UTC m=+6603.143641180" watchObservedRunningTime="2025-11-21 11:30:58.047150966 +0000 UTC m=+6603.156293504" Nov 21 11:31:01 crc kubenswrapper[4972]: I1121 11:31:01.117627 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5kxpt" Nov 21 11:31:01 crc kubenswrapper[4972]: I1121 11:31:01.118186 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5kxpt" Nov 21 11:31:01 crc kubenswrapper[4972]: I1121 11:31:01.199395 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5kxpt" Nov 21 11:31:02 crc kubenswrapper[4972]: I1121 11:31:02.147445 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5kxpt" Nov 21 11:31:03 crc kubenswrapper[4972]: I1121 11:31:03.561910 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5kxpt"] Nov 21 11:31:04 crc kubenswrapper[4972]: I1121 11:31:04.073754 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5kxpt" podUID="6d4b6d48-65e1-46ef-b330-b059b679cc66" containerName="registry-server" containerID="cri-o://cb57ce2b7b2f7db7e5b0c87741c0cb1a84a8a321656ed39cf104f0f1da878833" gracePeriod=2 Nov 21 11:31:04 crc kubenswrapper[4972]: I1121 11:31:04.617190 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5kxpt" Nov 21 11:31:04 crc kubenswrapper[4972]: I1121 11:31:04.670481 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd8l5\" (UniqueName: \"kubernetes.io/projected/6d4b6d48-65e1-46ef-b330-b059b679cc66-kube-api-access-xd8l5\") pod \"6d4b6d48-65e1-46ef-b330-b059b679cc66\" (UID: \"6d4b6d48-65e1-46ef-b330-b059b679cc66\") " Nov 21 11:31:04 crc kubenswrapper[4972]: I1121 11:31:04.670942 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d4b6d48-65e1-46ef-b330-b059b679cc66-catalog-content\") pod \"6d4b6d48-65e1-46ef-b330-b059b679cc66\" (UID: \"6d4b6d48-65e1-46ef-b330-b059b679cc66\") " Nov 21 11:31:04 crc kubenswrapper[4972]: I1121 11:31:04.671044 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d4b6d48-65e1-46ef-b330-b059b679cc66-utilities\") pod \"6d4b6d48-65e1-46ef-b330-b059b679cc66\" (UID: \"6d4b6d48-65e1-46ef-b330-b059b679cc66\") " Nov 21 11:31:04 crc kubenswrapper[4972]: I1121 11:31:04.673522 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d4b6d48-65e1-46ef-b330-b059b679cc66-utilities" (OuterVolumeSpecName: "utilities") pod "6d4b6d48-65e1-46ef-b330-b059b679cc66" (UID: "6d4b6d48-65e1-46ef-b330-b059b679cc66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:31:04 crc kubenswrapper[4972]: I1121 11:31:04.706241 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d4b6d48-65e1-46ef-b330-b059b679cc66-kube-api-access-xd8l5" (OuterVolumeSpecName: "kube-api-access-xd8l5") pod "6d4b6d48-65e1-46ef-b330-b059b679cc66" (UID: "6d4b6d48-65e1-46ef-b330-b059b679cc66"). InnerVolumeSpecName "kube-api-access-xd8l5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:31:04 crc kubenswrapper[4972]: I1121 11:31:04.740118 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d4b6d48-65e1-46ef-b330-b059b679cc66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d4b6d48-65e1-46ef-b330-b059b679cc66" (UID: "6d4b6d48-65e1-46ef-b330-b059b679cc66"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:31:04 crc kubenswrapper[4972]: I1121 11:31:04.773340 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d4b6d48-65e1-46ef-b330-b059b679cc66-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 11:31:04 crc kubenswrapper[4972]: I1121 11:31:04.773407 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d4b6d48-65e1-46ef-b330-b059b679cc66-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 11:31:04 crc kubenswrapper[4972]: I1121 11:31:04.773416 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd8l5\" (UniqueName: \"kubernetes.io/projected/6d4b6d48-65e1-46ef-b330-b059b679cc66-kube-api-access-xd8l5\") on node \"crc\" DevicePath \"\"" Nov 21 11:31:05 crc kubenswrapper[4972]: I1121 11:31:05.090158 4972 generic.go:334] "Generic (PLEG): container finished" podID="6d4b6d48-65e1-46ef-b330-b059b679cc66" containerID="cb57ce2b7b2f7db7e5b0c87741c0cb1a84a8a321656ed39cf104f0f1da878833" exitCode=0 Nov 21 11:31:05 crc kubenswrapper[4972]: I1121 11:31:05.090223 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kxpt" event={"ID":"6d4b6d48-65e1-46ef-b330-b059b679cc66","Type":"ContainerDied","Data":"cb57ce2b7b2f7db7e5b0c87741c0cb1a84a8a321656ed39cf104f0f1da878833"} Nov 21 11:31:05 crc kubenswrapper[4972]: I1121 11:31:05.090283 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5kxpt" Nov 21 11:31:05 crc kubenswrapper[4972]: I1121 11:31:05.090316 4972 scope.go:117] "RemoveContainer" containerID="cb57ce2b7b2f7db7e5b0c87741c0cb1a84a8a321656ed39cf104f0f1da878833" Nov 21 11:31:05 crc kubenswrapper[4972]: I1121 11:31:05.090295 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kxpt" event={"ID":"6d4b6d48-65e1-46ef-b330-b059b679cc66","Type":"ContainerDied","Data":"8f7385dc48e215cd4e3adc6ee1ca6d6c08f7eb692b23f3a29da88a6b96f192fd"} Nov 21 11:31:05 crc kubenswrapper[4972]: I1121 11:31:05.132884 4972 scope.go:117] "RemoveContainer" containerID="192711cdd67cb00afd9986be4515d5c5201dbc715e88096e805bc2a1747613fd" Nov 21 11:31:05 crc kubenswrapper[4972]: I1121 11:31:05.148328 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5kxpt"] Nov 21 11:31:05 crc kubenswrapper[4972]: I1121 11:31:05.158090 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5kxpt"] Nov 21 11:31:05 crc kubenswrapper[4972]: I1121 11:31:05.172170 4972 scope.go:117] "RemoveContainer" containerID="5e14c16d691be877c58fafc99eae5e071e6286982573cbb52dce5023e7dc4d3b" Nov 21 11:31:05 crc kubenswrapper[4972]: I1121 11:31:05.226503 4972 scope.go:117] "RemoveContainer" containerID="cb57ce2b7b2f7db7e5b0c87741c0cb1a84a8a321656ed39cf104f0f1da878833" Nov 21 11:31:05 crc kubenswrapper[4972]: E1121 11:31:05.227258 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb57ce2b7b2f7db7e5b0c87741c0cb1a84a8a321656ed39cf104f0f1da878833\": container with ID starting with cb57ce2b7b2f7db7e5b0c87741c0cb1a84a8a321656ed39cf104f0f1da878833 not found: ID does not exist" containerID="cb57ce2b7b2f7db7e5b0c87741c0cb1a84a8a321656ed39cf104f0f1da878833" Nov 21 11:31:05 crc kubenswrapper[4972]: I1121 11:31:05.227289 
4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb57ce2b7b2f7db7e5b0c87741c0cb1a84a8a321656ed39cf104f0f1da878833"} err="failed to get container status \"cb57ce2b7b2f7db7e5b0c87741c0cb1a84a8a321656ed39cf104f0f1da878833\": rpc error: code = NotFound desc = could not find container \"cb57ce2b7b2f7db7e5b0c87741c0cb1a84a8a321656ed39cf104f0f1da878833\": container with ID starting with cb57ce2b7b2f7db7e5b0c87741c0cb1a84a8a321656ed39cf104f0f1da878833 not found: ID does not exist" Nov 21 11:31:05 crc kubenswrapper[4972]: I1121 11:31:05.227314 4972 scope.go:117] "RemoveContainer" containerID="192711cdd67cb00afd9986be4515d5c5201dbc715e88096e805bc2a1747613fd" Nov 21 11:31:05 crc kubenswrapper[4972]: E1121 11:31:05.227768 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"192711cdd67cb00afd9986be4515d5c5201dbc715e88096e805bc2a1747613fd\": container with ID starting with 192711cdd67cb00afd9986be4515d5c5201dbc715e88096e805bc2a1747613fd not found: ID does not exist" containerID="192711cdd67cb00afd9986be4515d5c5201dbc715e88096e805bc2a1747613fd" Nov 21 11:31:05 crc kubenswrapper[4972]: I1121 11:31:05.227845 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"192711cdd67cb00afd9986be4515d5c5201dbc715e88096e805bc2a1747613fd"} err="failed to get container status \"192711cdd67cb00afd9986be4515d5c5201dbc715e88096e805bc2a1747613fd\": rpc error: code = NotFound desc = could not find container \"192711cdd67cb00afd9986be4515d5c5201dbc715e88096e805bc2a1747613fd\": container with ID starting with 192711cdd67cb00afd9986be4515d5c5201dbc715e88096e805bc2a1747613fd not found: ID does not exist" Nov 21 11:31:05 crc kubenswrapper[4972]: I1121 11:31:05.227892 4972 scope.go:117] "RemoveContainer" containerID="5e14c16d691be877c58fafc99eae5e071e6286982573cbb52dce5023e7dc4d3b" Nov 21 11:31:05 crc kubenswrapper[4972]: E1121 11:31:05.228263 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e14c16d691be877c58fafc99eae5e071e6286982573cbb52dce5023e7dc4d3b\": container with ID starting with 5e14c16d691be877c58fafc99eae5e071e6286982573cbb52dce5023e7dc4d3b not found: ID does not exist" containerID="5e14c16d691be877c58fafc99eae5e071e6286982573cbb52dce5023e7dc4d3b" Nov 21 11:31:05 crc kubenswrapper[4972]: I1121 11:31:05.228304 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e14c16d691be877c58fafc99eae5e071e6286982573cbb52dce5023e7dc4d3b"} err="failed to get container status \"5e14c16d691be877c58fafc99eae5e071e6286982573cbb52dce5023e7dc4d3b\": rpc error: code = NotFound desc = could not find container \"5e14c16d691be877c58fafc99eae5e071e6286982573cbb52dce5023e7dc4d3b\": container with ID starting with 5e14c16d691be877c58fafc99eae5e071e6286982573cbb52dce5023e7dc4d3b not found: ID does not exist" Nov 21 11:31:05 crc kubenswrapper[4972]: I1121 11:31:05.789814 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d4b6d48-65e1-46ef-b330-b059b679cc66" path="/var/lib/kubelet/pods/6d4b6d48-65e1-46ef-b330-b059b679cc66/volumes" Nov 21 11:31:08 crc kubenswrapper[4972]: I1121 11:31:08.760711 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:31:08 crc kubenswrapper[4972]: E1121 11:31:08.762795 4972 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:31:13 crc kubenswrapper[4972]: I1121 11:31:13.073802 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-4ql9t"] Nov 21 11:31:13 crc kubenswrapper[4972]: I1121 11:31:13.084747 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-4ql9t"] Nov 21 11:31:13 crc kubenswrapper[4972]: I1121 11:31:13.780155 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50b27865-54df-4c42-9736-58c80e882e00" path="/var/lib/kubelet/pods/50b27865-54df-4c42-9736-58c80e882e00/volumes" Nov 21 11:31:22 crc kubenswrapper[4972]: I1121 11:31:22.760219 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:31:22 crc kubenswrapper[4972]: E1121 11:31:22.761322 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:31:33 crc kubenswrapper[4972]: I1121 11:31:33.760151 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:31:33 crc kubenswrapper[4972]: E1121 11:31:33.760992 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:31:37 crc kubenswrapper[4972]: I1121 11:31:37.433048 4972 scope.go:117] "RemoveContainer" containerID="958edea82e9cc878b3542efaf4effeaa08caad8c3519d9e081bab43e298cfe23" Nov 21 11:31:37 crc kubenswrapper[4972]: I1121 11:31:37.552449 4972 scope.go:117] "RemoveContainer" containerID="52db02c247b74a1801e4ff5e4137738cdd4631dda9e0c67a7f255287cc78f74c" Nov 21 11:31:45 crc kubenswrapper[4972]: I1121 11:31:45.766959 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:31:45 crc kubenswrapper[4972]: E1121 11:31:45.767850 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:31:56 crc kubenswrapper[4972]: I1121 11:31:56.759942 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:31:56 crc kubenswrapper[4972]: E1121 11:31:56.761013 
4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:32:07 crc kubenswrapper[4972]: I1121 11:32:07.759681 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:32:07 crc kubenswrapper[4972]: E1121 11:32:07.760851 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:32:19 crc kubenswrapper[4972]: I1121 11:32:19.760747 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:32:19 crc kubenswrapper[4972]: E1121 11:32:19.762318 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:32:34 crc kubenswrapper[4972]: I1121 11:32:34.760553 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:32:35 crc kubenswrapper[4972]: I1121 11:32:35.477002 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"b8a61924f939aa6294ac4169bfcff13e2c0e9780bf272ce11947e911688c0b8e"} Nov 21 11:33:53 crc kubenswrapper[4972]: I1121 11:33:53.066488 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-rzn97"] Nov 21 11:33:53 crc kubenswrapper[4972]: I1121 11:33:53.087511 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-4f8e-account-create-89w2g"] Nov 21 11:33:53 crc kubenswrapper[4972]: I1121 11:33:53.098890 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-rzn97"] Nov 21 11:33:53 crc kubenswrapper[4972]: I1121 11:33:53.108313 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-4f8e-account-create-89w2g"] Nov 21 11:33:53 crc kubenswrapper[4972]: I1121 11:33:53.777721 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="054461e0-29d2-4eb4-be72-a6e6e0c6fe5f" path="/var/lib/kubelet/pods/054461e0-29d2-4eb4-be72-a6e6e0c6fe5f/volumes" Nov 21 11:33:53 crc kubenswrapper[4972]: I1121 11:33:53.779884 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d2d2162-c2ba-4192-a494-b4d8825272af" path="/var/lib/kubelet/pods/8d2d2162-c2ba-4192-a494-b4d8825272af/volumes" Nov 21 11:34:11 crc kubenswrapper[4972]: I1121 11:34:11.057577 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-db-sync-z42h6"] Nov 21 11:34:11 crc kubenswrapper[4972]: I1121 11:34:11.068539 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-z42h6"] Nov 21 11:34:11 crc kubenswrapper[4972]: I1121 11:34:11.780226 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1b0d062-1781-4712-aa0d-fe59728e52e1" path="/var/lib/kubelet/pods/b1b0d062-1781-4712-aa0d-fe59728e52e1/volumes" Nov 21 11:34:37 crc kubenswrapper[4972]: I1121 11:34:37.694684 4972 scope.go:117] "RemoveContainer" containerID="3e2c3c48e02b6939c41ddb61b6e5ba4d82bb4b3177fc97ea38d25ac273c05cd6" Nov 21 11:34:37 crc kubenswrapper[4972]: I1121 11:34:37.720894 4972 scope.go:117] "RemoveContainer" containerID="8baaf5fcfdcf9482ee48ca7d86f8e6f088a3d3ca25d83828703e56359ce019ac" Nov 21 11:34:37 crc kubenswrapper[4972]: I1121 11:34:37.786790 4972 scope.go:117] "RemoveContainer" containerID="abf8b55c8c5b2bb6f348026097b4eba3d45252b7417e97ba8b586d2a9b095172" Nov 21 11:34:56 crc kubenswrapper[4972]: I1121 11:34:56.178899 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:34:56 crc kubenswrapper[4972]: I1121 11:34:56.179781 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:35:26 crc kubenswrapper[4972]: I1121 11:35:26.178916 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:35:26 crc kubenswrapper[4972]: I1121 11:35:26.179790 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:35:56 crc kubenswrapper[4972]: I1121 11:35:56.179284 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:35:56 crc kubenswrapper[4972]: I1121 11:35:56.180262 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:35:56 crc kubenswrapper[4972]: I1121 11:35:56.180347 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 11:35:56 crc kubenswrapper[4972]: I1121 11:35:56.181640 4972 kuberuntime_manager.go:1027] "Message for Container of 
pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b8a61924f939aa6294ac4169bfcff13e2c0e9780bf272ce11947e911688c0b8e"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 11:35:56 crc kubenswrapper[4972]: I1121 11:35:56.181731 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://b8a61924f939aa6294ac4169bfcff13e2c0e9780bf272ce11947e911688c0b8e" gracePeriod=600 Nov 21 11:35:57 crc kubenswrapper[4972]: I1121 11:35:57.010955 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="b8a61924f939aa6294ac4169bfcff13e2c0e9780bf272ce11947e911688c0b8e" exitCode=0 Nov 21 11:35:57 crc kubenswrapper[4972]: I1121 11:35:57.011022 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"b8a61924f939aa6294ac4169bfcff13e2c0e9780bf272ce11947e911688c0b8e"} Nov 21 11:35:57 crc kubenswrapper[4972]: I1121 11:35:57.011365 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8"} Nov 21 11:35:57 crc kubenswrapper[4972]: I1121 11:35:57.011393 4972 scope.go:117] "RemoveContainer" containerID="8fd8bdcefc48e1e46c30f83110429a248dba898dd0cca13175b6aac7f2b70453" Nov 21 11:36:28 crc kubenswrapper[4972]: I1121 11:36:28.072130 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-d5da-account-create-xlssc"] Nov 21 11:36:28 crc kubenswrapper[4972]: I1121 11:36:28.085077 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-slr78"] Nov 21 11:36:28 crc kubenswrapper[4972]: I1121 11:36:28.097984 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-d5da-account-create-xlssc"] Nov 21 11:36:28 crc kubenswrapper[4972]: I1121 11:36:28.108304 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-slr78"] Nov 21 11:36:29 crc kubenswrapper[4972]: I1121 11:36:29.775736 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bc4ff6c-b869-4c9d-b8a5-4c53ae802499" path="/var/lib/kubelet/pods/9bc4ff6c-b869-4c9d-b8a5-4c53ae802499/volumes" Nov 21 11:36:29 crc kubenswrapper[4972]: I1121 11:36:29.776545 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10" path="/var/lib/kubelet/pods/e8f56c97-4a4b-4243-9cef-0a0cfcfc7f10/volumes" Nov 21 11:36:37 crc kubenswrapper[4972]: I1121 11:36:37.939195 4972 scope.go:117] "RemoveContainer" containerID="a91b4938b29e220872e608e376f91f574284e53be09c0f358c1acea7b4244c32" Nov 21 11:36:37 crc kubenswrapper[4972]: I1121 11:36:37.983706 4972 scope.go:117] "RemoveContainer" containerID="4d1c7146b08d1b5dea1474e0b1e7463cbff363070ea43f94866e242adc3ae65c" Nov 21 11:36:39 crc kubenswrapper[4972]: I1121 11:36:39.044073 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-z5wc9"] Nov 21 11:36:39 crc kubenswrapper[4972]: I1121 11:36:39.054712 4972 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-z5wc9"] Nov 21 11:36:39 crc kubenswrapper[4972]: I1121 11:36:39.781149 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffac6a90-89db-46b9-8c66-96177f795af2" path="/var/lib/kubelet/pods/ffac6a90-89db-46b9-8c66-96177f795af2/volumes" Nov 21 11:36:59 crc kubenswrapper[4972]: I1121 11:36:59.069767 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-b66hk"] Nov 21 11:36:59 crc kubenswrapper[4972]: I1121 11:36:59.087735 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-0907-account-create-sr28f"] Nov 21 11:36:59 crc kubenswrapper[4972]: I1121 11:36:59.100849 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-b66hk"] Nov 21 11:36:59 crc kubenswrapper[4972]: I1121 11:36:59.110557 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-0907-account-create-sr28f"] Nov 21 11:36:59 crc kubenswrapper[4972]: I1121 11:36:59.776526 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2a1e27a-2332-4812-8449-5dc621f868be" path="/var/lib/kubelet/pods/c2a1e27a-2332-4812-8449-5dc621f868be/volumes" Nov 21 11:36:59 crc kubenswrapper[4972]: I1121 11:36:59.781966 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db4e2872-38ce-4a6d-95cb-a96009c830eb" path="/var/lib/kubelet/pods/db4e2872-38ce-4a6d-95cb-a96009c830eb/volumes" Nov 21 11:37:12 crc kubenswrapper[4972]: I1121 11:37:12.046609 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-fzq6w"] Nov 21 11:37:12 crc kubenswrapper[4972]: I1121 11:37:12.065415 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-fzq6w"] Nov 21 11:37:13 crc kubenswrapper[4972]: I1121 11:37:13.774926 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc89a5eb-45b7-4f6f-83f9-0e3930716033" path="/var/lib/kubelet/pods/fc89a5eb-45b7-4f6f-83f9-0e3930716033/volumes" Nov 21 11:37:38 crc kubenswrapper[4972]: I1121 11:37:38.108333 4972 scope.go:117] "RemoveContainer" containerID="782b7593fe90a9e77ed9f4af936d2a089484907834cdde791139d04a327117d0" Nov 21 11:37:38 crc kubenswrapper[4972]: I1121 11:37:38.178766 4972 scope.go:117] "RemoveContainer" containerID="f4cf196adb4448c010da6887decd6675960faeef07239438e4891e23f5112fbc" Nov 21 11:37:38 crc kubenswrapper[4972]: I1121 11:37:38.226747 4972 scope.go:117] "RemoveContainer" containerID="ad7c4617fc6180c3575468abe5dfd86753c63ba53c51af061fdeb35e10443512" Nov 21 11:37:38 crc kubenswrapper[4972]: I1121 11:37:38.306122 4972 scope.go:117] "RemoveContainer" containerID="22bb5f415d5c075e5eb476be643bec8e4a9a09a2be654d695b3a104477c7b16f" Nov 21 11:37:56 crc kubenswrapper[4972]: I1121 11:37:56.179168 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:37:56 crc kubenswrapper[4972]: I1121 11:37:56.179988 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:38:26 crc kubenswrapper[4972]: I1121 
11:38:26.179375 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:38:26 crc kubenswrapper[4972]: I1121 11:38:26.180152 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:38:56 crc kubenswrapper[4972]: I1121 11:38:56.179294 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:38:56 crc kubenswrapper[4972]: I1121 11:38:56.179814 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:38:56 crc kubenswrapper[4972]: I1121 11:38:56.179880 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 11:38:56 crc kubenswrapper[4972]: I1121 11:38:56.180615 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 11:38:56 crc kubenswrapper[4972]: I1121 11:38:56.180681 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" gracePeriod=600 Nov 21 11:38:56 crc kubenswrapper[4972]: E1121 11:38:56.335882 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:38:57 crc kubenswrapper[4972]: I1121 11:38:57.345649 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" exitCode=0 Nov 21 11:38:57 crc kubenswrapper[4972]: I1121 11:38:57.345700 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" 
event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8"} Nov 21 11:38:57 crc kubenswrapper[4972]: I1121 11:38:57.345743 4972 scope.go:117] "RemoveContainer" containerID="b8a61924f939aa6294ac4169bfcff13e2c0e9780bf272ce11947e911688c0b8e" Nov 21 11:38:57 crc kubenswrapper[4972]: I1121 11:38:57.346580 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:38:57 crc kubenswrapper[4972]: E1121 11:38:57.347329 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:39:11 crc kubenswrapper[4972]: I1121 11:39:11.760867 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:39:11 crc kubenswrapper[4972]: E1121 11:39:11.762270 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:39:14 crc kubenswrapper[4972]: I1121 11:39:14.360437 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dm9gl"] Nov 21 11:39:14 crc kubenswrapper[4972]: E1121 11:39:14.361825 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d4b6d48-65e1-46ef-b330-b059b679cc66" containerName="registry-server" Nov 21 11:39:14 crc kubenswrapper[4972]: I1121 11:39:14.361880 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d4b6d48-65e1-46ef-b330-b059b679cc66" containerName="registry-server" Nov 21 11:39:14 crc kubenswrapper[4972]: E1121 11:39:14.361905 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d4b6d48-65e1-46ef-b330-b059b679cc66" containerName="extract-utilities" Nov 21 11:39:14 crc kubenswrapper[4972]: I1121 11:39:14.361918 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d4b6d48-65e1-46ef-b330-b059b679cc66" containerName="extract-utilities" Nov 21 11:39:14 crc kubenswrapper[4972]: E1121 11:39:14.361956 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d4b6d48-65e1-46ef-b330-b059b679cc66" containerName="extract-content" Nov 21 11:39:14 crc kubenswrapper[4972]: I1121 11:39:14.361970 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d4b6d48-65e1-46ef-b330-b059b679cc66" containerName="extract-content" Nov 21 11:39:14 crc kubenswrapper[4972]: I1121 11:39:14.362355 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d4b6d48-65e1-46ef-b330-b059b679cc66" containerName="registry-server" Nov 21 11:39:14 crc kubenswrapper[4972]: I1121 11:39:14.381917 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dm9gl" Nov 21 11:39:14 crc kubenswrapper[4972]: I1121 11:39:14.446881 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dm9gl"] Nov 21 11:39:14 crc kubenswrapper[4972]: I1121 11:39:14.517592 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a77713dc-c645-4d18-b1b4-750673cae4d4-utilities\") pod \"redhat-operators-dm9gl\" (UID: \"a77713dc-c645-4d18-b1b4-750673cae4d4\") " pod="openshift-marketplace/redhat-operators-dm9gl" Nov 21 11:39:14 crc kubenswrapper[4972]: I1121 11:39:14.517681 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a77713dc-c645-4d18-b1b4-750673cae4d4-catalog-content\") pod \"redhat-operators-dm9gl\" (UID: \"a77713dc-c645-4d18-b1b4-750673cae4d4\") " pod="openshift-marketplace/redhat-operators-dm9gl" Nov 21 11:39:14 crc kubenswrapper[4972]: I1121 11:39:14.517743 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrcrb\" (UniqueName: \"kubernetes.io/projected/a77713dc-c645-4d18-b1b4-750673cae4d4-kube-api-access-rrcrb\") pod \"redhat-operators-dm9gl\" (UID: \"a77713dc-c645-4d18-b1b4-750673cae4d4\") " pod="openshift-marketplace/redhat-operators-dm9gl" Nov 21 11:39:14 crc kubenswrapper[4972]: I1121 11:39:14.619966 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a77713dc-c645-4d18-b1b4-750673cae4d4-utilities\") pod \"redhat-operators-dm9gl\" (UID: \"a77713dc-c645-4d18-b1b4-750673cae4d4\") " pod="openshift-marketplace/redhat-operators-dm9gl" Nov 21 11:39:14 crc kubenswrapper[4972]: I1121 11:39:14.620080 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a77713dc-c645-4d18-b1b4-750673cae4d4-catalog-content\") pod \"redhat-operators-dm9gl\" (UID: \"a77713dc-c645-4d18-b1b4-750673cae4d4\") " pod="openshift-marketplace/redhat-operators-dm9gl" Nov 21 11:39:14 crc kubenswrapper[4972]: I1121 11:39:14.620541 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a77713dc-c645-4d18-b1b4-750673cae4d4-utilities\") pod \"redhat-operators-dm9gl\" (UID: \"a77713dc-c645-4d18-b1b4-750673cae4d4\") " pod="openshift-marketplace/redhat-operators-dm9gl" Nov 21 11:39:14 crc kubenswrapper[4972]: I1121 11:39:14.620593 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a77713dc-c645-4d18-b1b4-750673cae4d4-catalog-content\") pod \"redhat-operators-dm9gl\" (UID: \"a77713dc-c645-4d18-b1b4-750673cae4d4\") " pod="openshift-marketplace/redhat-operators-dm9gl" Nov 21 11:39:14 crc kubenswrapper[4972]: I1121 11:39:14.620742 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrcrb\" (UniqueName: \"kubernetes.io/projected/a77713dc-c645-4d18-b1b4-750673cae4d4-kube-api-access-rrcrb\") pod \"redhat-operators-dm9gl\" (UID: \"a77713dc-c645-4d18-b1b4-750673cae4d4\") " pod="openshift-marketplace/redhat-operators-dm9gl" Nov 21 11:39:14 crc kubenswrapper[4972]: I1121 11:39:14.655723 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rrcrb\" (UniqueName: \"kubernetes.io/projected/a77713dc-c645-4d18-b1b4-750673cae4d4-kube-api-access-rrcrb\") pod \"redhat-operators-dm9gl\" (UID: \"a77713dc-c645-4d18-b1b4-750673cae4d4\") " pod="openshift-marketplace/redhat-operators-dm9gl" Nov 21 11:39:14 crc kubenswrapper[4972]: I1121 11:39:14.739074 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dm9gl" Nov 21 11:39:15 crc kubenswrapper[4972]: I1121 11:39:15.247416 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dm9gl"] Nov 21 11:39:15 crc kubenswrapper[4972]: I1121 11:39:15.555014 4972 generic.go:334] "Generic (PLEG): container finished" podID="a77713dc-c645-4d18-b1b4-750673cae4d4" containerID="9e6eb7ab78e4c91316809b8fbf57c8057835bccf81479752c13a7cb9ad51e413" exitCode=0 Nov 21 11:39:15 crc kubenswrapper[4972]: I1121 11:39:15.555286 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dm9gl" event={"ID":"a77713dc-c645-4d18-b1b4-750673cae4d4","Type":"ContainerDied","Data":"9e6eb7ab78e4c91316809b8fbf57c8057835bccf81479752c13a7cb9ad51e413"} Nov 21 11:39:15 crc kubenswrapper[4972]: I1121 11:39:15.555310 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dm9gl" event={"ID":"a77713dc-c645-4d18-b1b4-750673cae4d4","Type":"ContainerStarted","Data":"1075ebca19d983001f52d7cd0c909eaa1b4f97031a39201b374770992be3471d"} Nov 21 11:39:15 crc kubenswrapper[4972]: I1121 11:39:15.558195 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 11:39:17 crc kubenswrapper[4972]: I1121 11:39:17.582960 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dm9gl" event={"ID":"a77713dc-c645-4d18-b1b4-750673cae4d4","Type":"ContainerStarted","Data":"740f8e1b9fcb80e8b721460f718d523283c220b5dac060747c65f29a487d3649"} Nov 21 11:39:22 crc kubenswrapper[4972]: I1121 11:39:22.642701 4972 generic.go:334] "Generic (PLEG): container finished" podID="a77713dc-c645-4d18-b1b4-750673cae4d4" containerID="740f8e1b9fcb80e8b721460f718d523283c220b5dac060747c65f29a487d3649" exitCode=0 Nov 21 11:39:22 crc kubenswrapper[4972]: I1121 11:39:22.642913 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dm9gl" event={"ID":"a77713dc-c645-4d18-b1b4-750673cae4d4","Type":"ContainerDied","Data":"740f8e1b9fcb80e8b721460f718d523283c220b5dac060747c65f29a487d3649"} Nov 21 11:39:23 crc kubenswrapper[4972]: I1121 11:39:23.655796 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dm9gl" event={"ID":"a77713dc-c645-4d18-b1b4-750673cae4d4","Type":"ContainerStarted","Data":"557822fdca439e3a03c6aa9270fcf219521ceb11debbd26a831b3e1ef3b5da3f"} Nov 21 11:39:23 crc kubenswrapper[4972]: I1121 11:39:23.690025 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dm9gl" podStartSLOduration=2.223114872 podStartE2EDuration="9.690008532s" podCreationTimestamp="2025-11-21 11:39:14 +0000 UTC" firstStartedPulling="2025-11-21 11:39:15.557973692 +0000 UTC m=+7100.667116190" lastFinishedPulling="2025-11-21 11:39:23.024867342 +0000 UTC m=+7108.134009850" observedRunningTime="2025-11-21 11:39:23.676900056 +0000 UTC m=+7108.786042554" watchObservedRunningTime="2025-11-21 11:39:23.690008532 +0000 UTC m=+7108.799151030" Nov 21 11:39:24 crc 
kubenswrapper[4972]: I1121 11:39:24.739604 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dm9gl" Nov 21 11:39:24 crc kubenswrapper[4972]: I1121 11:39:24.739655 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dm9gl" Nov 21 11:39:24 crc kubenswrapper[4972]: I1121 11:39:24.762582 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:39:24 crc kubenswrapper[4972]: E1121 11:39:24.763672 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:39:25 crc kubenswrapper[4972]: I1121 11:39:25.803591 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dm9gl" podUID="a77713dc-c645-4d18-b1b4-750673cae4d4" containerName="registry-server" probeResult="failure" output=< Nov 21 11:39:25 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 11:39:25 crc kubenswrapper[4972]: > Nov 21 11:39:34 crc kubenswrapper[4972]: I1121 11:39:34.805249 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dm9gl" Nov 21 11:39:34 crc kubenswrapper[4972]: I1121 11:39:34.895938 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dm9gl" Nov 21 11:39:35 crc kubenswrapper[4972]: I1121 11:39:35.047323 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dm9gl"] Nov 21 11:39:36 crc kubenswrapper[4972]: I1121 11:39:36.760821 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:39:36 crc kubenswrapper[4972]: E1121 11:39:36.761689 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:39:36 crc kubenswrapper[4972]: I1121 11:39:36.791760 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dm9gl" podUID="a77713dc-c645-4d18-b1b4-750673cae4d4" containerName="registry-server" containerID="cri-o://557822fdca439e3a03c6aa9270fcf219521ceb11debbd26a831b3e1ef3b5da3f" gracePeriod=2 Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.416579 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dm9gl" Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.504774 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrcrb\" (UniqueName: \"kubernetes.io/projected/a77713dc-c645-4d18-b1b4-750673cae4d4-kube-api-access-rrcrb\") pod \"a77713dc-c645-4d18-b1b4-750673cae4d4\" (UID: \"a77713dc-c645-4d18-b1b4-750673cae4d4\") " Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.504919 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a77713dc-c645-4d18-b1b4-750673cae4d4-utilities\") pod \"a77713dc-c645-4d18-b1b4-750673cae4d4\" (UID: \"a77713dc-c645-4d18-b1b4-750673cae4d4\") " Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.505021 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a77713dc-c645-4d18-b1b4-750673cae4d4-catalog-content\") pod \"a77713dc-c645-4d18-b1b4-750673cae4d4\" (UID: \"a77713dc-c645-4d18-b1b4-750673cae4d4\") " Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.506267 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a77713dc-c645-4d18-b1b4-750673cae4d4-utilities" (OuterVolumeSpecName: "utilities") pod "a77713dc-c645-4d18-b1b4-750673cae4d4" (UID: "a77713dc-c645-4d18-b1b4-750673cae4d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.530270 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a77713dc-c645-4d18-b1b4-750673cae4d4-kube-api-access-rrcrb" (OuterVolumeSpecName: "kube-api-access-rrcrb") pod "a77713dc-c645-4d18-b1b4-750673cae4d4" (UID: "a77713dc-c645-4d18-b1b4-750673cae4d4"). InnerVolumeSpecName "kube-api-access-rrcrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.601626 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a77713dc-c645-4d18-b1b4-750673cae4d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a77713dc-c645-4d18-b1b4-750673cae4d4" (UID: "a77713dc-c645-4d18-b1b4-750673cae4d4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.607626 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrcrb\" (UniqueName: \"kubernetes.io/projected/a77713dc-c645-4d18-b1b4-750673cae4d4-kube-api-access-rrcrb\") on node \"crc\" DevicePath \"\"" Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.607652 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a77713dc-c645-4d18-b1b4-750673cae4d4-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.607679 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a77713dc-c645-4d18-b1b4-750673cae4d4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.814786 4972 generic.go:334] "Generic (PLEG): container finished" podID="a77713dc-c645-4d18-b1b4-750673cae4d4" containerID="557822fdca439e3a03c6aa9270fcf219521ceb11debbd26a831b3e1ef3b5da3f" exitCode=0 Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.814871 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dm9gl" event={"ID":"a77713dc-c645-4d18-b1b4-750673cae4d4","Type":"ContainerDied","Data":"557822fdca439e3a03c6aa9270fcf219521ceb11debbd26a831b3e1ef3b5da3f"} Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.814913 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dm9gl" event={"ID":"a77713dc-c645-4d18-b1b4-750673cae4d4","Type":"ContainerDied","Data":"1075ebca19d983001f52d7cd0c909eaa1b4f97031a39201b374770992be3471d"} Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.814944 4972 scope.go:117] "RemoveContainer" containerID="557822fdca439e3a03c6aa9270fcf219521ceb11debbd26a831b3e1ef3b5da3f" Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.815971 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dm9gl" Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.865057 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dm9gl"] Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.872027 4972 scope.go:117] "RemoveContainer" containerID="740f8e1b9fcb80e8b721460f718d523283c220b5dac060747c65f29a487d3649" Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.874580 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dm9gl"] Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.902881 4972 scope.go:117] "RemoveContainer" containerID="9e6eb7ab78e4c91316809b8fbf57c8057835bccf81479752c13a7cb9ad51e413" Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.973166 4972 scope.go:117] "RemoveContainer" containerID="557822fdca439e3a03c6aa9270fcf219521ceb11debbd26a831b3e1ef3b5da3f" Nov 21 11:39:37 crc kubenswrapper[4972]: E1121 11:39:37.973586 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"557822fdca439e3a03c6aa9270fcf219521ceb11debbd26a831b3e1ef3b5da3f\": container with ID starting with 557822fdca439e3a03c6aa9270fcf219521ceb11debbd26a831b3e1ef3b5da3f not found: ID does not exist" containerID="557822fdca439e3a03c6aa9270fcf219521ceb11debbd26a831b3e1ef3b5da3f" Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.973635 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"557822fdca439e3a03c6aa9270fcf219521ceb11debbd26a831b3e1ef3b5da3f"} err="failed to get container status \"557822fdca439e3a03c6aa9270fcf219521ceb11debbd26a831b3e1ef3b5da3f\": rpc error: code = NotFound desc = could not find container \"557822fdca439e3a03c6aa9270fcf219521ceb11debbd26a831b3e1ef3b5da3f\": container with ID starting with 557822fdca439e3a03c6aa9270fcf219521ceb11debbd26a831b3e1ef3b5da3f not found: ID does not exist" Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.973669 4972 scope.go:117] "RemoveContainer" containerID="740f8e1b9fcb80e8b721460f718d523283c220b5dac060747c65f29a487d3649" Nov 21 11:39:37 crc kubenswrapper[4972]: E1121 11:39:37.974038 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"740f8e1b9fcb80e8b721460f718d523283c220b5dac060747c65f29a487d3649\": container with ID starting with 740f8e1b9fcb80e8b721460f718d523283c220b5dac060747c65f29a487d3649 not found: ID does not exist" containerID="740f8e1b9fcb80e8b721460f718d523283c220b5dac060747c65f29a487d3649" Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.974071 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740f8e1b9fcb80e8b721460f718d523283c220b5dac060747c65f29a487d3649"} err="failed to get container status \"740f8e1b9fcb80e8b721460f718d523283c220b5dac060747c65f29a487d3649\": rpc error: code = NotFound desc = could not find container \"740f8e1b9fcb80e8b721460f718d523283c220b5dac060747c65f29a487d3649\": container with ID starting with 740f8e1b9fcb80e8b721460f718d523283c220b5dac060747c65f29a487d3649 not found: ID does not exist" Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.974093 4972 scope.go:117] "RemoveContainer" containerID="9e6eb7ab78e4c91316809b8fbf57c8057835bccf81479752c13a7cb9ad51e413" Nov 21 11:39:37 crc kubenswrapper[4972]: E1121 11:39:37.974321 4972 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"9e6eb7ab78e4c91316809b8fbf57c8057835bccf81479752c13a7cb9ad51e413\": container with ID starting with 9e6eb7ab78e4c91316809b8fbf57c8057835bccf81479752c13a7cb9ad51e413 not found: ID does not exist" containerID="9e6eb7ab78e4c91316809b8fbf57c8057835bccf81479752c13a7cb9ad51e413" Nov 21 11:39:37 crc kubenswrapper[4972]: I1121 11:39:37.974351 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e6eb7ab78e4c91316809b8fbf57c8057835bccf81479752c13a7cb9ad51e413"} err="failed to get container status \"9e6eb7ab78e4c91316809b8fbf57c8057835bccf81479752c13a7cb9ad51e413\": rpc error: code = NotFound desc = could not find container \"9e6eb7ab78e4c91316809b8fbf57c8057835bccf81479752c13a7cb9ad51e413\": container with ID starting with 9e6eb7ab78e4c91316809b8fbf57c8057835bccf81479752c13a7cb9ad51e413 not found: ID does not exist" Nov 21 11:39:38 crc kubenswrapper[4972]: I1121 11:39:38.828404 4972 generic.go:334] "Generic (PLEG): container finished" podID="0b686359-266d-4e3b-b383-2ed81fb826ed" containerID="27c76edd8498d7937b88ae5c54996105dc976b0a1a4298d79c2ec442aa284ecd" exitCode=0 Nov 21 11:39:38 crc kubenswrapper[4972]: I1121 11:39:38.828503 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" event={"ID":"0b686359-266d-4e3b-b383-2ed81fb826ed","Type":"ContainerDied","Data":"27c76edd8498d7937b88ae5c54996105dc976b0a1a4298d79c2ec442aa284ecd"} Nov 21 11:39:39 crc kubenswrapper[4972]: I1121 11:39:39.782517 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a77713dc-c645-4d18-b1b4-750673cae4d4" path="/var/lib/kubelet/pods/a77713dc-c645-4d18-b1b4-750673cae4d4/volumes" Nov 21 11:39:40 crc kubenswrapper[4972]: I1121 11:39:40.303738 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" Nov 21 11:39:40 crc kubenswrapper[4972]: I1121 11:39:40.375690 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-ceph\") pod \"0b686359-266d-4e3b-b383-2ed81fb826ed\" (UID: \"0b686359-266d-4e3b-b383-2ed81fb826ed\") " Nov 21 11:39:40 crc kubenswrapper[4972]: I1121 11:39:40.375755 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt59p\" (UniqueName: \"kubernetes.io/projected/0b686359-266d-4e3b-b383-2ed81fb826ed-kube-api-access-jt59p\") pod \"0b686359-266d-4e3b-b383-2ed81fb826ed\" (UID: \"0b686359-266d-4e3b-b383-2ed81fb826ed\") " Nov 21 11:39:40 crc kubenswrapper[4972]: I1121 11:39:40.375962 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-inventory\") pod \"0b686359-266d-4e3b-b383-2ed81fb826ed\" (UID: \"0b686359-266d-4e3b-b383-2ed81fb826ed\") " Nov 21 11:39:40 crc kubenswrapper[4972]: I1121 11:39:40.376011 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-ssh-key\") pod \"0b686359-266d-4e3b-b383-2ed81fb826ed\" (UID: \"0b686359-266d-4e3b-b383-2ed81fb826ed\") " Nov 21 11:39:40 crc kubenswrapper[4972]: I1121 11:39:40.376241 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-tripleo-cleanup-combined-ca-bundle\") pod \"0b686359-266d-4e3b-b383-2ed81fb826ed\" (UID: \"0b686359-266d-4e3b-b383-2ed81fb826ed\") " Nov 21 11:39:40 crc kubenswrapper[4972]: I1121 11:39:40.382181 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "0b686359-266d-4e3b-b383-2ed81fb826ed" (UID: "0b686359-266d-4e3b-b383-2ed81fb826ed"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:39:40 crc kubenswrapper[4972]: I1121 11:39:40.382793 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-ceph" (OuterVolumeSpecName: "ceph") pod "0b686359-266d-4e3b-b383-2ed81fb826ed" (UID: "0b686359-266d-4e3b-b383-2ed81fb826ed"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:39:40 crc kubenswrapper[4972]: I1121 11:39:40.385259 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b686359-266d-4e3b-b383-2ed81fb826ed-kube-api-access-jt59p" (OuterVolumeSpecName: "kube-api-access-jt59p") pod "0b686359-266d-4e3b-b383-2ed81fb826ed" (UID: "0b686359-266d-4e3b-b383-2ed81fb826ed"). InnerVolumeSpecName "kube-api-access-jt59p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:39:40 crc kubenswrapper[4972]: I1121 11:39:40.407686 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-inventory" (OuterVolumeSpecName: "inventory") pod "0b686359-266d-4e3b-b383-2ed81fb826ed" (UID: "0b686359-266d-4e3b-b383-2ed81fb826ed"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:39:40 crc kubenswrapper[4972]: I1121 11:39:40.417865 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0b686359-266d-4e3b-b383-2ed81fb826ed" (UID: "0b686359-266d-4e3b-b383-2ed81fb826ed"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:39:40 crc kubenswrapper[4972]: I1121 11:39:40.479645 4972 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:39:40 crc kubenswrapper[4972]: I1121 11:39:40.479691 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 11:39:40 crc kubenswrapper[4972]: I1121 11:39:40.479713 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt59p\" (UniqueName: \"kubernetes.io/projected/0b686359-266d-4e3b-b383-2ed81fb826ed-kube-api-access-jt59p\") on node \"crc\" DevicePath \"\"" Nov 21 11:39:40 crc kubenswrapper[4972]: I1121 11:39:40.479731 4972 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 11:39:40 crc kubenswrapper[4972]: I1121 11:39:40.479747 4972 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b686359-266d-4e3b-b383-2ed81fb826ed-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 11:39:40 crc kubenswrapper[4972]: I1121 11:39:40.854960 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" Nov 21 11:39:40 crc kubenswrapper[4972]: I1121 11:39:40.855046 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54" event={"ID":"0b686359-266d-4e3b-b383-2ed81fb826ed","Type":"ContainerDied","Data":"93b889d8ad66769e4f6ca2908a30681e00f179a3e3297dfc4ee5472a816f8607"} Nov 21 11:39:40 crc kubenswrapper[4972]: I1121 11:39:40.855103 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93b889d8ad66769e4f6ca2908a30681e00f179a3e3297dfc4ee5472a816f8607" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.233740 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-2mfkz"] Nov 21 11:39:45 crc kubenswrapper[4972]: E1121 11:39:45.236367 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b686359-266d-4e3b-b383-2ed81fb826ed" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.236522 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b686359-266d-4e3b-b383-2ed81fb826ed" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 21 11:39:45 crc kubenswrapper[4972]: E1121 11:39:45.236729 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a77713dc-c645-4d18-b1b4-750673cae4d4" containerName="registry-server" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.236893 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="a77713dc-c645-4d18-b1b4-750673cae4d4" containerName="registry-server" Nov 21 11:39:45 crc kubenswrapper[4972]: E1121 11:39:45.237047 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a77713dc-c645-4d18-b1b4-750673cae4d4" containerName="extract-content" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.237168 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="a77713dc-c645-4d18-b1b4-750673cae4d4" containerName="extract-content" Nov 21 11:39:45 crc kubenswrapper[4972]: E1121 11:39:45.237355 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a77713dc-c645-4d18-b1b4-750673cae4d4" containerName="extract-utilities" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.237467 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="a77713dc-c645-4d18-b1b4-750673cae4d4" containerName="extract-utilities" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.238101 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="a77713dc-c645-4d18-b1b4-750673cae4d4" containerName="registry-server" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.238309 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b686359-266d-4e3b-b383-2ed81fb826ed" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.239997 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.243591 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.254432 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.254765 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g4l5l" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.255140 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.261650 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-2mfkz"] Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.292858 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f9pt\" (UniqueName: \"kubernetes.io/projected/c828a840-b09c-419e-ab5c-1771ecceeed8-kube-api-access-6f9pt\") pod \"bootstrap-openstack-openstack-cell1-2mfkz\" (UID: \"c828a840-b09c-419e-ab5c-1771ecceeed8\") " pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.292957 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-2mfkz\" (UID: \"c828a840-b09c-419e-ab5c-1771ecceeed8\") " pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.292992 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-ceph\") pod \"bootstrap-openstack-openstack-cell1-2mfkz\" (UID: \"c828a840-b09c-419e-ab5c-1771ecceeed8\") " pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.293009 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-2mfkz\" (UID: \"c828a840-b09c-419e-ab5c-1771ecceeed8\") " pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.293120 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-inventory\") pod \"bootstrap-openstack-openstack-cell1-2mfkz\" (UID: \"c828a840-b09c-419e-ab5c-1771ecceeed8\") " pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.395065 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f9pt\" (UniqueName: \"kubernetes.io/projected/c828a840-b09c-419e-ab5c-1771ecceeed8-kube-api-access-6f9pt\") pod \"bootstrap-openstack-openstack-cell1-2mfkz\" (UID: \"c828a840-b09c-419e-ab5c-1771ecceeed8\") " pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" Nov 21 
11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.395145 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-2mfkz\" (UID: \"c828a840-b09c-419e-ab5c-1771ecceeed8\") " pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.395176 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-2mfkz\" (UID: \"c828a840-b09c-419e-ab5c-1771ecceeed8\") " pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.395193 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-ceph\") pod \"bootstrap-openstack-openstack-cell1-2mfkz\" (UID: \"c828a840-b09c-419e-ab5c-1771ecceeed8\") " pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.395318 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-inventory\") pod \"bootstrap-openstack-openstack-cell1-2mfkz\" (UID: \"c828a840-b09c-419e-ab5c-1771ecceeed8\") " pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.401074 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-2mfkz\" (UID: \"c828a840-b09c-419e-ab5c-1771ecceeed8\") " pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.401327 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-inventory\") pod \"bootstrap-openstack-openstack-cell1-2mfkz\" (UID: \"c828a840-b09c-419e-ab5c-1771ecceeed8\") " pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.401560 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-ceph\") pod \"bootstrap-openstack-openstack-cell1-2mfkz\" (UID: \"c828a840-b09c-419e-ab5c-1771ecceeed8\") " pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.401667 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-2mfkz\" (UID: \"c828a840-b09c-419e-ab5c-1771ecceeed8\") " pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.421324 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f9pt\" (UniqueName: \"kubernetes.io/projected/c828a840-b09c-419e-ab5c-1771ecceeed8-kube-api-access-6f9pt\") pod \"bootstrap-openstack-openstack-cell1-2mfkz\" (UID: 
\"c828a840-b09c-419e-ab5c-1771ecceeed8\") " pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" Nov 21 11:39:45 crc kubenswrapper[4972]: I1121 11:39:45.572708 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" Nov 21 11:39:46 crc kubenswrapper[4972]: I1121 11:39:46.242187 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-2mfkz"] Nov 21 11:39:46 crc kubenswrapper[4972]: I1121 11:39:46.925981 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" event={"ID":"c828a840-b09c-419e-ab5c-1771ecceeed8","Type":"ContainerStarted","Data":"2ad48e4de067666680f71b0a74c0dc0316d59b76530a43b92b32fee3c0c1a70f"} Nov 21 11:39:47 crc kubenswrapper[4972]: I1121 11:39:47.939457 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" event={"ID":"c828a840-b09c-419e-ab5c-1771ecceeed8","Type":"ContainerStarted","Data":"d01e79f7c7ff61f95422918ed0aaa6b8117d0c0a6952ff1d387e745d391ae512"} Nov 21 11:39:47 crc kubenswrapper[4972]: I1121 11:39:47.968673 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" podStartSLOduration=2.470651472 podStartE2EDuration="2.968655652s" podCreationTimestamp="2025-11-21 11:39:45 +0000 UTC" firstStartedPulling="2025-11-21 11:39:46.246121855 +0000 UTC m=+7131.355264393" lastFinishedPulling="2025-11-21 11:39:46.744126075 +0000 UTC m=+7131.853268573" observedRunningTime="2025-11-21 11:39:47.958527555 +0000 UTC m=+7133.067670063" watchObservedRunningTime="2025-11-21 11:39:47.968655652 +0000 UTC m=+7133.077798140" Nov 21 11:39:50 crc kubenswrapper[4972]: I1121 11:39:50.760241 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:39:50 crc kubenswrapper[4972]: E1121 11:39:50.761078 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:39:57 crc kubenswrapper[4972]: I1121 11:39:57.094372 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hcrfh"] Nov 21 11:39:57 crc kubenswrapper[4972]: I1121 11:39:57.101633 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hcrfh" Nov 21 11:39:57 crc kubenswrapper[4972]: I1121 11:39:57.114104 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hcrfh"] Nov 21 11:39:57 crc kubenswrapper[4972]: I1121 11:39:57.193558 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11ba34ab-39a2-432e-9fbb-c5ea6237eada-catalog-content\") pod \"redhat-marketplace-hcrfh\" (UID: \"11ba34ab-39a2-432e-9fbb-c5ea6237eada\") " pod="openshift-marketplace/redhat-marketplace-hcrfh" Nov 21 11:39:57 crc kubenswrapper[4972]: I1121 11:39:57.193818 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11ba34ab-39a2-432e-9fbb-c5ea6237eada-utilities\") pod \"redhat-marketplace-hcrfh\" (UID: \"11ba34ab-39a2-432e-9fbb-c5ea6237eada\") " pod="openshift-marketplace/redhat-marketplace-hcrfh" Nov 21 11:39:57 crc kubenswrapper[4972]: I1121 11:39:57.194325 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8ppx\" (UniqueName: \"kubernetes.io/projected/11ba34ab-39a2-432e-9fbb-c5ea6237eada-kube-api-access-m8ppx\") pod \"redhat-marketplace-hcrfh\" (UID: \"11ba34ab-39a2-432e-9fbb-c5ea6237eada\") " pod="openshift-marketplace/redhat-marketplace-hcrfh" Nov 21 11:39:57 crc kubenswrapper[4972]: I1121 11:39:57.296679 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8ppx\" (UniqueName: \"kubernetes.io/projected/11ba34ab-39a2-432e-9fbb-c5ea6237eada-kube-api-access-m8ppx\") pod \"redhat-marketplace-hcrfh\" (UID: \"11ba34ab-39a2-432e-9fbb-c5ea6237eada\") " pod="openshift-marketplace/redhat-marketplace-hcrfh" Nov 21 11:39:57 crc kubenswrapper[4972]: I1121 11:39:57.296928 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11ba34ab-39a2-432e-9fbb-c5ea6237eada-catalog-content\") pod \"redhat-marketplace-hcrfh\" (UID: \"11ba34ab-39a2-432e-9fbb-c5ea6237eada\") " pod="openshift-marketplace/redhat-marketplace-hcrfh" Nov 21 11:39:57 crc kubenswrapper[4972]: I1121 11:39:57.296977 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11ba34ab-39a2-432e-9fbb-c5ea6237eada-utilities\") pod \"redhat-marketplace-hcrfh\" (UID: \"11ba34ab-39a2-432e-9fbb-c5ea6237eada\") " pod="openshift-marketplace/redhat-marketplace-hcrfh" Nov 21 11:39:57 crc kubenswrapper[4972]: I1121 11:39:57.297629 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11ba34ab-39a2-432e-9fbb-c5ea6237eada-utilities\") pod \"redhat-marketplace-hcrfh\" (UID: \"11ba34ab-39a2-432e-9fbb-c5ea6237eada\") " pod="openshift-marketplace/redhat-marketplace-hcrfh" Nov 21 11:39:57 crc kubenswrapper[4972]: I1121 11:39:57.297702 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11ba34ab-39a2-432e-9fbb-c5ea6237eada-catalog-content\") pod \"redhat-marketplace-hcrfh\" (UID: \"11ba34ab-39a2-432e-9fbb-c5ea6237eada\") " pod="openshift-marketplace/redhat-marketplace-hcrfh" Nov 21 11:39:57 crc kubenswrapper[4972]: I1121 11:39:57.322298 4972 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-m8ppx\" (UniqueName: \"kubernetes.io/projected/11ba34ab-39a2-432e-9fbb-c5ea6237eada-kube-api-access-m8ppx\") pod \"redhat-marketplace-hcrfh\" (UID: \"11ba34ab-39a2-432e-9fbb-c5ea6237eada\") " pod="openshift-marketplace/redhat-marketplace-hcrfh" Nov 21 11:39:57 crc kubenswrapper[4972]: I1121 11:39:57.433616 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hcrfh" Nov 21 11:39:58 crc kubenswrapper[4972]: I1121 11:39:58.062220 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hcrfh"] Nov 21 11:39:59 crc kubenswrapper[4972]: I1121 11:39:59.083802 4972 generic.go:334] "Generic (PLEG): container finished" podID="11ba34ab-39a2-432e-9fbb-c5ea6237eada" containerID="4d6cabe075331e6438819d822e193864f2b0feb75079c6e10b3a3b1c41ef93df" exitCode=0 Nov 21 11:39:59 crc kubenswrapper[4972]: I1121 11:39:59.083928 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hcrfh" event={"ID":"11ba34ab-39a2-432e-9fbb-c5ea6237eada","Type":"ContainerDied","Data":"4d6cabe075331e6438819d822e193864f2b0feb75079c6e10b3a3b1c41ef93df"} Nov 21 11:39:59 crc kubenswrapper[4972]: I1121 11:39:59.084248 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hcrfh" event={"ID":"11ba34ab-39a2-432e-9fbb-c5ea6237eada","Type":"ContainerStarted","Data":"4329883361c063081dcfea097a2bf7b3b56b293ed0c2c5d616cd4be741bb3f7c"} Nov 21 11:40:00 crc kubenswrapper[4972]: I1121 11:40:00.098500 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hcrfh" event={"ID":"11ba34ab-39a2-432e-9fbb-c5ea6237eada","Type":"ContainerStarted","Data":"f24bddf50a316be753956d9b6e0308ba5ee08fe9cfb3097a9fc7d3d3ba85f2ac"} Nov 21 11:40:01 crc kubenswrapper[4972]: I1121 11:40:01.114414 4972 generic.go:334] "Generic (PLEG): container finished" podID="11ba34ab-39a2-432e-9fbb-c5ea6237eada" containerID="f24bddf50a316be753956d9b6e0308ba5ee08fe9cfb3097a9fc7d3d3ba85f2ac" exitCode=0 Nov 21 11:40:01 crc kubenswrapper[4972]: I1121 11:40:01.114492 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hcrfh" event={"ID":"11ba34ab-39a2-432e-9fbb-c5ea6237eada","Type":"ContainerDied","Data":"f24bddf50a316be753956d9b6e0308ba5ee08fe9cfb3097a9fc7d3d3ba85f2ac"} Nov 21 11:40:02 crc kubenswrapper[4972]: I1121 11:40:02.131130 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hcrfh" event={"ID":"11ba34ab-39a2-432e-9fbb-c5ea6237eada","Type":"ContainerStarted","Data":"f761d4150728ab573cb873061d87098603082a8dcfc5b65052d74d5394e9305b"} Nov 21 11:40:02 crc kubenswrapper[4972]: I1121 11:40:02.162884 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hcrfh" podStartSLOduration=2.705049077 podStartE2EDuration="5.162819179s" podCreationTimestamp="2025-11-21 11:39:57 +0000 UTC" firstStartedPulling="2025-11-21 11:39:59.085517038 +0000 UTC m=+7144.194659576" lastFinishedPulling="2025-11-21 11:40:01.54328714 +0000 UTC m=+7146.652429678" observedRunningTime="2025-11-21 11:40:02.152774135 +0000 UTC m=+7147.261916633" watchObservedRunningTime="2025-11-21 11:40:02.162819179 +0000 UTC m=+7147.271961717" Nov 21 11:40:02 crc kubenswrapper[4972]: I1121 11:40:02.760426 4972 scope.go:117] "RemoveContainer" 
containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:40:02 crc kubenswrapper[4972]: E1121 11:40:02.761014 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:40:07 crc kubenswrapper[4972]: I1121 11:40:07.436123 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hcrfh" Nov 21 11:40:07 crc kubenswrapper[4972]: I1121 11:40:07.436646 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hcrfh" Nov 21 11:40:07 crc kubenswrapper[4972]: I1121 11:40:07.505436 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hcrfh" Nov 21 11:40:08 crc kubenswrapper[4972]: I1121 11:40:08.260125 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hcrfh" Nov 21 11:40:08 crc kubenswrapper[4972]: I1121 11:40:08.322685 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hcrfh"] Nov 21 11:40:10 crc kubenswrapper[4972]: I1121 11:40:10.221956 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hcrfh" podUID="11ba34ab-39a2-432e-9fbb-c5ea6237eada" containerName="registry-server" containerID="cri-o://f761d4150728ab573cb873061d87098603082a8dcfc5b65052d74d5394e9305b" gracePeriod=2 Nov 21 11:40:10 crc kubenswrapper[4972]: I1121 11:40:10.777297 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hcrfh" Nov 21 11:40:10 crc kubenswrapper[4972]: I1121 11:40:10.858815 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8ppx\" (UniqueName: \"kubernetes.io/projected/11ba34ab-39a2-432e-9fbb-c5ea6237eada-kube-api-access-m8ppx\") pod \"11ba34ab-39a2-432e-9fbb-c5ea6237eada\" (UID: \"11ba34ab-39a2-432e-9fbb-c5ea6237eada\") " Nov 21 11:40:10 crc kubenswrapper[4972]: I1121 11:40:10.859091 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11ba34ab-39a2-432e-9fbb-c5ea6237eada-utilities\") pod \"11ba34ab-39a2-432e-9fbb-c5ea6237eada\" (UID: \"11ba34ab-39a2-432e-9fbb-c5ea6237eada\") " Nov 21 11:40:10 crc kubenswrapper[4972]: I1121 11:40:10.859967 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11ba34ab-39a2-432e-9fbb-c5ea6237eada-utilities" (OuterVolumeSpecName: "utilities") pod "11ba34ab-39a2-432e-9fbb-c5ea6237eada" (UID: "11ba34ab-39a2-432e-9fbb-c5ea6237eada"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:40:10 crc kubenswrapper[4972]: I1121 11:40:10.860094 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11ba34ab-39a2-432e-9fbb-c5ea6237eada-catalog-content\") pod \"11ba34ab-39a2-432e-9fbb-c5ea6237eada\" (UID: \"11ba34ab-39a2-432e-9fbb-c5ea6237eada\") " Nov 21 11:40:10 crc kubenswrapper[4972]: I1121 11:40:10.864096 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11ba34ab-39a2-432e-9fbb-c5ea6237eada-kube-api-access-m8ppx" (OuterVolumeSpecName: "kube-api-access-m8ppx") pod "11ba34ab-39a2-432e-9fbb-c5ea6237eada" (UID: "11ba34ab-39a2-432e-9fbb-c5ea6237eada"). InnerVolumeSpecName "kube-api-access-m8ppx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:40:10 crc kubenswrapper[4972]: I1121 11:40:10.878379 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11ba34ab-39a2-432e-9fbb-c5ea6237eada-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 11:40:10 crc kubenswrapper[4972]: I1121 11:40:10.878416 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8ppx\" (UniqueName: \"kubernetes.io/projected/11ba34ab-39a2-432e-9fbb-c5ea6237eada-kube-api-access-m8ppx\") on node \"crc\" DevicePath \"\"" Nov 21 11:40:10 crc kubenswrapper[4972]: I1121 11:40:10.887393 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11ba34ab-39a2-432e-9fbb-c5ea6237eada-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11ba34ab-39a2-432e-9fbb-c5ea6237eada" (UID: "11ba34ab-39a2-432e-9fbb-c5ea6237eada"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:40:10 crc kubenswrapper[4972]: I1121 11:40:10.981053 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11ba34ab-39a2-432e-9fbb-c5ea6237eada-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 11:40:11 crc kubenswrapper[4972]: I1121 11:40:11.239743 4972 generic.go:334] "Generic (PLEG): container finished" podID="11ba34ab-39a2-432e-9fbb-c5ea6237eada" containerID="f761d4150728ab573cb873061d87098603082a8dcfc5b65052d74d5394e9305b" exitCode=0 Nov 21 11:40:11 crc kubenswrapper[4972]: I1121 11:40:11.239817 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hcrfh" event={"ID":"11ba34ab-39a2-432e-9fbb-c5ea6237eada","Type":"ContainerDied","Data":"f761d4150728ab573cb873061d87098603082a8dcfc5b65052d74d5394e9305b"} Nov 21 11:40:11 crc kubenswrapper[4972]: I1121 11:40:11.239918 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hcrfh" event={"ID":"11ba34ab-39a2-432e-9fbb-c5ea6237eada","Type":"ContainerDied","Data":"4329883361c063081dcfea097a2bf7b3b56b293ed0c2c5d616cd4be741bb3f7c"} Nov 21 11:40:11 crc kubenswrapper[4972]: I1121 11:40:11.239938 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hcrfh" Nov 21 11:40:11 crc kubenswrapper[4972]: I1121 11:40:11.239961 4972 scope.go:117] "RemoveContainer" containerID="f761d4150728ab573cb873061d87098603082a8dcfc5b65052d74d5394e9305b" Nov 21 11:40:11 crc kubenswrapper[4972]: I1121 11:40:11.301574 4972 scope.go:117] "RemoveContainer" containerID="f24bddf50a316be753956d9b6e0308ba5ee08fe9cfb3097a9fc7d3d3ba85f2ac" Nov 21 11:40:11 crc kubenswrapper[4972]: I1121 11:40:11.312396 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hcrfh"] Nov 21 11:40:11 crc kubenswrapper[4972]: I1121 11:40:11.331315 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hcrfh"] Nov 21 11:40:11 crc kubenswrapper[4972]: I1121 11:40:11.350918 4972 scope.go:117] "RemoveContainer" containerID="4d6cabe075331e6438819d822e193864f2b0feb75079c6e10b3a3b1c41ef93df" Nov 21 11:40:11 crc kubenswrapper[4972]: I1121 11:40:11.385546 4972 scope.go:117] "RemoveContainer" containerID="f761d4150728ab573cb873061d87098603082a8dcfc5b65052d74d5394e9305b" Nov 21 11:40:11 crc kubenswrapper[4972]: E1121 11:40:11.386014 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f761d4150728ab573cb873061d87098603082a8dcfc5b65052d74d5394e9305b\": container with ID starting with f761d4150728ab573cb873061d87098603082a8dcfc5b65052d74d5394e9305b not found: ID does not exist" containerID="f761d4150728ab573cb873061d87098603082a8dcfc5b65052d74d5394e9305b" Nov 21 11:40:11 crc kubenswrapper[4972]: I1121 11:40:11.386074 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f761d4150728ab573cb873061d87098603082a8dcfc5b65052d74d5394e9305b"} err="failed to get container status \"f761d4150728ab573cb873061d87098603082a8dcfc5b65052d74d5394e9305b\": rpc error: code = NotFound desc = could not find container \"f761d4150728ab573cb873061d87098603082a8dcfc5b65052d74d5394e9305b\": container with ID starting with f761d4150728ab573cb873061d87098603082a8dcfc5b65052d74d5394e9305b not found: ID does not exist" Nov 21 11:40:11 crc kubenswrapper[4972]: I1121 11:40:11.386114 4972 scope.go:117] "RemoveContainer" containerID="f24bddf50a316be753956d9b6e0308ba5ee08fe9cfb3097a9fc7d3d3ba85f2ac" Nov 21 11:40:11 crc kubenswrapper[4972]: E1121 11:40:11.386623 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f24bddf50a316be753956d9b6e0308ba5ee08fe9cfb3097a9fc7d3d3ba85f2ac\": container with ID starting with f24bddf50a316be753956d9b6e0308ba5ee08fe9cfb3097a9fc7d3d3ba85f2ac not found: ID does not exist" containerID="f24bddf50a316be753956d9b6e0308ba5ee08fe9cfb3097a9fc7d3d3ba85f2ac" Nov 21 11:40:11 crc kubenswrapper[4972]: I1121 11:40:11.386667 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f24bddf50a316be753956d9b6e0308ba5ee08fe9cfb3097a9fc7d3d3ba85f2ac"} err="failed to get container status \"f24bddf50a316be753956d9b6e0308ba5ee08fe9cfb3097a9fc7d3d3ba85f2ac\": rpc error: code = NotFound desc = could not find container \"f24bddf50a316be753956d9b6e0308ba5ee08fe9cfb3097a9fc7d3d3ba85f2ac\": container with ID starting with f24bddf50a316be753956d9b6e0308ba5ee08fe9cfb3097a9fc7d3d3ba85f2ac not found: ID does not exist" Nov 21 11:40:11 crc kubenswrapper[4972]: I1121 11:40:11.386699 4972 scope.go:117] "RemoveContainer" 
containerID="4d6cabe075331e6438819d822e193864f2b0feb75079c6e10b3a3b1c41ef93df" Nov 21 11:40:11 crc kubenswrapper[4972]: E1121 11:40:11.387219 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d6cabe075331e6438819d822e193864f2b0feb75079c6e10b3a3b1c41ef93df\": container with ID starting with 4d6cabe075331e6438819d822e193864f2b0feb75079c6e10b3a3b1c41ef93df not found: ID does not exist" containerID="4d6cabe075331e6438819d822e193864f2b0feb75079c6e10b3a3b1c41ef93df" Nov 21 11:40:11 crc kubenswrapper[4972]: I1121 11:40:11.387247 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d6cabe075331e6438819d822e193864f2b0feb75079c6e10b3a3b1c41ef93df"} err="failed to get container status \"4d6cabe075331e6438819d822e193864f2b0feb75079c6e10b3a3b1c41ef93df\": rpc error: code = NotFound desc = could not find container \"4d6cabe075331e6438819d822e193864f2b0feb75079c6e10b3a3b1c41ef93df\": container with ID starting with 4d6cabe075331e6438819d822e193864f2b0feb75079c6e10b3a3b1c41ef93df not found: ID does not exist" Nov 21 11:40:11 crc kubenswrapper[4972]: I1121 11:40:11.773052 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11ba34ab-39a2-432e-9fbb-c5ea6237eada" path="/var/lib/kubelet/pods/11ba34ab-39a2-432e-9fbb-c5ea6237eada/volumes" Nov 21 11:40:13 crc kubenswrapper[4972]: I1121 11:40:13.760760 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:40:13 crc kubenswrapper[4972]: E1121 11:40:13.761866 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:40:25 crc kubenswrapper[4972]: I1121 11:40:25.777326 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:40:25 crc kubenswrapper[4972]: E1121 11:40:25.778733 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:40:36 crc kubenswrapper[4972]: I1121 11:40:36.759472 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:40:36 crc kubenswrapper[4972]: E1121 11:40:36.760740 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:40:51 crc kubenswrapper[4972]: I1121 11:40:51.760267 4972 scope.go:117] "RemoveContainer" 
containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:40:51 crc kubenswrapper[4972]: E1121 11:40:51.761618 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:41:02 crc kubenswrapper[4972]: I1121 11:41:02.759306 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:41:02 crc kubenswrapper[4972]: E1121 11:41:02.760617 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:41:16 crc kubenswrapper[4972]: I1121 11:41:16.765349 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:41:16 crc kubenswrapper[4972]: E1121 11:41:16.766777 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:41:30 crc kubenswrapper[4972]: I1121 11:41:30.759777 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:41:30 crc kubenswrapper[4972]: E1121 11:41:30.762715 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:41:32 crc kubenswrapper[4972]: I1121 11:41:32.626777 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w999q"] Nov 21 11:41:32 crc kubenswrapper[4972]: E1121 11:41:32.627657 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ba34ab-39a2-432e-9fbb-c5ea6237eada" containerName="extract-utilities" Nov 21 11:41:32 crc kubenswrapper[4972]: I1121 11:41:32.627674 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ba34ab-39a2-432e-9fbb-c5ea6237eada" containerName="extract-utilities" Nov 21 11:41:32 crc kubenswrapper[4972]: E1121 11:41:32.627709 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ba34ab-39a2-432e-9fbb-c5ea6237eada" containerName="extract-content" Nov 21 11:41:32 crc kubenswrapper[4972]: I1121 11:41:32.627718 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ba34ab-39a2-432e-9fbb-c5ea6237eada" 
containerName="extract-content" Nov 21 11:41:32 crc kubenswrapper[4972]: E1121 11:41:32.627758 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ba34ab-39a2-432e-9fbb-c5ea6237eada" containerName="registry-server" Nov 21 11:41:32 crc kubenswrapper[4972]: I1121 11:41:32.627768 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ba34ab-39a2-432e-9fbb-c5ea6237eada" containerName="registry-server" Nov 21 11:41:32 crc kubenswrapper[4972]: I1121 11:41:32.628031 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ba34ab-39a2-432e-9fbb-c5ea6237eada" containerName="registry-server" Nov 21 11:41:32 crc kubenswrapper[4972]: I1121 11:41:32.632201 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w999q" Nov 21 11:41:32 crc kubenswrapper[4972]: I1121 11:41:32.668918 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w999q"] Nov 21 11:41:32 crc kubenswrapper[4972]: I1121 11:41:32.722306 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbbwz\" (UniqueName: \"kubernetes.io/projected/0dcdea2f-13bb-4210-ac5d-95f4df6348b9-kube-api-access-kbbwz\") pod \"certified-operators-w999q\" (UID: \"0dcdea2f-13bb-4210-ac5d-95f4df6348b9\") " pod="openshift-marketplace/certified-operators-w999q" Nov 21 11:41:32 crc kubenswrapper[4972]: I1121 11:41:32.722806 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dcdea2f-13bb-4210-ac5d-95f4df6348b9-utilities\") pod \"certified-operators-w999q\" (UID: \"0dcdea2f-13bb-4210-ac5d-95f4df6348b9\") " pod="openshift-marketplace/certified-operators-w999q" Nov 21 11:41:32 crc kubenswrapper[4972]: I1121 11:41:32.722919 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dcdea2f-13bb-4210-ac5d-95f4df6348b9-catalog-content\") pod \"certified-operators-w999q\" (UID: \"0dcdea2f-13bb-4210-ac5d-95f4df6348b9\") " pod="openshift-marketplace/certified-operators-w999q" Nov 21 11:41:32 crc kubenswrapper[4972]: I1121 11:41:32.824685 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbbwz\" (UniqueName: \"kubernetes.io/projected/0dcdea2f-13bb-4210-ac5d-95f4df6348b9-kube-api-access-kbbwz\") pod \"certified-operators-w999q\" (UID: \"0dcdea2f-13bb-4210-ac5d-95f4df6348b9\") " pod="openshift-marketplace/certified-operators-w999q" Nov 21 11:41:32 crc kubenswrapper[4972]: I1121 11:41:32.825062 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dcdea2f-13bb-4210-ac5d-95f4df6348b9-utilities\") pod \"certified-operators-w999q\" (UID: \"0dcdea2f-13bb-4210-ac5d-95f4df6348b9\") " pod="openshift-marketplace/certified-operators-w999q" Nov 21 11:41:32 crc kubenswrapper[4972]: I1121 11:41:32.825136 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dcdea2f-13bb-4210-ac5d-95f4df6348b9-catalog-content\") pod \"certified-operators-w999q\" (UID: \"0dcdea2f-13bb-4210-ac5d-95f4df6348b9\") " pod="openshift-marketplace/certified-operators-w999q" Nov 21 11:41:32 crc kubenswrapper[4972]: I1121 11:41:32.825979 4972 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dcdea2f-13bb-4210-ac5d-95f4df6348b9-utilities\") pod \"certified-operators-w999q\" (UID: \"0dcdea2f-13bb-4210-ac5d-95f4df6348b9\") " pod="openshift-marketplace/certified-operators-w999q" Nov 21 11:41:32 crc kubenswrapper[4972]: I1121 11:41:32.826013 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dcdea2f-13bb-4210-ac5d-95f4df6348b9-catalog-content\") pod \"certified-operators-w999q\" (UID: \"0dcdea2f-13bb-4210-ac5d-95f4df6348b9\") " pod="openshift-marketplace/certified-operators-w999q" Nov 21 11:41:32 crc kubenswrapper[4972]: I1121 11:41:32.851820 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbbwz\" (UniqueName: \"kubernetes.io/projected/0dcdea2f-13bb-4210-ac5d-95f4df6348b9-kube-api-access-kbbwz\") pod \"certified-operators-w999q\" (UID: \"0dcdea2f-13bb-4210-ac5d-95f4df6348b9\") " pod="openshift-marketplace/certified-operators-w999q" Nov 21 11:41:32 crc kubenswrapper[4972]: I1121 11:41:32.959545 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w999q" Nov 21 11:41:33 crc kubenswrapper[4972]: I1121 11:41:33.459985 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w999q"] Nov 21 11:41:34 crc kubenswrapper[4972]: I1121 11:41:34.296719 4972 generic.go:334] "Generic (PLEG): container finished" podID="0dcdea2f-13bb-4210-ac5d-95f4df6348b9" containerID="06144aafc6187cbb2791294fbffbe0a380f01e5c16dd41de1efbf335b39e9325" exitCode=0 Nov 21 11:41:34 crc kubenswrapper[4972]: I1121 11:41:34.296907 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w999q" event={"ID":"0dcdea2f-13bb-4210-ac5d-95f4df6348b9","Type":"ContainerDied","Data":"06144aafc6187cbb2791294fbffbe0a380f01e5c16dd41de1efbf335b39e9325"} Nov 21 11:41:34 crc kubenswrapper[4972]: I1121 11:41:34.297020 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w999q" event={"ID":"0dcdea2f-13bb-4210-ac5d-95f4df6348b9","Type":"ContainerStarted","Data":"2f5e4368a63a1de1899d33b0cb86051391f1a039ef9ebb4eca2d972e630f90c6"} Nov 21 11:41:35 crc kubenswrapper[4972]: I1121 11:41:35.343444 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w999q" event={"ID":"0dcdea2f-13bb-4210-ac5d-95f4df6348b9","Type":"ContainerStarted","Data":"81f33f717ce6b2f636853a54b49d1c8a1ab2d6cc0b2f6281906b43d8aa50b3c5"} Nov 21 11:41:37 crc kubenswrapper[4972]: I1121 11:41:37.366562 4972 generic.go:334] "Generic (PLEG): container finished" podID="0dcdea2f-13bb-4210-ac5d-95f4df6348b9" containerID="81f33f717ce6b2f636853a54b49d1c8a1ab2d6cc0b2f6281906b43d8aa50b3c5" exitCode=0 Nov 21 11:41:37 crc kubenswrapper[4972]: I1121 11:41:37.366628 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w999q" event={"ID":"0dcdea2f-13bb-4210-ac5d-95f4df6348b9","Type":"ContainerDied","Data":"81f33f717ce6b2f636853a54b49d1c8a1ab2d6cc0b2f6281906b43d8aa50b3c5"} Nov 21 11:41:38 crc kubenswrapper[4972]: I1121 11:41:38.380317 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w999q" event={"ID":"0dcdea2f-13bb-4210-ac5d-95f4df6348b9","Type":"ContainerStarted","Data":"6c8c1b036b93847ed37718f6d7a858f0e38a2620fe028e07f69324ed8104ffdf"} Nov 21 
11:41:38 crc kubenswrapper[4972]: I1121 11:41:38.402723 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w999q" podStartSLOduration=2.928735604 podStartE2EDuration="6.402703098s" podCreationTimestamp="2025-11-21 11:41:32 +0000 UTC" firstStartedPulling="2025-11-21 11:41:34.307722035 +0000 UTC m=+7239.416864543" lastFinishedPulling="2025-11-21 11:41:37.781689539 +0000 UTC m=+7242.890832037" observedRunningTime="2025-11-21 11:41:38.402056971 +0000 UTC m=+7243.511199499" watchObservedRunningTime="2025-11-21 11:41:38.402703098 +0000 UTC m=+7243.511845596" Nov 21 11:41:42 crc kubenswrapper[4972]: I1121 11:41:42.960003 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w999q" Nov 21 11:41:42 crc kubenswrapper[4972]: I1121 11:41:42.960758 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w999q" Nov 21 11:41:43 crc kubenswrapper[4972]: I1121 11:41:43.028739 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w999q" Nov 21 11:41:43 crc kubenswrapper[4972]: I1121 11:41:43.532321 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w999q" Nov 21 11:41:43 crc kubenswrapper[4972]: I1121 11:41:43.588620 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w999q"] Nov 21 11:41:44 crc kubenswrapper[4972]: I1121 11:41:44.761090 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:41:44 crc kubenswrapper[4972]: E1121 11:41:44.762593 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:41:45 crc kubenswrapper[4972]: I1121 11:41:45.478653 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w999q" podUID="0dcdea2f-13bb-4210-ac5d-95f4df6348b9" containerName="registry-server" containerID="cri-o://6c8c1b036b93847ed37718f6d7a858f0e38a2620fe028e07f69324ed8104ffdf" gracePeriod=2 Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.022518 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w999q" Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.088239 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbbwz\" (UniqueName: \"kubernetes.io/projected/0dcdea2f-13bb-4210-ac5d-95f4df6348b9-kube-api-access-kbbwz\") pod \"0dcdea2f-13bb-4210-ac5d-95f4df6348b9\" (UID: \"0dcdea2f-13bb-4210-ac5d-95f4df6348b9\") " Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.088464 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dcdea2f-13bb-4210-ac5d-95f4df6348b9-catalog-content\") pod \"0dcdea2f-13bb-4210-ac5d-95f4df6348b9\" (UID: \"0dcdea2f-13bb-4210-ac5d-95f4df6348b9\") " Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.088592 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dcdea2f-13bb-4210-ac5d-95f4df6348b9-utilities\") pod \"0dcdea2f-13bb-4210-ac5d-95f4df6348b9\" (UID: \"0dcdea2f-13bb-4210-ac5d-95f4df6348b9\") " Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.089550 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dcdea2f-13bb-4210-ac5d-95f4df6348b9-utilities" (OuterVolumeSpecName: "utilities") pod "0dcdea2f-13bb-4210-ac5d-95f4df6348b9" (UID: "0dcdea2f-13bb-4210-ac5d-95f4df6348b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.100221 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dcdea2f-13bb-4210-ac5d-95f4df6348b9-kube-api-access-kbbwz" (OuterVolumeSpecName: "kube-api-access-kbbwz") pod "0dcdea2f-13bb-4210-ac5d-95f4df6348b9" (UID: "0dcdea2f-13bb-4210-ac5d-95f4df6348b9"). InnerVolumeSpecName "kube-api-access-kbbwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.154333 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dcdea2f-13bb-4210-ac5d-95f4df6348b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0dcdea2f-13bb-4210-ac5d-95f4df6348b9" (UID: "0dcdea2f-13bb-4210-ac5d-95f4df6348b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.191725 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbbwz\" (UniqueName: \"kubernetes.io/projected/0dcdea2f-13bb-4210-ac5d-95f4df6348b9-kube-api-access-kbbwz\") on node \"crc\" DevicePath \"\"" Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.191767 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dcdea2f-13bb-4210-ac5d-95f4df6348b9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.191779 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dcdea2f-13bb-4210-ac5d-95f4df6348b9-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.491665 4972 generic.go:334] "Generic (PLEG): container finished" podID="0dcdea2f-13bb-4210-ac5d-95f4df6348b9" containerID="6c8c1b036b93847ed37718f6d7a858f0e38a2620fe028e07f69324ed8104ffdf" exitCode=0 Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.491720 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w999q" event={"ID":"0dcdea2f-13bb-4210-ac5d-95f4df6348b9","Type":"ContainerDied","Data":"6c8c1b036b93847ed37718f6d7a858f0e38a2620fe028e07f69324ed8104ffdf"} Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.492017 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w999q" event={"ID":"0dcdea2f-13bb-4210-ac5d-95f4df6348b9","Type":"ContainerDied","Data":"2f5e4368a63a1de1899d33b0cb86051391f1a039ef9ebb4eca2d972e630f90c6"} Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.491761 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w999q" Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.492036 4972 scope.go:117] "RemoveContainer" containerID="6c8c1b036b93847ed37718f6d7a858f0e38a2620fe028e07f69324ed8104ffdf" Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.530544 4972 scope.go:117] "RemoveContainer" containerID="81f33f717ce6b2f636853a54b49d1c8a1ab2d6cc0b2f6281906b43d8aa50b3c5" Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.557285 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w999q"] Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.567790 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w999q"] Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.572749 4972 scope.go:117] "RemoveContainer" containerID="06144aafc6187cbb2791294fbffbe0a380f01e5c16dd41de1efbf335b39e9325" Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.641404 4972 scope.go:117] "RemoveContainer" containerID="6c8c1b036b93847ed37718f6d7a858f0e38a2620fe028e07f69324ed8104ffdf" Nov 21 11:41:46 crc kubenswrapper[4972]: E1121 11:41:46.642877 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c8c1b036b93847ed37718f6d7a858f0e38a2620fe028e07f69324ed8104ffdf\": container with ID starting with 6c8c1b036b93847ed37718f6d7a858f0e38a2620fe028e07f69324ed8104ffdf not found: ID does not exist" containerID="6c8c1b036b93847ed37718f6d7a858f0e38a2620fe028e07f69324ed8104ffdf" Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.643016 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c8c1b036b93847ed37718f6d7a858f0e38a2620fe028e07f69324ed8104ffdf"} err="failed to get container status \"6c8c1b036b93847ed37718f6d7a858f0e38a2620fe028e07f69324ed8104ffdf\": rpc error: code = NotFound desc = could not find container \"6c8c1b036b93847ed37718f6d7a858f0e38a2620fe028e07f69324ed8104ffdf\": container with ID starting with 6c8c1b036b93847ed37718f6d7a858f0e38a2620fe028e07f69324ed8104ffdf not found: ID does not exist" Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.643152 4972 scope.go:117] "RemoveContainer" containerID="81f33f717ce6b2f636853a54b49d1c8a1ab2d6cc0b2f6281906b43d8aa50b3c5" Nov 21 11:41:46 crc kubenswrapper[4972]: E1121 11:41:46.643740 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81f33f717ce6b2f636853a54b49d1c8a1ab2d6cc0b2f6281906b43d8aa50b3c5\": container with ID starting with 81f33f717ce6b2f636853a54b49d1c8a1ab2d6cc0b2f6281906b43d8aa50b3c5 not found: ID does not exist" containerID="81f33f717ce6b2f636853a54b49d1c8a1ab2d6cc0b2f6281906b43d8aa50b3c5" Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.643809 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81f33f717ce6b2f636853a54b49d1c8a1ab2d6cc0b2f6281906b43d8aa50b3c5"} err="failed to get container status \"81f33f717ce6b2f636853a54b49d1c8a1ab2d6cc0b2f6281906b43d8aa50b3c5\": rpc error: code = NotFound desc = could not find container \"81f33f717ce6b2f636853a54b49d1c8a1ab2d6cc0b2f6281906b43d8aa50b3c5\": container with ID starting with 81f33f717ce6b2f636853a54b49d1c8a1ab2d6cc0b2f6281906b43d8aa50b3c5 not found: ID does not exist" Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.643857 4972 scope.go:117] "RemoveContainer" 
containerID="06144aafc6187cbb2791294fbffbe0a380f01e5c16dd41de1efbf335b39e9325" Nov 21 11:41:46 crc kubenswrapper[4972]: E1121 11:41:46.646418 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06144aafc6187cbb2791294fbffbe0a380f01e5c16dd41de1efbf335b39e9325\": container with ID starting with 06144aafc6187cbb2791294fbffbe0a380f01e5c16dd41de1efbf335b39e9325 not found: ID does not exist" containerID="06144aafc6187cbb2791294fbffbe0a380f01e5c16dd41de1efbf335b39e9325" Nov 21 11:41:46 crc kubenswrapper[4972]: I1121 11:41:46.646451 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06144aafc6187cbb2791294fbffbe0a380f01e5c16dd41de1efbf335b39e9325"} err="failed to get container status \"06144aafc6187cbb2791294fbffbe0a380f01e5c16dd41de1efbf335b39e9325\": rpc error: code = NotFound desc = could not find container \"06144aafc6187cbb2791294fbffbe0a380f01e5c16dd41de1efbf335b39e9325\": container with ID starting with 06144aafc6187cbb2791294fbffbe0a380f01e5c16dd41de1efbf335b39e9325 not found: ID does not exist" Nov 21 11:41:47 crc kubenswrapper[4972]: I1121 11:41:47.780696 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dcdea2f-13bb-4210-ac5d-95f4df6348b9" path="/var/lib/kubelet/pods/0dcdea2f-13bb-4210-ac5d-95f4df6348b9/volumes" Nov 21 11:41:55 crc kubenswrapper[4972]: I1121 11:41:55.768409 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:41:55 crc kubenswrapper[4972]: E1121 11:41:55.769087 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:42:09 crc kubenswrapper[4972]: I1121 11:42:09.763211 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:42:09 crc kubenswrapper[4972]: E1121 11:42:09.764387 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:42:23 crc kubenswrapper[4972]: I1121 11:42:23.760043 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:42:23 crc kubenswrapper[4972]: E1121 11:42:23.761215 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:42:35 crc kubenswrapper[4972]: I1121 11:42:35.781339 4972 scope.go:117] "RemoveContainer" 
containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:42:35 crc kubenswrapper[4972]: E1121 11:42:35.782293 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:42:46 crc kubenswrapper[4972]: I1121 11:42:46.760366 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:42:46 crc kubenswrapper[4972]: E1121 11:42:46.761579 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:42:50 crc kubenswrapper[4972]: I1121 11:42:50.290112 4972 generic.go:334] "Generic (PLEG): container finished" podID="c828a840-b09c-419e-ab5c-1771ecceeed8" containerID="d01e79f7c7ff61f95422918ed0aaa6b8117d0c0a6952ff1d387e745d391ae512" exitCode=0 Nov 21 11:42:50 crc kubenswrapper[4972]: I1121 11:42:50.290211 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" event={"ID":"c828a840-b09c-419e-ab5c-1771ecceeed8","Type":"ContainerDied","Data":"d01e79f7c7ff61f95422918ed0aaa6b8117d0c0a6952ff1d387e745d391ae512"} Nov 21 11:42:51 crc kubenswrapper[4972]: I1121 11:42:51.897573 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" Nov 21 11:42:51 crc kubenswrapper[4972]: I1121 11:42:51.959477 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-bootstrap-combined-ca-bundle\") pod \"c828a840-b09c-419e-ab5c-1771ecceeed8\" (UID: \"c828a840-b09c-419e-ab5c-1771ecceeed8\") " Nov 21 11:42:51 crc kubenswrapper[4972]: I1121 11:42:51.959640 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-ceph\") pod \"c828a840-b09c-419e-ab5c-1771ecceeed8\" (UID: \"c828a840-b09c-419e-ab5c-1771ecceeed8\") " Nov 21 11:42:51 crc kubenswrapper[4972]: I1121 11:42:51.959780 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-inventory\") pod \"c828a840-b09c-419e-ab5c-1771ecceeed8\" (UID: \"c828a840-b09c-419e-ab5c-1771ecceeed8\") " Nov 21 11:42:51 crc kubenswrapper[4972]: I1121 11:42:51.959867 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f9pt\" (UniqueName: \"kubernetes.io/projected/c828a840-b09c-419e-ab5c-1771ecceeed8-kube-api-access-6f9pt\") pod \"c828a840-b09c-419e-ab5c-1771ecceeed8\" (UID: \"c828a840-b09c-419e-ab5c-1771ecceeed8\") " Nov 21 11:42:51 crc kubenswrapper[4972]: I1121 11:42:51.959897 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-ssh-key\") pod \"c828a840-b09c-419e-ab5c-1771ecceeed8\" (UID: \"c828a840-b09c-419e-ab5c-1771ecceeed8\") " Nov 21 11:42:51 crc kubenswrapper[4972]: I1121 11:42:51.969449 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c828a840-b09c-419e-ab5c-1771ecceeed8" (UID: "c828a840-b09c-419e-ab5c-1771ecceeed8"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:42:51 crc kubenswrapper[4972]: I1121 11:42:51.970541 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-ceph" (OuterVolumeSpecName: "ceph") pod "c828a840-b09c-419e-ab5c-1771ecceeed8" (UID: "c828a840-b09c-419e-ab5c-1771ecceeed8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:42:51 crc kubenswrapper[4972]: I1121 11:42:51.972011 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c828a840-b09c-419e-ab5c-1771ecceeed8-kube-api-access-6f9pt" (OuterVolumeSpecName: "kube-api-access-6f9pt") pod "c828a840-b09c-419e-ab5c-1771ecceeed8" (UID: "c828a840-b09c-419e-ab5c-1771ecceeed8"). InnerVolumeSpecName "kube-api-access-6f9pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.002371 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-inventory" (OuterVolumeSpecName: "inventory") pod "c828a840-b09c-419e-ab5c-1771ecceeed8" (UID: "c828a840-b09c-419e-ab5c-1771ecceeed8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.012615 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c828a840-b09c-419e-ab5c-1771ecceeed8" (UID: "c828a840-b09c-419e-ab5c-1771ecceeed8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.062137 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f9pt\" (UniqueName: \"kubernetes.io/projected/c828a840-b09c-419e-ab5c-1771ecceeed8-kube-api-access-6f9pt\") on node \"crc\" DevicePath \"\"" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.062169 4972 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.062178 4972 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.062188 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.062212 4972 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c828a840-b09c-419e-ab5c-1771ecceeed8-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.322557 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" event={"ID":"c828a840-b09c-419e-ab5c-1771ecceeed8","Type":"ContainerDied","Data":"2ad48e4de067666680f71b0a74c0dc0316d59b76530a43b92b32fee3c0c1a70f"} Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.322606 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ad48e4de067666680f71b0a74c0dc0316d59b76530a43b92b32fee3c0c1a70f" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.322621 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-2mfkz" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.419051 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-6zqxn"] Nov 21 11:42:52 crc kubenswrapper[4972]: E1121 11:42:52.419655 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c828a840-b09c-419e-ab5c-1771ecceeed8" containerName="bootstrap-openstack-openstack-cell1" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.419676 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c828a840-b09c-419e-ab5c-1771ecceeed8" containerName="bootstrap-openstack-openstack-cell1" Nov 21 11:42:52 crc kubenswrapper[4972]: E1121 11:42:52.419688 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dcdea2f-13bb-4210-ac5d-95f4df6348b9" containerName="extract-utilities" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.419696 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dcdea2f-13bb-4210-ac5d-95f4df6348b9" containerName="extract-utilities" Nov 21 11:42:52 crc kubenswrapper[4972]: E1121 11:42:52.419720 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dcdea2f-13bb-4210-ac5d-95f4df6348b9" containerName="extract-content" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.419727 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dcdea2f-13bb-4210-ac5d-95f4df6348b9" containerName="extract-content" Nov 21 11:42:52 crc kubenswrapper[4972]: E1121 11:42:52.419742 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dcdea2f-13bb-4210-ac5d-95f4df6348b9" containerName="registry-server" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.419750 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dcdea2f-13bb-4210-ac5d-95f4df6348b9" containerName="registry-server" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.420034 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="c828a840-b09c-419e-ab5c-1771ecceeed8" containerName="bootstrap-openstack-openstack-cell1" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.420051 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dcdea2f-13bb-4210-ac5d-95f4df6348b9" containerName="registry-server" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.420989 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-6zqxn" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.427481 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g4l5l" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.427490 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.427681 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.431487 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.455649 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-6zqxn"] Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.573556 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e18adcb1-7956-4cae-874f-40130f05621b-ceph\") pod \"download-cache-openstack-openstack-cell1-6zqxn\" (UID: \"e18adcb1-7956-4cae-874f-40130f05621b\") " pod="openstack/download-cache-openstack-openstack-cell1-6zqxn" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.573663 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shntg\" (UniqueName: \"kubernetes.io/projected/e18adcb1-7956-4cae-874f-40130f05621b-kube-api-access-shntg\") pod \"download-cache-openstack-openstack-cell1-6zqxn\" (UID: \"e18adcb1-7956-4cae-874f-40130f05621b\") " pod="openstack/download-cache-openstack-openstack-cell1-6zqxn" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.574692 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e18adcb1-7956-4cae-874f-40130f05621b-ssh-key\") pod \"download-cache-openstack-openstack-cell1-6zqxn\" (UID: \"e18adcb1-7956-4cae-874f-40130f05621b\") " pod="openstack/download-cache-openstack-openstack-cell1-6zqxn" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.574871 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e18adcb1-7956-4cae-874f-40130f05621b-inventory\") pod \"download-cache-openstack-openstack-cell1-6zqxn\" (UID: \"e18adcb1-7956-4cae-874f-40130f05621b\") " pod="openstack/download-cache-openstack-openstack-cell1-6zqxn" Nov 21 11:42:52 crc kubenswrapper[4972]: E1121 11:42:52.606647 4972 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc828a840_b09c_419e_ab5c_1771ecceeed8.slice/crio-2ad48e4de067666680f71b0a74c0dc0316d59b76530a43b92b32fee3c0c1a70f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc828a840_b09c_419e_ab5c_1771ecceeed8.slice\": RecentStats: unable to find data in memory cache]" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.677354 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e18adcb1-7956-4cae-874f-40130f05621b-ssh-key\") pod 
\"download-cache-openstack-openstack-cell1-6zqxn\" (UID: \"e18adcb1-7956-4cae-874f-40130f05621b\") " pod="openstack/download-cache-openstack-openstack-cell1-6zqxn" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.677440 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e18adcb1-7956-4cae-874f-40130f05621b-inventory\") pod \"download-cache-openstack-openstack-cell1-6zqxn\" (UID: \"e18adcb1-7956-4cae-874f-40130f05621b\") " pod="openstack/download-cache-openstack-openstack-cell1-6zqxn" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.677477 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e18adcb1-7956-4cae-874f-40130f05621b-ceph\") pod \"download-cache-openstack-openstack-cell1-6zqxn\" (UID: \"e18adcb1-7956-4cae-874f-40130f05621b\") " pod="openstack/download-cache-openstack-openstack-cell1-6zqxn" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.677524 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shntg\" (UniqueName: \"kubernetes.io/projected/e18adcb1-7956-4cae-874f-40130f05621b-kube-api-access-shntg\") pod \"download-cache-openstack-openstack-cell1-6zqxn\" (UID: \"e18adcb1-7956-4cae-874f-40130f05621b\") " pod="openstack/download-cache-openstack-openstack-cell1-6zqxn" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.685481 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e18adcb1-7956-4cae-874f-40130f05621b-ssh-key\") pod \"download-cache-openstack-openstack-cell1-6zqxn\" (UID: \"e18adcb1-7956-4cae-874f-40130f05621b\") " pod="openstack/download-cache-openstack-openstack-cell1-6zqxn" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.685500 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e18adcb1-7956-4cae-874f-40130f05621b-ceph\") pod \"download-cache-openstack-openstack-cell1-6zqxn\" (UID: \"e18adcb1-7956-4cae-874f-40130f05621b\") " pod="openstack/download-cache-openstack-openstack-cell1-6zqxn" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.686259 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e18adcb1-7956-4cae-874f-40130f05621b-inventory\") pod \"download-cache-openstack-openstack-cell1-6zqxn\" (UID: \"e18adcb1-7956-4cae-874f-40130f05621b\") " pod="openstack/download-cache-openstack-openstack-cell1-6zqxn" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.696054 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shntg\" (UniqueName: \"kubernetes.io/projected/e18adcb1-7956-4cae-874f-40130f05621b-kube-api-access-shntg\") pod \"download-cache-openstack-openstack-cell1-6zqxn\" (UID: \"e18adcb1-7956-4cae-874f-40130f05621b\") " pod="openstack/download-cache-openstack-openstack-cell1-6zqxn" Nov 21 11:42:52 crc kubenswrapper[4972]: I1121 11:42:52.739154 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-6zqxn" Nov 21 11:42:53 crc kubenswrapper[4972]: I1121 11:42:53.306738 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-6zqxn"] Nov 21 11:42:53 crc kubenswrapper[4972]: I1121 11:42:53.332989 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-6zqxn" event={"ID":"e18adcb1-7956-4cae-874f-40130f05621b","Type":"ContainerStarted","Data":"5691fd6cb3cd64a1f61371a26f2725c63595e6157708c19979b8f6e69d3af047"} Nov 21 11:42:54 crc kubenswrapper[4972]: I1121 11:42:54.342912 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-6zqxn" event={"ID":"e18adcb1-7956-4cae-874f-40130f05621b","Type":"ContainerStarted","Data":"a819ebc39baadf5160895639834ccbaf90573ea7c8011a3be79de7c9ef268bc4"} Nov 21 11:42:54 crc kubenswrapper[4972]: I1121 11:42:54.361091 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-6zqxn" podStartSLOduration=1.945339723 podStartE2EDuration="2.361074475s" podCreationTimestamp="2025-11-21 11:42:52 +0000 UTC" firstStartedPulling="2025-11-21 11:42:53.312950884 +0000 UTC m=+7318.422093402" lastFinishedPulling="2025-11-21 11:42:53.728685626 +0000 UTC m=+7318.837828154" observedRunningTime="2025-11-21 11:42:54.3590079 +0000 UTC m=+7319.468150408" watchObservedRunningTime="2025-11-21 11:42:54.361074475 +0000 UTC m=+7319.470216963" Nov 21 11:42:59 crc kubenswrapper[4972]: I1121 11:42:59.759633 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:42:59 crc kubenswrapper[4972]: E1121 11:42:59.760496 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:43:12 crc kubenswrapper[4972]: I1121 11:43:12.760259 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:43:12 crc kubenswrapper[4972]: E1121 11:43:12.761273 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:43:24 crc kubenswrapper[4972]: I1121 11:43:24.760203 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:43:24 crc kubenswrapper[4972]: E1121 11:43:24.763369 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" 
podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:43:38 crc kubenswrapper[4972]: I1121 11:43:38.760140 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:43:38 crc kubenswrapper[4972]: E1121 11:43:38.760997 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:43:53 crc kubenswrapper[4972]: I1121 11:43:53.759853 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:43:53 crc kubenswrapper[4972]: E1121 11:43:53.760919 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:44:06 crc kubenswrapper[4972]: I1121 11:44:06.759107 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:44:07 crc kubenswrapper[4972]: I1121 11:44:07.377287 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"b79c2b857a1fdd2c8360d6a73525adb3af3295fc033655a7e49f7c9eebd3b913"} Nov 21 11:44:27 crc kubenswrapper[4972]: I1121 11:44:27.659481 4972 generic.go:334] "Generic (PLEG): container finished" podID="e18adcb1-7956-4cae-874f-40130f05621b" containerID="a819ebc39baadf5160895639834ccbaf90573ea7c8011a3be79de7c9ef268bc4" exitCode=0 Nov 21 11:44:27 crc kubenswrapper[4972]: I1121 11:44:27.659711 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-6zqxn" event={"ID":"e18adcb1-7956-4cae-874f-40130f05621b","Type":"ContainerDied","Data":"a819ebc39baadf5160895639834ccbaf90573ea7c8011a3be79de7c9ef268bc4"} Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.274853 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-6zqxn" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.418487 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shntg\" (UniqueName: \"kubernetes.io/projected/e18adcb1-7956-4cae-874f-40130f05621b-kube-api-access-shntg\") pod \"e18adcb1-7956-4cae-874f-40130f05621b\" (UID: \"e18adcb1-7956-4cae-874f-40130f05621b\") " Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.418652 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e18adcb1-7956-4cae-874f-40130f05621b-ssh-key\") pod \"e18adcb1-7956-4cae-874f-40130f05621b\" (UID: \"e18adcb1-7956-4cae-874f-40130f05621b\") " Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.418697 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e18adcb1-7956-4cae-874f-40130f05621b-ceph\") pod \"e18adcb1-7956-4cae-874f-40130f05621b\" (UID: \"e18adcb1-7956-4cae-874f-40130f05621b\") " Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.418744 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e18adcb1-7956-4cae-874f-40130f05621b-inventory\") pod \"e18adcb1-7956-4cae-874f-40130f05621b\" (UID: \"e18adcb1-7956-4cae-874f-40130f05621b\") " Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.425081 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e18adcb1-7956-4cae-874f-40130f05621b-kube-api-access-shntg" (OuterVolumeSpecName: "kube-api-access-shntg") pod "e18adcb1-7956-4cae-874f-40130f05621b" (UID: "e18adcb1-7956-4cae-874f-40130f05621b"). InnerVolumeSpecName "kube-api-access-shntg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.426755 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e18adcb1-7956-4cae-874f-40130f05621b-ceph" (OuterVolumeSpecName: "ceph") pod "e18adcb1-7956-4cae-874f-40130f05621b" (UID: "e18adcb1-7956-4cae-874f-40130f05621b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.449218 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e18adcb1-7956-4cae-874f-40130f05621b-inventory" (OuterVolumeSpecName: "inventory") pod "e18adcb1-7956-4cae-874f-40130f05621b" (UID: "e18adcb1-7956-4cae-874f-40130f05621b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.456741 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e18adcb1-7956-4cae-874f-40130f05621b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e18adcb1-7956-4cae-874f-40130f05621b" (UID: "e18adcb1-7956-4cae-874f-40130f05621b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.521763 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e18adcb1-7956-4cae-874f-40130f05621b-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.521808 4972 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e18adcb1-7956-4cae-874f-40130f05621b-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.521871 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shntg\" (UniqueName: \"kubernetes.io/projected/e18adcb1-7956-4cae-874f-40130f05621b-kube-api-access-shntg\") on node \"crc\" DevicePath \"\"" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.521891 4972 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e18adcb1-7956-4cae-874f-40130f05621b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.719393 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-6zqxn" event={"ID":"e18adcb1-7956-4cae-874f-40130f05621b","Type":"ContainerDied","Data":"5691fd6cb3cd64a1f61371a26f2725c63595e6157708c19979b8f6e69d3af047"} Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.719428 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5691fd6cb3cd64a1f61371a26f2725c63595e6157708c19979b8f6e69d3af047" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.719514 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-6zqxn" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.799231 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-ckn27"] Nov 21 11:44:29 crc kubenswrapper[4972]: E1121 11:44:29.799747 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e18adcb1-7956-4cae-874f-40130f05621b" containerName="download-cache-openstack-openstack-cell1" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.799767 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="e18adcb1-7956-4cae-874f-40130f05621b" containerName="download-cache-openstack-openstack-cell1" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.800046 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="e18adcb1-7956-4cae-874f-40130f05621b" containerName="download-cache-openstack-openstack-cell1" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.800929 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-ckn27" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.803456 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g4l5l" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.803686 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.803961 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.805423 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.816105 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-ckn27"] Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.936790 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-ssh-key\") pod \"configure-network-openstack-openstack-cell1-ckn27\" (UID: \"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3\") " pod="openstack/configure-network-openstack-openstack-cell1-ckn27" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.936894 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvq6n\" (UniqueName: \"kubernetes.io/projected/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-kube-api-access-mvq6n\") pod \"configure-network-openstack-openstack-cell1-ckn27\" (UID: \"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3\") " pod="openstack/configure-network-openstack-openstack-cell1-ckn27" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.936948 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-inventory\") pod \"configure-network-openstack-openstack-cell1-ckn27\" (UID: \"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3\") " pod="openstack/configure-network-openstack-openstack-cell1-ckn27" Nov 21 11:44:29 crc kubenswrapper[4972]: I1121 11:44:29.937071 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-ceph\") pod \"configure-network-openstack-openstack-cell1-ckn27\" (UID: \"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3\") " pod="openstack/configure-network-openstack-openstack-cell1-ckn27" Nov 21 11:44:30 crc kubenswrapper[4972]: I1121 11:44:30.039823 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-ceph\") pod \"configure-network-openstack-openstack-cell1-ckn27\" (UID: \"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3\") " pod="openstack/configure-network-openstack-openstack-cell1-ckn27" Nov 21 11:44:30 crc kubenswrapper[4972]: I1121 11:44:30.039938 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-ssh-key\") pod \"configure-network-openstack-openstack-cell1-ckn27\" (UID: \"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3\") " pod="openstack/configure-network-openstack-openstack-cell1-ckn27" 
Nov 21 11:44:30 crc kubenswrapper[4972]: I1121 11:44:30.040010 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvq6n\" (UniqueName: \"kubernetes.io/projected/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-kube-api-access-mvq6n\") pod \"configure-network-openstack-openstack-cell1-ckn27\" (UID: \"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3\") " pod="openstack/configure-network-openstack-openstack-cell1-ckn27" Nov 21 11:44:30 crc kubenswrapper[4972]: I1121 11:44:30.040442 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-inventory\") pod \"configure-network-openstack-openstack-cell1-ckn27\" (UID: \"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3\") " pod="openstack/configure-network-openstack-openstack-cell1-ckn27" Nov 21 11:44:30 crc kubenswrapper[4972]: I1121 11:44:30.043493 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-ssh-key\") pod \"configure-network-openstack-openstack-cell1-ckn27\" (UID: \"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3\") " pod="openstack/configure-network-openstack-openstack-cell1-ckn27" Nov 21 11:44:30 crc kubenswrapper[4972]: I1121 11:44:30.044319 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-inventory\") pod \"configure-network-openstack-openstack-cell1-ckn27\" (UID: \"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3\") " pod="openstack/configure-network-openstack-openstack-cell1-ckn27" Nov 21 11:44:30 crc kubenswrapper[4972]: I1121 11:44:30.045296 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-ceph\") pod \"configure-network-openstack-openstack-cell1-ckn27\" (UID: \"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3\") " pod="openstack/configure-network-openstack-openstack-cell1-ckn27" Nov 21 11:44:30 crc kubenswrapper[4972]: I1121 11:44:30.059641 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvq6n\" (UniqueName: \"kubernetes.io/projected/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-kube-api-access-mvq6n\") pod \"configure-network-openstack-openstack-cell1-ckn27\" (UID: \"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3\") " pod="openstack/configure-network-openstack-openstack-cell1-ckn27" Nov 21 11:44:30 crc kubenswrapper[4972]: I1121 11:44:30.135516 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-ckn27" Nov 21 11:44:30 crc kubenswrapper[4972]: I1121 11:44:30.767291 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-ckn27"] Nov 21 11:44:30 crc kubenswrapper[4972]: I1121 11:44:30.776595 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 11:44:31 crc kubenswrapper[4972]: I1121 11:44:31.748666 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-ckn27" event={"ID":"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3","Type":"ContainerStarted","Data":"60e8ea8a1e777fc08d25d9854c4535d212ea7176b05c75eb3d8569792d50a0aa"} Nov 21 11:44:31 crc kubenswrapper[4972]: I1121 11:44:31.749051 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-ckn27" event={"ID":"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3","Type":"ContainerStarted","Data":"a195b8306f5513a3b8787adfdff23befd8c4498c1ab14ce4916465393a555858"} Nov 21 11:44:31 crc kubenswrapper[4972]: I1121 11:44:31.767253 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-ckn27" podStartSLOduration=2.37193865 podStartE2EDuration="2.767234455s" podCreationTimestamp="2025-11-21 11:44:29 +0000 UTC" firstStartedPulling="2025-11-21 11:44:30.77641886 +0000 UTC m=+7415.885561358" lastFinishedPulling="2025-11-21 11:44:31.171714665 +0000 UTC m=+7416.280857163" observedRunningTime="2025-11-21 11:44:31.76094976 +0000 UTC m=+7416.870092268" watchObservedRunningTime="2025-11-21 11:44:31.767234455 +0000 UTC m=+7416.876376953" Nov 21 11:45:00 crc kubenswrapper[4972]: I1121 11:45:00.169223 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395425-xv7q5"] Nov 21 11:45:00 crc kubenswrapper[4972]: I1121 11:45:00.171500 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395425-xv7q5" Nov 21 11:45:00 crc kubenswrapper[4972]: I1121 11:45:00.175211 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 11:45:00 crc kubenswrapper[4972]: I1121 11:45:00.175447 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 11:45:00 crc kubenswrapper[4972]: I1121 11:45:00.180696 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395425-xv7q5"] Nov 21 11:45:00 crc kubenswrapper[4972]: I1121 11:45:00.275084 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7c92b46-f7f0-4914-8c54-2192c7997dee-config-volume\") pod \"collect-profiles-29395425-xv7q5\" (UID: \"c7c92b46-f7f0-4914-8c54-2192c7997dee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395425-xv7q5" Nov 21 11:45:00 crc kubenswrapper[4972]: I1121 11:45:00.275404 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqlgn\" (UniqueName: \"kubernetes.io/projected/c7c92b46-f7f0-4914-8c54-2192c7997dee-kube-api-access-fqlgn\") pod \"collect-profiles-29395425-xv7q5\" (UID: \"c7c92b46-f7f0-4914-8c54-2192c7997dee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395425-xv7q5" Nov 21 11:45:00 crc kubenswrapper[4972]: I1121 11:45:00.275565 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7c92b46-f7f0-4914-8c54-2192c7997dee-secret-volume\") pod \"collect-profiles-29395425-xv7q5\" (UID: \"c7c92b46-f7f0-4914-8c54-2192c7997dee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395425-xv7q5" Nov 21 11:45:00 crc kubenswrapper[4972]: I1121 11:45:00.377689 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7c92b46-f7f0-4914-8c54-2192c7997dee-config-volume\") pod \"collect-profiles-29395425-xv7q5\" (UID: \"c7c92b46-f7f0-4914-8c54-2192c7997dee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395425-xv7q5" Nov 21 11:45:00 crc kubenswrapper[4972]: I1121 11:45:00.377730 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqlgn\" (UniqueName: \"kubernetes.io/projected/c7c92b46-f7f0-4914-8c54-2192c7997dee-kube-api-access-fqlgn\") pod \"collect-profiles-29395425-xv7q5\" (UID: \"c7c92b46-f7f0-4914-8c54-2192c7997dee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395425-xv7q5" Nov 21 11:45:00 crc kubenswrapper[4972]: I1121 11:45:00.377758 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7c92b46-f7f0-4914-8c54-2192c7997dee-secret-volume\") pod \"collect-profiles-29395425-xv7q5\" (UID: \"c7c92b46-f7f0-4914-8c54-2192c7997dee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395425-xv7q5" Nov 21 11:45:00 crc kubenswrapper[4972]: I1121 11:45:00.379967 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7c92b46-f7f0-4914-8c54-2192c7997dee-config-volume\") pod 
\"collect-profiles-29395425-xv7q5\" (UID: \"c7c92b46-f7f0-4914-8c54-2192c7997dee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395425-xv7q5" Nov 21 11:45:00 crc kubenswrapper[4972]: I1121 11:45:00.386273 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7c92b46-f7f0-4914-8c54-2192c7997dee-secret-volume\") pod \"collect-profiles-29395425-xv7q5\" (UID: \"c7c92b46-f7f0-4914-8c54-2192c7997dee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395425-xv7q5" Nov 21 11:45:00 crc kubenswrapper[4972]: I1121 11:45:00.406500 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqlgn\" (UniqueName: \"kubernetes.io/projected/c7c92b46-f7f0-4914-8c54-2192c7997dee-kube-api-access-fqlgn\") pod \"collect-profiles-29395425-xv7q5\" (UID: \"c7c92b46-f7f0-4914-8c54-2192c7997dee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395425-xv7q5" Nov 21 11:45:00 crc kubenswrapper[4972]: I1121 11:45:00.494051 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395425-xv7q5" Nov 21 11:45:01 crc kubenswrapper[4972]: I1121 11:45:01.001633 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395425-xv7q5"] Nov 21 11:45:01 crc kubenswrapper[4972]: W1121 11:45:01.011096 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7c92b46_f7f0_4914_8c54_2192c7997dee.slice/crio-785dca29a9b675a4b9a2ad9844b86ba01da8bccdd6beb85de8a692aa6b79bf6f WatchSource:0}: Error finding container 785dca29a9b675a4b9a2ad9844b86ba01da8bccdd6beb85de8a692aa6b79bf6f: Status 404 returned error can't find the container with id 785dca29a9b675a4b9a2ad9844b86ba01da8bccdd6beb85de8a692aa6b79bf6f Nov 21 11:45:01 crc kubenswrapper[4972]: I1121 11:45:01.104785 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395425-xv7q5" event={"ID":"c7c92b46-f7f0-4914-8c54-2192c7997dee","Type":"ContainerStarted","Data":"785dca29a9b675a4b9a2ad9844b86ba01da8bccdd6beb85de8a692aa6b79bf6f"} Nov 21 11:45:02 crc kubenswrapper[4972]: I1121 11:45:02.118302 4972 generic.go:334] "Generic (PLEG): container finished" podID="c7c92b46-f7f0-4914-8c54-2192c7997dee" containerID="9c19f18541a59b02b79ea06c99f10cb13c912040e39757c1bbf891d3e2f3c14e" exitCode=0 Nov 21 11:45:02 crc kubenswrapper[4972]: I1121 11:45:02.118375 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395425-xv7q5" event={"ID":"c7c92b46-f7f0-4914-8c54-2192c7997dee","Type":"ContainerDied","Data":"9c19f18541a59b02b79ea06c99f10cb13c912040e39757c1bbf891d3e2f3c14e"} Nov 21 11:45:03 crc kubenswrapper[4972]: I1121 11:45:03.546978 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395425-xv7q5" Nov 21 11:45:03 crc kubenswrapper[4972]: I1121 11:45:03.663747 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7c92b46-f7f0-4914-8c54-2192c7997dee-secret-volume\") pod \"c7c92b46-f7f0-4914-8c54-2192c7997dee\" (UID: \"c7c92b46-f7f0-4914-8c54-2192c7997dee\") " Nov 21 11:45:03 crc kubenswrapper[4972]: I1121 11:45:03.664016 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7c92b46-f7f0-4914-8c54-2192c7997dee-config-volume\") pod \"c7c92b46-f7f0-4914-8c54-2192c7997dee\" (UID: \"c7c92b46-f7f0-4914-8c54-2192c7997dee\") " Nov 21 11:45:03 crc kubenswrapper[4972]: I1121 11:45:03.664166 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqlgn\" (UniqueName: \"kubernetes.io/projected/c7c92b46-f7f0-4914-8c54-2192c7997dee-kube-api-access-fqlgn\") pod \"c7c92b46-f7f0-4914-8c54-2192c7997dee\" (UID: \"c7c92b46-f7f0-4914-8c54-2192c7997dee\") " Nov 21 11:45:03 crc kubenswrapper[4972]: I1121 11:45:03.664932 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7c92b46-f7f0-4914-8c54-2192c7997dee-config-volume" (OuterVolumeSpecName: "config-volume") pod "c7c92b46-f7f0-4914-8c54-2192c7997dee" (UID: "c7c92b46-f7f0-4914-8c54-2192c7997dee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:45:03 crc kubenswrapper[4972]: I1121 11:45:03.672270 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7c92b46-f7f0-4914-8c54-2192c7997dee-kube-api-access-fqlgn" (OuterVolumeSpecName: "kube-api-access-fqlgn") pod "c7c92b46-f7f0-4914-8c54-2192c7997dee" (UID: "c7c92b46-f7f0-4914-8c54-2192c7997dee"). InnerVolumeSpecName "kube-api-access-fqlgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:45:03 crc kubenswrapper[4972]: I1121 11:45:03.672381 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7c92b46-f7f0-4914-8c54-2192c7997dee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c7c92b46-f7f0-4914-8c54-2192c7997dee" (UID: "c7c92b46-f7f0-4914-8c54-2192c7997dee"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:45:03 crc kubenswrapper[4972]: I1121 11:45:03.767253 4972 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7c92b46-f7f0-4914-8c54-2192c7997dee-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 11:45:03 crc kubenswrapper[4972]: I1121 11:45:03.767622 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqlgn\" (UniqueName: \"kubernetes.io/projected/c7c92b46-f7f0-4914-8c54-2192c7997dee-kube-api-access-fqlgn\") on node \"crc\" DevicePath \"\"" Nov 21 11:45:03 crc kubenswrapper[4972]: I1121 11:45:03.767637 4972 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7c92b46-f7f0-4914-8c54-2192c7997dee-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 11:45:04 crc kubenswrapper[4972]: I1121 11:45:04.141358 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395425-xv7q5" event={"ID":"c7c92b46-f7f0-4914-8c54-2192c7997dee","Type":"ContainerDied","Data":"785dca29a9b675a4b9a2ad9844b86ba01da8bccdd6beb85de8a692aa6b79bf6f"} Nov 21 11:45:04 crc kubenswrapper[4972]: I1121 11:45:04.141412 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="785dca29a9b675a4b9a2ad9844b86ba01da8bccdd6beb85de8a692aa6b79bf6f" Nov 21 11:45:04 crc kubenswrapper[4972]: I1121 11:45:04.141451 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395425-xv7q5" Nov 21 11:45:04 crc kubenswrapper[4972]: I1121 11:45:04.646374 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395380-92w7j"] Nov 21 11:45:04 crc kubenswrapper[4972]: I1121 11:45:04.655989 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395380-92w7j"] Nov 21 11:45:05 crc kubenswrapper[4972]: I1121 11:45:05.785507 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e659f22f-1804-4119-a907-353634f17737" path="/var/lib/kubelet/pods/e659f22f-1804-4119-a907-353634f17737/volumes" Nov 21 11:45:37 crc kubenswrapper[4972]: I1121 11:45:37.533894 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sbdcf"] Nov 21 11:45:37 crc kubenswrapper[4972]: E1121 11:45:37.535455 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7c92b46-f7f0-4914-8c54-2192c7997dee" containerName="collect-profiles" Nov 21 11:45:37 crc kubenswrapper[4972]: I1121 11:45:37.535480 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7c92b46-f7f0-4914-8c54-2192c7997dee" containerName="collect-profiles" Nov 21 11:45:37 crc kubenswrapper[4972]: I1121 11:45:37.535779 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7c92b46-f7f0-4914-8c54-2192c7997dee" containerName="collect-profiles" Nov 21 11:45:37 crc kubenswrapper[4972]: I1121 11:45:37.538755 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sbdcf" Nov 21 11:45:37 crc kubenswrapper[4972]: I1121 11:45:37.547991 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sbdcf"] Nov 21 11:45:37 crc kubenswrapper[4972]: I1121 11:45:37.594003 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee85abd-f909-4d6a-8268-410b4f17c9ba-utilities\") pod \"community-operators-sbdcf\" (UID: \"5ee85abd-f909-4d6a-8268-410b4f17c9ba\") " pod="openshift-marketplace/community-operators-sbdcf" Nov 21 11:45:37 crc kubenswrapper[4972]: I1121 11:45:37.594078 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee85abd-f909-4d6a-8268-410b4f17c9ba-catalog-content\") pod \"community-operators-sbdcf\" (UID: \"5ee85abd-f909-4d6a-8268-410b4f17c9ba\") " pod="openshift-marketplace/community-operators-sbdcf" Nov 21 11:45:37 crc kubenswrapper[4972]: I1121 11:45:37.594194 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6sgs\" (UniqueName: \"kubernetes.io/projected/5ee85abd-f909-4d6a-8268-410b4f17c9ba-kube-api-access-j6sgs\") pod \"community-operators-sbdcf\" (UID: \"5ee85abd-f909-4d6a-8268-410b4f17c9ba\") " pod="openshift-marketplace/community-operators-sbdcf" Nov 21 11:45:37 crc kubenswrapper[4972]: I1121 11:45:37.696175 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6sgs\" (UniqueName: \"kubernetes.io/projected/5ee85abd-f909-4d6a-8268-410b4f17c9ba-kube-api-access-j6sgs\") pod \"community-operators-sbdcf\" (UID: \"5ee85abd-f909-4d6a-8268-410b4f17c9ba\") " pod="openshift-marketplace/community-operators-sbdcf" Nov 21 11:45:37 crc kubenswrapper[4972]: I1121 11:45:37.696330 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee85abd-f909-4d6a-8268-410b4f17c9ba-utilities\") pod \"community-operators-sbdcf\" (UID: \"5ee85abd-f909-4d6a-8268-410b4f17c9ba\") " pod="openshift-marketplace/community-operators-sbdcf" Nov 21 11:45:37 crc kubenswrapper[4972]: I1121 11:45:37.696408 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee85abd-f909-4d6a-8268-410b4f17c9ba-catalog-content\") pod \"community-operators-sbdcf\" (UID: \"5ee85abd-f909-4d6a-8268-410b4f17c9ba\") " pod="openshift-marketplace/community-operators-sbdcf" Nov 21 11:45:37 crc kubenswrapper[4972]: I1121 11:45:37.696773 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee85abd-f909-4d6a-8268-410b4f17c9ba-utilities\") pod \"community-operators-sbdcf\" (UID: \"5ee85abd-f909-4d6a-8268-410b4f17c9ba\") " pod="openshift-marketplace/community-operators-sbdcf" Nov 21 11:45:37 crc kubenswrapper[4972]: I1121 11:45:37.696860 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee85abd-f909-4d6a-8268-410b4f17c9ba-catalog-content\") pod \"community-operators-sbdcf\" (UID: \"5ee85abd-f909-4d6a-8268-410b4f17c9ba\") " pod="openshift-marketplace/community-operators-sbdcf" Nov 21 11:45:37 crc kubenswrapper[4972]: I1121 11:45:37.724704 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j6sgs\" (UniqueName: \"kubernetes.io/projected/5ee85abd-f909-4d6a-8268-410b4f17c9ba-kube-api-access-j6sgs\") pod \"community-operators-sbdcf\" (UID: \"5ee85abd-f909-4d6a-8268-410b4f17c9ba\") " pod="openshift-marketplace/community-operators-sbdcf" Nov 21 11:45:37 crc kubenswrapper[4972]: I1121 11:45:37.886431 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sbdcf" Nov 21 11:45:38 crc kubenswrapper[4972]: I1121 11:45:38.381992 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sbdcf"] Nov 21 11:45:38 crc kubenswrapper[4972]: W1121 11:45:38.383419 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ee85abd_f909_4d6a_8268_410b4f17c9ba.slice/crio-b1c3f869baa867d29d5951f15ec4441995c72b3b41cfa30e50c2b2786c7c372f WatchSource:0}: Error finding container b1c3f869baa867d29d5951f15ec4441995c72b3b41cfa30e50c2b2786c7c372f: Status 404 returned error can't find the container with id b1c3f869baa867d29d5951f15ec4441995c72b3b41cfa30e50c2b2786c7c372f Nov 21 11:45:38 crc kubenswrapper[4972]: I1121 11:45:38.551074 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbdcf" event={"ID":"5ee85abd-f909-4d6a-8268-410b4f17c9ba","Type":"ContainerStarted","Data":"b1c3f869baa867d29d5951f15ec4441995c72b3b41cfa30e50c2b2786c7c372f"} Nov 21 11:45:38 crc kubenswrapper[4972]: I1121 11:45:38.689980 4972 scope.go:117] "RemoveContainer" containerID="1470278944ae5d717505841a1b68e12438ad03ec783d8cda584e3e690b3c85c4" Nov 21 11:45:39 crc kubenswrapper[4972]: I1121 11:45:39.566619 4972 generic.go:334] "Generic (PLEG): container finished" podID="5ee85abd-f909-4d6a-8268-410b4f17c9ba" containerID="0d6ce3426b05350c0d6302c8e72a0e97a674b3d29b088b31f260edde43c904ee" exitCode=0 Nov 21 11:45:39 crc kubenswrapper[4972]: I1121 11:45:39.566866 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbdcf" event={"ID":"5ee85abd-f909-4d6a-8268-410b4f17c9ba","Type":"ContainerDied","Data":"0d6ce3426b05350c0d6302c8e72a0e97a674b3d29b088b31f260edde43c904ee"} Nov 21 11:45:41 crc kubenswrapper[4972]: I1121 11:45:41.610939 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbdcf" event={"ID":"5ee85abd-f909-4d6a-8268-410b4f17c9ba","Type":"ContainerStarted","Data":"5d75da336c096bf6053c4dfed19351da45dc40906daa8eb6a16a3f097808ee73"} Nov 21 11:45:43 crc kubenswrapper[4972]: I1121 11:45:43.643510 4972 generic.go:334] "Generic (PLEG): container finished" podID="5ee85abd-f909-4d6a-8268-410b4f17c9ba" containerID="5d75da336c096bf6053c4dfed19351da45dc40906daa8eb6a16a3f097808ee73" exitCode=0 Nov 21 11:45:43 crc kubenswrapper[4972]: I1121 11:45:43.643980 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbdcf" event={"ID":"5ee85abd-f909-4d6a-8268-410b4f17c9ba","Type":"ContainerDied","Data":"5d75da336c096bf6053c4dfed19351da45dc40906daa8eb6a16a3f097808ee73"} Nov 21 11:45:44 crc kubenswrapper[4972]: I1121 11:45:44.658458 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbdcf" event={"ID":"5ee85abd-f909-4d6a-8268-410b4f17c9ba","Type":"ContainerStarted","Data":"feb3dc54d7b65757b0c203379ad97a2493560aba41501f1f56bacc6a5726381b"} Nov 21 11:45:44 crc 
kubenswrapper[4972]: I1121 11:45:44.691351 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sbdcf" podStartSLOduration=3.192932631 podStartE2EDuration="7.691293819s" podCreationTimestamp="2025-11-21 11:45:37 +0000 UTC" firstStartedPulling="2025-11-21 11:45:39.569319563 +0000 UTC m=+7484.678462071" lastFinishedPulling="2025-11-21 11:45:44.067680761 +0000 UTC m=+7489.176823259" observedRunningTime="2025-11-21 11:45:44.675870764 +0000 UTC m=+7489.785013312" watchObservedRunningTime="2025-11-21 11:45:44.691293819 +0000 UTC m=+7489.800436357" Nov 21 11:45:47 crc kubenswrapper[4972]: I1121 11:45:47.886777 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sbdcf" Nov 21 11:45:47 crc kubenswrapper[4972]: I1121 11:45:47.887626 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sbdcf" Nov 21 11:45:47 crc kubenswrapper[4972]: I1121 11:45:47.957224 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sbdcf" Nov 21 11:45:48 crc kubenswrapper[4972]: I1121 11:45:48.717812 4972 generic.go:334] "Generic (PLEG): container finished" podID="6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3" containerID="60e8ea8a1e777fc08d25d9854c4535d212ea7176b05c75eb3d8569792d50a0aa" exitCode=0 Nov 21 11:45:48 crc kubenswrapper[4972]: I1121 11:45:48.717891 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-ckn27" event={"ID":"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3","Type":"ContainerDied","Data":"60e8ea8a1e777fc08d25d9854c4535d212ea7176b05c75eb3d8569792d50a0aa"} Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.184892 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-ckn27" Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.331599 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-ssh-key\") pod \"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3\" (UID: \"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3\") " Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.331666 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-inventory\") pod \"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3\" (UID: \"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3\") " Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.331737 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvq6n\" (UniqueName: \"kubernetes.io/projected/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-kube-api-access-mvq6n\") pod \"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3\" (UID: \"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3\") " Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.331763 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-ceph\") pod \"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3\" (UID: \"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3\") " Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.337496 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-ceph" (OuterVolumeSpecName: "ceph") pod "6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3" (UID: "6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.339422 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-kube-api-access-mvq6n" (OuterVolumeSpecName: "kube-api-access-mvq6n") pod "6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3" (UID: "6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3"). InnerVolumeSpecName "kube-api-access-mvq6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.362485 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-inventory" (OuterVolumeSpecName: "inventory") pod "6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3" (UID: "6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.365195 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3" (UID: "6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.434994 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvq6n\" (UniqueName: \"kubernetes.io/projected/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-kube-api-access-mvq6n\") on node \"crc\" DevicePath \"\"" Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.435041 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.435055 4972 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.435066 4972 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.745987 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-ckn27" event={"ID":"6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3","Type":"ContainerDied","Data":"a195b8306f5513a3b8787adfdff23befd8c4498c1ab14ce4916465393a555858"} Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.746428 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a195b8306f5513a3b8787adfdff23befd8c4498c1ab14ce4916465393a555858" Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.746070 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-ckn27" Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.846627 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-x5qbs"] Nov 21 11:45:50 crc kubenswrapper[4972]: E1121 11:45:50.847683 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3" containerName="configure-network-openstack-openstack-cell1" Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.847771 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3" containerName="configure-network-openstack-openstack-cell1" Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.848181 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3" containerName="configure-network-openstack-openstack-cell1" Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.849326 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-x5qbs" Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.854821 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.855002 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g4l5l" Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.855004 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.855444 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 11:45:50 crc kubenswrapper[4972]: I1121 11:45:50.878581 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-x5qbs"] Nov 21 11:45:51 crc kubenswrapper[4972]: I1121 11:45:51.051131 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtv5f\" (UniqueName: \"kubernetes.io/projected/ea334c61-e29f-432c-99c2-6e8463dde290-kube-api-access-qtv5f\") pod \"validate-network-openstack-openstack-cell1-x5qbs\" (UID: \"ea334c61-e29f-432c-99c2-6e8463dde290\") " pod="openstack/validate-network-openstack-openstack-cell1-x5qbs" Nov 21 11:45:51 crc kubenswrapper[4972]: I1121 11:45:51.051186 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ea334c61-e29f-432c-99c2-6e8463dde290-ceph\") pod \"validate-network-openstack-openstack-cell1-x5qbs\" (UID: \"ea334c61-e29f-432c-99c2-6e8463dde290\") " pod="openstack/validate-network-openstack-openstack-cell1-x5qbs" Nov 21 11:45:51 crc kubenswrapper[4972]: I1121 11:45:51.051278 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea334c61-e29f-432c-99c2-6e8463dde290-inventory\") pod \"validate-network-openstack-openstack-cell1-x5qbs\" (UID: \"ea334c61-e29f-432c-99c2-6e8463dde290\") " pod="openstack/validate-network-openstack-openstack-cell1-x5qbs" Nov 21 11:45:51 crc kubenswrapper[4972]: I1121 11:45:51.051538 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea334c61-e29f-432c-99c2-6e8463dde290-ssh-key\") pod \"validate-network-openstack-openstack-cell1-x5qbs\" (UID: \"ea334c61-e29f-432c-99c2-6e8463dde290\") " pod="openstack/validate-network-openstack-openstack-cell1-x5qbs" Nov 21 11:45:51 crc kubenswrapper[4972]: I1121 11:45:51.155154 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea334c61-e29f-432c-99c2-6e8463dde290-ssh-key\") pod \"validate-network-openstack-openstack-cell1-x5qbs\" (UID: \"ea334c61-e29f-432c-99c2-6e8463dde290\") " pod="openstack/validate-network-openstack-openstack-cell1-x5qbs" Nov 21 11:45:51 crc kubenswrapper[4972]: I1121 11:45:51.155471 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtv5f\" (UniqueName: \"kubernetes.io/projected/ea334c61-e29f-432c-99c2-6e8463dde290-kube-api-access-qtv5f\") pod \"validate-network-openstack-openstack-cell1-x5qbs\" (UID: \"ea334c61-e29f-432c-99c2-6e8463dde290\") " 
pod="openstack/validate-network-openstack-openstack-cell1-x5qbs" Nov 21 11:45:51 crc kubenswrapper[4972]: I1121 11:45:51.155538 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ea334c61-e29f-432c-99c2-6e8463dde290-ceph\") pod \"validate-network-openstack-openstack-cell1-x5qbs\" (UID: \"ea334c61-e29f-432c-99c2-6e8463dde290\") " pod="openstack/validate-network-openstack-openstack-cell1-x5qbs" Nov 21 11:45:51 crc kubenswrapper[4972]: I1121 11:45:51.155676 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea334c61-e29f-432c-99c2-6e8463dde290-inventory\") pod \"validate-network-openstack-openstack-cell1-x5qbs\" (UID: \"ea334c61-e29f-432c-99c2-6e8463dde290\") " pod="openstack/validate-network-openstack-openstack-cell1-x5qbs" Nov 21 11:45:51 crc kubenswrapper[4972]: I1121 11:45:51.161481 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea334c61-e29f-432c-99c2-6e8463dde290-ssh-key\") pod \"validate-network-openstack-openstack-cell1-x5qbs\" (UID: \"ea334c61-e29f-432c-99c2-6e8463dde290\") " pod="openstack/validate-network-openstack-openstack-cell1-x5qbs" Nov 21 11:45:51 crc kubenswrapper[4972]: I1121 11:45:51.162947 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea334c61-e29f-432c-99c2-6e8463dde290-inventory\") pod \"validate-network-openstack-openstack-cell1-x5qbs\" (UID: \"ea334c61-e29f-432c-99c2-6e8463dde290\") " pod="openstack/validate-network-openstack-openstack-cell1-x5qbs" Nov 21 11:45:51 crc kubenswrapper[4972]: I1121 11:45:51.170203 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ea334c61-e29f-432c-99c2-6e8463dde290-ceph\") pod \"validate-network-openstack-openstack-cell1-x5qbs\" (UID: \"ea334c61-e29f-432c-99c2-6e8463dde290\") " pod="openstack/validate-network-openstack-openstack-cell1-x5qbs" Nov 21 11:45:51 crc kubenswrapper[4972]: I1121 11:45:51.174182 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtv5f\" (UniqueName: \"kubernetes.io/projected/ea334c61-e29f-432c-99c2-6e8463dde290-kube-api-access-qtv5f\") pod \"validate-network-openstack-openstack-cell1-x5qbs\" (UID: \"ea334c61-e29f-432c-99c2-6e8463dde290\") " pod="openstack/validate-network-openstack-openstack-cell1-x5qbs" Nov 21 11:45:51 crc kubenswrapper[4972]: I1121 11:45:51.183301 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-x5qbs" Nov 21 11:45:51 crc kubenswrapper[4972]: I1121 11:45:51.799689 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-x5qbs"] Nov 21 11:45:52 crc kubenswrapper[4972]: I1121 11:45:52.773500 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-x5qbs" event={"ID":"ea334c61-e29f-432c-99c2-6e8463dde290","Type":"ContainerStarted","Data":"0973f8cc49fee50d950bf7defe88d97ac83376171954ff68df1b1320419c4569"} Nov 21 11:45:52 crc kubenswrapper[4972]: I1121 11:45:52.774298 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-x5qbs" event={"ID":"ea334c61-e29f-432c-99c2-6e8463dde290","Type":"ContainerStarted","Data":"d1a08661974915b54eff17f4d715fadddffd8172ac3a957b840eaa6efde386ff"} Nov 21 11:45:52 crc kubenswrapper[4972]: I1121 11:45:52.805009 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-x5qbs" podStartSLOduration=2.249803307 podStartE2EDuration="2.804985209s" podCreationTimestamp="2025-11-21 11:45:50 +0000 UTC" firstStartedPulling="2025-11-21 11:45:51.812106908 +0000 UTC m=+7496.921249416" lastFinishedPulling="2025-11-21 11:45:52.36728882 +0000 UTC m=+7497.476431318" observedRunningTime="2025-11-21 11:45:52.795995713 +0000 UTC m=+7497.905138261" watchObservedRunningTime="2025-11-21 11:45:52.804985209 +0000 UTC m=+7497.914127707" Nov 21 11:45:57 crc kubenswrapper[4972]: I1121 11:45:57.832800 4972 generic.go:334] "Generic (PLEG): container finished" podID="ea334c61-e29f-432c-99c2-6e8463dde290" containerID="0973f8cc49fee50d950bf7defe88d97ac83376171954ff68df1b1320419c4569" exitCode=0 Nov 21 11:45:57 crc kubenswrapper[4972]: I1121 11:45:57.833988 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-x5qbs" event={"ID":"ea334c61-e29f-432c-99c2-6e8463dde290","Type":"ContainerDied","Data":"0973f8cc49fee50d950bf7defe88d97ac83376171954ff68df1b1320419c4569"} Nov 21 11:45:57 crc kubenswrapper[4972]: I1121 11:45:57.945808 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sbdcf" Nov 21 11:45:58 crc kubenswrapper[4972]: I1121 11:45:58.013553 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sbdcf"] Nov 21 11:45:58 crc kubenswrapper[4972]: I1121 11:45:58.842354 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sbdcf" podUID="5ee85abd-f909-4d6a-8268-410b4f17c9ba" containerName="registry-server" containerID="cri-o://feb3dc54d7b65757b0c203379ad97a2493560aba41501f1f56bacc6a5726381b" gracePeriod=2 Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.456204 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-x5qbs" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.465527 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sbdcf" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.557427 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtv5f\" (UniqueName: \"kubernetes.io/projected/ea334c61-e29f-432c-99c2-6e8463dde290-kube-api-access-qtv5f\") pod \"ea334c61-e29f-432c-99c2-6e8463dde290\" (UID: \"ea334c61-e29f-432c-99c2-6e8463dde290\") " Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.557649 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea334c61-e29f-432c-99c2-6e8463dde290-ssh-key\") pod \"ea334c61-e29f-432c-99c2-6e8463dde290\" (UID: \"ea334c61-e29f-432c-99c2-6e8463dde290\") " Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.557793 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee85abd-f909-4d6a-8268-410b4f17c9ba-catalog-content\") pod \"5ee85abd-f909-4d6a-8268-410b4f17c9ba\" (UID: \"5ee85abd-f909-4d6a-8268-410b4f17c9ba\") " Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.557882 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ea334c61-e29f-432c-99c2-6e8463dde290-ceph\") pod \"ea334c61-e29f-432c-99c2-6e8463dde290\" (UID: \"ea334c61-e29f-432c-99c2-6e8463dde290\") " Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.558048 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee85abd-f909-4d6a-8268-410b4f17c9ba-utilities\") pod \"5ee85abd-f909-4d6a-8268-410b4f17c9ba\" (UID: \"5ee85abd-f909-4d6a-8268-410b4f17c9ba\") " Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.558214 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea334c61-e29f-432c-99c2-6e8463dde290-inventory\") pod \"ea334c61-e29f-432c-99c2-6e8463dde290\" (UID: \"ea334c61-e29f-432c-99c2-6e8463dde290\") " Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.558287 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6sgs\" (UniqueName: \"kubernetes.io/projected/5ee85abd-f909-4d6a-8268-410b4f17c9ba-kube-api-access-j6sgs\") pod \"5ee85abd-f909-4d6a-8268-410b4f17c9ba\" (UID: \"5ee85abd-f909-4d6a-8268-410b4f17c9ba\") " Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.560663 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ee85abd-f909-4d6a-8268-410b4f17c9ba-utilities" (OuterVolumeSpecName: "utilities") pod "5ee85abd-f909-4d6a-8268-410b4f17c9ba" (UID: "5ee85abd-f909-4d6a-8268-410b4f17c9ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.566311 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea334c61-e29f-432c-99c2-6e8463dde290-ceph" (OuterVolumeSpecName: "ceph") pod "ea334c61-e29f-432c-99c2-6e8463dde290" (UID: "ea334c61-e29f-432c-99c2-6e8463dde290"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.566874 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ee85abd-f909-4d6a-8268-410b4f17c9ba-kube-api-access-j6sgs" (OuterVolumeSpecName: "kube-api-access-j6sgs") pod "5ee85abd-f909-4d6a-8268-410b4f17c9ba" (UID: "5ee85abd-f909-4d6a-8268-410b4f17c9ba"). InnerVolumeSpecName "kube-api-access-j6sgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.579600 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea334c61-e29f-432c-99c2-6e8463dde290-kube-api-access-qtv5f" (OuterVolumeSpecName: "kube-api-access-qtv5f") pod "ea334c61-e29f-432c-99c2-6e8463dde290" (UID: "ea334c61-e29f-432c-99c2-6e8463dde290"). InnerVolumeSpecName "kube-api-access-qtv5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.593881 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea334c61-e29f-432c-99c2-6e8463dde290-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ea334c61-e29f-432c-99c2-6e8463dde290" (UID: "ea334c61-e29f-432c-99c2-6e8463dde290"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.596682 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea334c61-e29f-432c-99c2-6e8463dde290-inventory" (OuterVolumeSpecName: "inventory") pod "ea334c61-e29f-432c-99c2-6e8463dde290" (UID: "ea334c61-e29f-432c-99c2-6e8463dde290"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.629748 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ee85abd-f909-4d6a-8268-410b4f17c9ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ee85abd-f909-4d6a-8268-410b4f17c9ba" (UID: "5ee85abd-f909-4d6a-8268-410b4f17c9ba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.661065 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ea334c61-e29f-432c-99c2-6e8463dde290-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.661390 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee85abd-f909-4d6a-8268-410b4f17c9ba-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.661474 4972 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea334c61-e29f-432c-99c2-6e8463dde290-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.661541 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6sgs\" (UniqueName: \"kubernetes.io/projected/5ee85abd-f909-4d6a-8268-410b4f17c9ba-kube-api-access-j6sgs\") on node \"crc\" DevicePath \"\"" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.661601 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtv5f\" (UniqueName: \"kubernetes.io/projected/ea334c61-e29f-432c-99c2-6e8463dde290-kube-api-access-qtv5f\") on node \"crc\" DevicePath \"\"" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.661671 4972 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea334c61-e29f-432c-99c2-6e8463dde290-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.661749 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee85abd-f909-4d6a-8268-410b4f17c9ba-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.856766 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-x5qbs" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.856772 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-x5qbs" event={"ID":"ea334c61-e29f-432c-99c2-6e8463dde290","Type":"ContainerDied","Data":"d1a08661974915b54eff17f4d715fadddffd8172ac3a957b840eaa6efde386ff"} Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.857404 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1a08661974915b54eff17f4d715fadddffd8172ac3a957b840eaa6efde386ff" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.861279 4972 generic.go:334] "Generic (PLEG): container finished" podID="5ee85abd-f909-4d6a-8268-410b4f17c9ba" containerID="feb3dc54d7b65757b0c203379ad97a2493560aba41501f1f56bacc6a5726381b" exitCode=0 Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.861370 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbdcf" event={"ID":"5ee85abd-f909-4d6a-8268-410b4f17c9ba","Type":"ContainerDied","Data":"feb3dc54d7b65757b0c203379ad97a2493560aba41501f1f56bacc6a5726381b"} Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.861540 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbdcf" event={"ID":"5ee85abd-f909-4d6a-8268-410b4f17c9ba","Type":"ContainerDied","Data":"b1c3f869baa867d29d5951f15ec4441995c72b3b41cfa30e50c2b2786c7c372f"} Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.861572 4972 scope.go:117] "RemoveContainer" containerID="feb3dc54d7b65757b0c203379ad97a2493560aba41501f1f56bacc6a5726381b" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.861870 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sbdcf" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.930545 4972 scope.go:117] "RemoveContainer" containerID="5d75da336c096bf6053c4dfed19351da45dc40906daa8eb6a16a3f097808ee73" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.944426 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sbdcf"] Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.964253 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sbdcf"] Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.988837 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-cp4kb"] Nov 21 11:45:59 crc kubenswrapper[4972]: E1121 11:45:59.989254 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea334c61-e29f-432c-99c2-6e8463dde290" containerName="validate-network-openstack-openstack-cell1" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.989271 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea334c61-e29f-432c-99c2-6e8463dde290" containerName="validate-network-openstack-openstack-cell1" Nov 21 11:45:59 crc kubenswrapper[4972]: E1121 11:45:59.989291 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ee85abd-f909-4d6a-8268-410b4f17c9ba" containerName="extract-content" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.989297 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee85abd-f909-4d6a-8268-410b4f17c9ba" containerName="extract-content" Nov 21 11:45:59 crc kubenswrapper[4972]: E1121 11:45:59.989315 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ee85abd-f909-4d6a-8268-410b4f17c9ba" containerName="registry-server" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.989321 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee85abd-f909-4d6a-8268-410b4f17c9ba" containerName="registry-server" Nov 21 11:45:59 crc kubenswrapper[4972]: E1121 11:45:59.989334 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ee85abd-f909-4d6a-8268-410b4f17c9ba" containerName="extract-utilities" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.989341 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee85abd-f909-4d6a-8268-410b4f17c9ba" containerName="extract-utilities" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.989552 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ee85abd-f909-4d6a-8268-410b4f17c9ba" containerName="registry-server" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.989585 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea334c61-e29f-432c-99c2-6e8463dde290" containerName="validate-network-openstack-openstack-cell1" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.990504 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-cp4kb" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.993245 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.993477 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.993689 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 11:45:59 crc kubenswrapper[4972]: I1121 11:45:59.993764 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g4l5l" Nov 21 11:46:00 crc kubenswrapper[4972]: I1121 11:46:00.006947 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-cp4kb"] Nov 21 11:46:00 crc kubenswrapper[4972]: I1121 11:46:00.041736 4972 scope.go:117] "RemoveContainer" containerID="0d6ce3426b05350c0d6302c8e72a0e97a674b3d29b088b31f260edde43c904ee" Nov 21 11:46:00 crc kubenswrapper[4972]: I1121 11:46:00.072009 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a312b75b-8159-44e4-a2aa-c83ef0991eb4-ceph\") pod \"install-os-openstack-openstack-cell1-cp4kb\" (UID: \"a312b75b-8159-44e4-a2aa-c83ef0991eb4\") " pod="openstack/install-os-openstack-openstack-cell1-cp4kb" Nov 21 11:46:00 crc kubenswrapper[4972]: I1121 11:46:00.072273 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a312b75b-8159-44e4-a2aa-c83ef0991eb4-inventory\") pod \"install-os-openstack-openstack-cell1-cp4kb\" (UID: \"a312b75b-8159-44e4-a2aa-c83ef0991eb4\") " pod="openstack/install-os-openstack-openstack-cell1-cp4kb" Nov 21 11:46:00 crc kubenswrapper[4972]: I1121 11:46:00.072579 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4zjf\" (UniqueName: \"kubernetes.io/projected/a312b75b-8159-44e4-a2aa-c83ef0991eb4-kube-api-access-m4zjf\") pod \"install-os-openstack-openstack-cell1-cp4kb\" (UID: \"a312b75b-8159-44e4-a2aa-c83ef0991eb4\") " pod="openstack/install-os-openstack-openstack-cell1-cp4kb" Nov 21 11:46:00 crc kubenswrapper[4972]: I1121 11:46:00.072899 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a312b75b-8159-44e4-a2aa-c83ef0991eb4-ssh-key\") pod \"install-os-openstack-openstack-cell1-cp4kb\" (UID: \"a312b75b-8159-44e4-a2aa-c83ef0991eb4\") " pod="openstack/install-os-openstack-openstack-cell1-cp4kb" Nov 21 11:46:00 crc kubenswrapper[4972]: I1121 11:46:00.088893 4972 scope.go:117] "RemoveContainer" containerID="feb3dc54d7b65757b0c203379ad97a2493560aba41501f1f56bacc6a5726381b" Nov 21 11:46:00 crc kubenswrapper[4972]: E1121 11:46:00.089571 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feb3dc54d7b65757b0c203379ad97a2493560aba41501f1f56bacc6a5726381b\": container with ID starting with feb3dc54d7b65757b0c203379ad97a2493560aba41501f1f56bacc6a5726381b not found: ID does not exist" containerID="feb3dc54d7b65757b0c203379ad97a2493560aba41501f1f56bacc6a5726381b" Nov 21 11:46:00 crc kubenswrapper[4972]: I1121 11:46:00.089604 4972 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feb3dc54d7b65757b0c203379ad97a2493560aba41501f1f56bacc6a5726381b"} err="failed to get container status \"feb3dc54d7b65757b0c203379ad97a2493560aba41501f1f56bacc6a5726381b\": rpc error: code = NotFound desc = could not find container \"feb3dc54d7b65757b0c203379ad97a2493560aba41501f1f56bacc6a5726381b\": container with ID starting with feb3dc54d7b65757b0c203379ad97a2493560aba41501f1f56bacc6a5726381b not found: ID does not exist" Nov 21 11:46:00 crc kubenswrapper[4972]: I1121 11:46:00.089627 4972 scope.go:117] "RemoveContainer" containerID="5d75da336c096bf6053c4dfed19351da45dc40906daa8eb6a16a3f097808ee73" Nov 21 11:46:00 crc kubenswrapper[4972]: E1121 11:46:00.090025 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d75da336c096bf6053c4dfed19351da45dc40906daa8eb6a16a3f097808ee73\": container with ID starting with 5d75da336c096bf6053c4dfed19351da45dc40906daa8eb6a16a3f097808ee73 not found: ID does not exist" containerID="5d75da336c096bf6053c4dfed19351da45dc40906daa8eb6a16a3f097808ee73" Nov 21 11:46:00 crc kubenswrapper[4972]: I1121 11:46:00.090053 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d75da336c096bf6053c4dfed19351da45dc40906daa8eb6a16a3f097808ee73"} err="failed to get container status \"5d75da336c096bf6053c4dfed19351da45dc40906daa8eb6a16a3f097808ee73\": rpc error: code = NotFound desc = could not find container \"5d75da336c096bf6053c4dfed19351da45dc40906daa8eb6a16a3f097808ee73\": container with ID starting with 5d75da336c096bf6053c4dfed19351da45dc40906daa8eb6a16a3f097808ee73 not found: ID does not exist" Nov 21 11:46:00 crc kubenswrapper[4972]: I1121 11:46:00.090068 4972 scope.go:117] "RemoveContainer" containerID="0d6ce3426b05350c0d6302c8e72a0e97a674b3d29b088b31f260edde43c904ee" Nov 21 11:46:00 crc kubenswrapper[4972]: E1121 11:46:00.090388 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d6ce3426b05350c0d6302c8e72a0e97a674b3d29b088b31f260edde43c904ee\": container with ID starting with 0d6ce3426b05350c0d6302c8e72a0e97a674b3d29b088b31f260edde43c904ee not found: ID does not exist" containerID="0d6ce3426b05350c0d6302c8e72a0e97a674b3d29b088b31f260edde43c904ee" Nov 21 11:46:00 crc kubenswrapper[4972]: I1121 11:46:00.090410 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d6ce3426b05350c0d6302c8e72a0e97a674b3d29b088b31f260edde43c904ee"} err="failed to get container status \"0d6ce3426b05350c0d6302c8e72a0e97a674b3d29b088b31f260edde43c904ee\": rpc error: code = NotFound desc = could not find container \"0d6ce3426b05350c0d6302c8e72a0e97a674b3d29b088b31f260edde43c904ee\": container with ID starting with 0d6ce3426b05350c0d6302c8e72a0e97a674b3d29b088b31f260edde43c904ee not found: ID does not exist" Nov 21 11:46:00 crc kubenswrapper[4972]: I1121 11:46:00.175563 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a312b75b-8159-44e4-a2aa-c83ef0991eb4-ceph\") pod \"install-os-openstack-openstack-cell1-cp4kb\" (UID: \"a312b75b-8159-44e4-a2aa-c83ef0991eb4\") " pod="openstack/install-os-openstack-openstack-cell1-cp4kb" Nov 21 11:46:00 crc kubenswrapper[4972]: I1121 11:46:00.175713 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/a312b75b-8159-44e4-a2aa-c83ef0991eb4-inventory\") pod \"install-os-openstack-openstack-cell1-cp4kb\" (UID: \"a312b75b-8159-44e4-a2aa-c83ef0991eb4\") " pod="openstack/install-os-openstack-openstack-cell1-cp4kb" Nov 21 11:46:00 crc kubenswrapper[4972]: I1121 11:46:00.175827 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4zjf\" (UniqueName: \"kubernetes.io/projected/a312b75b-8159-44e4-a2aa-c83ef0991eb4-kube-api-access-m4zjf\") pod \"install-os-openstack-openstack-cell1-cp4kb\" (UID: \"a312b75b-8159-44e4-a2aa-c83ef0991eb4\") " pod="openstack/install-os-openstack-openstack-cell1-cp4kb" Nov 21 11:46:00 crc kubenswrapper[4972]: I1121 11:46:00.175963 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a312b75b-8159-44e4-a2aa-c83ef0991eb4-ssh-key\") pod \"install-os-openstack-openstack-cell1-cp4kb\" (UID: \"a312b75b-8159-44e4-a2aa-c83ef0991eb4\") " pod="openstack/install-os-openstack-openstack-cell1-cp4kb" Nov 21 11:46:00 crc kubenswrapper[4972]: I1121 11:46:00.181652 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a312b75b-8159-44e4-a2aa-c83ef0991eb4-ssh-key\") pod \"install-os-openstack-openstack-cell1-cp4kb\" (UID: \"a312b75b-8159-44e4-a2aa-c83ef0991eb4\") " pod="openstack/install-os-openstack-openstack-cell1-cp4kb" Nov 21 11:46:00 crc kubenswrapper[4972]: I1121 11:46:00.182351 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a312b75b-8159-44e4-a2aa-c83ef0991eb4-inventory\") pod \"install-os-openstack-openstack-cell1-cp4kb\" (UID: \"a312b75b-8159-44e4-a2aa-c83ef0991eb4\") " pod="openstack/install-os-openstack-openstack-cell1-cp4kb" Nov 21 11:46:00 crc kubenswrapper[4972]: I1121 11:46:00.183866 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a312b75b-8159-44e4-a2aa-c83ef0991eb4-ceph\") pod \"install-os-openstack-openstack-cell1-cp4kb\" (UID: \"a312b75b-8159-44e4-a2aa-c83ef0991eb4\") " pod="openstack/install-os-openstack-openstack-cell1-cp4kb" Nov 21 11:46:00 crc kubenswrapper[4972]: I1121 11:46:00.195128 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4zjf\" (UniqueName: \"kubernetes.io/projected/a312b75b-8159-44e4-a2aa-c83ef0991eb4-kube-api-access-m4zjf\") pod \"install-os-openstack-openstack-cell1-cp4kb\" (UID: \"a312b75b-8159-44e4-a2aa-c83ef0991eb4\") " pod="openstack/install-os-openstack-openstack-cell1-cp4kb" Nov 21 11:46:00 crc kubenswrapper[4972]: I1121 11:46:00.428247 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-cp4kb" Nov 21 11:46:01 crc kubenswrapper[4972]: I1121 11:46:01.054350 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-cp4kb"] Nov 21 11:46:01 crc kubenswrapper[4972]: I1121 11:46:01.771529 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ee85abd-f909-4d6a-8268-410b4f17c9ba" path="/var/lib/kubelet/pods/5ee85abd-f909-4d6a-8268-410b4f17c9ba/volumes" Nov 21 11:46:01 crc kubenswrapper[4972]: I1121 11:46:01.886233 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-cp4kb" event={"ID":"a312b75b-8159-44e4-a2aa-c83ef0991eb4","Type":"ContainerStarted","Data":"a209cfe0511bd3a825a90572f9d574e1d90e8911896c0ff32249ba156f67a417"} Nov 21 11:46:01 crc kubenswrapper[4972]: I1121 11:46:01.886277 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-cp4kb" event={"ID":"a312b75b-8159-44e4-a2aa-c83ef0991eb4","Type":"ContainerStarted","Data":"8ef42fecc2a899a80386dbf1a43d3730f49a465d7b73f372b134b39deb509982"} Nov 21 11:46:01 crc kubenswrapper[4972]: I1121 11:46:01.922387 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-cp4kb" podStartSLOduration=2.458699191 podStartE2EDuration="2.922363551s" podCreationTimestamp="2025-11-21 11:45:59 +0000 UTC" firstStartedPulling="2025-11-21 11:46:01.064912805 +0000 UTC m=+7506.174055343" lastFinishedPulling="2025-11-21 11:46:01.528577205 +0000 UTC m=+7506.637719703" observedRunningTime="2025-11-21 11:46:01.903936507 +0000 UTC m=+7507.013079005" watchObservedRunningTime="2025-11-21 11:46:01.922363551 +0000 UTC m=+7507.031506079" Nov 21 11:46:26 crc kubenswrapper[4972]: I1121 11:46:26.179176 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:46:26 crc kubenswrapper[4972]: I1121 11:46:26.179806 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:46:46 crc kubenswrapper[4972]: I1121 11:46:46.371393 4972 generic.go:334] "Generic (PLEG): container finished" podID="a312b75b-8159-44e4-a2aa-c83ef0991eb4" containerID="a209cfe0511bd3a825a90572f9d574e1d90e8911896c0ff32249ba156f67a417" exitCode=0 Nov 21 11:46:46 crc kubenswrapper[4972]: I1121 11:46:46.371532 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-cp4kb" event={"ID":"a312b75b-8159-44e4-a2aa-c83ef0991eb4","Type":"ContainerDied","Data":"a209cfe0511bd3a825a90572f9d574e1d90e8911896c0ff32249ba156f67a417"} Nov 21 11:46:47 crc kubenswrapper[4972]: I1121 11:46:47.956524 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-cp4kb" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.084855 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a312b75b-8159-44e4-a2aa-c83ef0991eb4-ceph\") pod \"a312b75b-8159-44e4-a2aa-c83ef0991eb4\" (UID: \"a312b75b-8159-44e4-a2aa-c83ef0991eb4\") " Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.085352 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a312b75b-8159-44e4-a2aa-c83ef0991eb4-ssh-key\") pod \"a312b75b-8159-44e4-a2aa-c83ef0991eb4\" (UID: \"a312b75b-8159-44e4-a2aa-c83ef0991eb4\") " Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.085455 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4zjf\" (UniqueName: \"kubernetes.io/projected/a312b75b-8159-44e4-a2aa-c83ef0991eb4-kube-api-access-m4zjf\") pod \"a312b75b-8159-44e4-a2aa-c83ef0991eb4\" (UID: \"a312b75b-8159-44e4-a2aa-c83ef0991eb4\") " Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.085583 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a312b75b-8159-44e4-a2aa-c83ef0991eb4-inventory\") pod \"a312b75b-8159-44e4-a2aa-c83ef0991eb4\" (UID: \"a312b75b-8159-44e4-a2aa-c83ef0991eb4\") " Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.092272 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a312b75b-8159-44e4-a2aa-c83ef0991eb4-ceph" (OuterVolumeSpecName: "ceph") pod "a312b75b-8159-44e4-a2aa-c83ef0991eb4" (UID: "a312b75b-8159-44e4-a2aa-c83ef0991eb4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.093269 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a312b75b-8159-44e4-a2aa-c83ef0991eb4-kube-api-access-m4zjf" (OuterVolumeSpecName: "kube-api-access-m4zjf") pod "a312b75b-8159-44e4-a2aa-c83ef0991eb4" (UID: "a312b75b-8159-44e4-a2aa-c83ef0991eb4"). InnerVolumeSpecName "kube-api-access-m4zjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.115070 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a312b75b-8159-44e4-a2aa-c83ef0991eb4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a312b75b-8159-44e4-a2aa-c83ef0991eb4" (UID: "a312b75b-8159-44e4-a2aa-c83ef0991eb4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.123530 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a312b75b-8159-44e4-a2aa-c83ef0991eb4-inventory" (OuterVolumeSpecName: "inventory") pod "a312b75b-8159-44e4-a2aa-c83ef0991eb4" (UID: "a312b75b-8159-44e4-a2aa-c83ef0991eb4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.190059 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a312b75b-8159-44e4-a2aa-c83ef0991eb4-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.190131 4972 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a312b75b-8159-44e4-a2aa-c83ef0991eb4-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.190161 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4zjf\" (UniqueName: \"kubernetes.io/projected/a312b75b-8159-44e4-a2aa-c83ef0991eb4-kube-api-access-m4zjf\") on node \"crc\" DevicePath \"\"" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.190181 4972 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a312b75b-8159-44e4-a2aa-c83ef0991eb4-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.421198 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-cp4kb" event={"ID":"a312b75b-8159-44e4-a2aa-c83ef0991eb4","Type":"ContainerDied","Data":"8ef42fecc2a899a80386dbf1a43d3730f49a465d7b73f372b134b39deb509982"} Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.421264 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ef42fecc2a899a80386dbf1a43d3730f49a465d7b73f372b134b39deb509982" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.421523 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-cp4kb" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.533205 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-7mgs5"] Nov 21 11:46:48 crc kubenswrapper[4972]: E1121 11:46:48.533820 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a312b75b-8159-44e4-a2aa-c83ef0991eb4" containerName="install-os-openstack-openstack-cell1" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.533880 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="a312b75b-8159-44e4-a2aa-c83ef0991eb4" containerName="install-os-openstack-openstack-cell1" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.534263 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="a312b75b-8159-44e4-a2aa-c83ef0991eb4" containerName="install-os-openstack-openstack-cell1" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.537164 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-7mgs5" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.541160 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.542109 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g4l5l" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.542337 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.542854 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.552573 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-7mgs5"] Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.704960 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dphzl\" (UniqueName: \"kubernetes.io/projected/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-kube-api-access-dphzl\") pod \"configure-os-openstack-openstack-cell1-7mgs5\" (UID: \"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097\") " pod="openstack/configure-os-openstack-openstack-cell1-7mgs5" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.705455 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-ssh-key\") pod \"configure-os-openstack-openstack-cell1-7mgs5\" (UID: \"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097\") " pod="openstack/configure-os-openstack-openstack-cell1-7mgs5" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.705733 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-inventory\") pod \"configure-os-openstack-openstack-cell1-7mgs5\" (UID: \"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097\") " pod="openstack/configure-os-openstack-openstack-cell1-7mgs5" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.705893 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-ceph\") pod \"configure-os-openstack-openstack-cell1-7mgs5\" (UID: \"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097\") " pod="openstack/configure-os-openstack-openstack-cell1-7mgs5" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.808089 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-ssh-key\") pod \"configure-os-openstack-openstack-cell1-7mgs5\" (UID: \"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097\") " pod="openstack/configure-os-openstack-openstack-cell1-7mgs5" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.808238 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-inventory\") pod \"configure-os-openstack-openstack-cell1-7mgs5\" (UID: \"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097\") " pod="openstack/configure-os-openstack-openstack-cell1-7mgs5" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 
11:46:48.808271 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-ceph\") pod \"configure-os-openstack-openstack-cell1-7mgs5\" (UID: \"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097\") " pod="openstack/configure-os-openstack-openstack-cell1-7mgs5" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.808304 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dphzl\" (UniqueName: \"kubernetes.io/projected/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-kube-api-access-dphzl\") pod \"configure-os-openstack-openstack-cell1-7mgs5\" (UID: \"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097\") " pod="openstack/configure-os-openstack-openstack-cell1-7mgs5" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.813724 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-ceph\") pod \"configure-os-openstack-openstack-cell1-7mgs5\" (UID: \"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097\") " pod="openstack/configure-os-openstack-openstack-cell1-7mgs5" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.814456 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-ssh-key\") pod \"configure-os-openstack-openstack-cell1-7mgs5\" (UID: \"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097\") " pod="openstack/configure-os-openstack-openstack-cell1-7mgs5" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.815526 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-inventory\") pod \"configure-os-openstack-openstack-cell1-7mgs5\" (UID: \"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097\") " pod="openstack/configure-os-openstack-openstack-cell1-7mgs5" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.837575 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dphzl\" (UniqueName: \"kubernetes.io/projected/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-kube-api-access-dphzl\") pod \"configure-os-openstack-openstack-cell1-7mgs5\" (UID: \"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097\") " pod="openstack/configure-os-openstack-openstack-cell1-7mgs5" Nov 21 11:46:48 crc kubenswrapper[4972]: I1121 11:46:48.861251 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-7mgs5" Nov 21 11:46:49 crc kubenswrapper[4972]: I1121 11:46:49.535544 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-7mgs5"] Nov 21 11:46:50 crc kubenswrapper[4972]: I1121 11:46:50.447381 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-7mgs5" event={"ID":"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097","Type":"ContainerStarted","Data":"5b76bbe55bd755391e5ecc300c4613512afc29fd59384334ccbe9f00a457391b"} Nov 21 11:46:50 crc kubenswrapper[4972]: I1121 11:46:50.447737 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-7mgs5" event={"ID":"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097","Type":"ContainerStarted","Data":"d172663226b78a223c62223850b606b4b370ac27841d964bf8407b29dcd7bc4a"} Nov 21 11:46:50 crc kubenswrapper[4972]: I1121 11:46:50.471505 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-7mgs5" podStartSLOduration=1.987084981 podStartE2EDuration="2.471486966s" podCreationTimestamp="2025-11-21 11:46:48 +0000 UTC" firstStartedPulling="2025-11-21 11:46:49.543751935 +0000 UTC m=+7554.652894433" lastFinishedPulling="2025-11-21 11:46:50.02815391 +0000 UTC m=+7555.137296418" observedRunningTime="2025-11-21 11:46:50.461938855 +0000 UTC m=+7555.571081353" watchObservedRunningTime="2025-11-21 11:46:50.471486966 +0000 UTC m=+7555.580629464" Nov 21 11:46:56 crc kubenswrapper[4972]: I1121 11:46:56.179230 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:46:56 crc kubenswrapper[4972]: I1121 11:46:56.179997 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:47:21 crc kubenswrapper[4972]: I1121 11:47:21.748662 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="4a94fb30-1130-45e4-8ce8-9b0cdf0401b4" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Nov 21 11:47:26 crc kubenswrapper[4972]: I1121 11:47:26.179064 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:47:26 crc kubenswrapper[4972]: I1121 11:47:26.179713 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:47:26 crc kubenswrapper[4972]: I1121 11:47:26.179775 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" 
Nov 21 11:47:26 crc kubenswrapper[4972]: I1121 11:47:26.180645 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b79c2b857a1fdd2c8360d6a73525adb3af3295fc033655a7e49f7c9eebd3b913"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 11:47:26 crc kubenswrapper[4972]: I1121 11:47:26.180727 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://b79c2b857a1fdd2c8360d6a73525adb3af3295fc033655a7e49f7c9eebd3b913" gracePeriod=600 Nov 21 11:47:27 crc kubenswrapper[4972]: I1121 11:47:27.902224 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="b79c2b857a1fdd2c8360d6a73525adb3af3295fc033655a7e49f7c9eebd3b913" exitCode=0 Nov 21 11:47:27 crc kubenswrapper[4972]: I1121 11:47:27.902925 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"b79c2b857a1fdd2c8360d6a73525adb3af3295fc033655a7e49f7c9eebd3b913"} Nov 21 11:47:27 crc kubenswrapper[4972]: I1121 11:47:27.902959 4972 scope.go:117] "RemoveContainer" containerID="4c69498c05072a2e2abbc200219ff40af14c76a8cc756c2f65446578eeee0cc8" Nov 21 11:47:28 crc kubenswrapper[4972]: I1121 11:47:28.919133 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0"} Nov 21 11:47:46 crc kubenswrapper[4972]: I1121 11:47:46.135585 4972 generic.go:334] "Generic (PLEG): container finished" podID="536c00fd-ea0e-42f4-b0d9-4d6f9ee96097" containerID="5b76bbe55bd755391e5ecc300c4613512afc29fd59384334ccbe9f00a457391b" exitCode=0 Nov 21 11:47:46 crc kubenswrapper[4972]: I1121 11:47:46.135631 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-7mgs5" event={"ID":"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097","Type":"ContainerDied","Data":"5b76bbe55bd755391e5ecc300c4613512afc29fd59384334ccbe9f00a457391b"} Nov 21 11:47:47 crc kubenswrapper[4972]: I1121 11:47:47.696038 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-7mgs5" Nov 21 11:47:47 crc kubenswrapper[4972]: I1121 11:47:47.791038 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-inventory\") pod \"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097\" (UID: \"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097\") " Nov 21 11:47:47 crc kubenswrapper[4972]: I1121 11:47:47.791451 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-ssh-key\") pod \"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097\" (UID: \"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097\") " Nov 21 11:47:47 crc kubenswrapper[4972]: I1121 11:47:47.791500 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dphzl\" (UniqueName: \"kubernetes.io/projected/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-kube-api-access-dphzl\") pod \"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097\" (UID: \"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097\") " Nov 21 11:47:47 crc kubenswrapper[4972]: I1121 11:47:47.791526 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-ceph\") pod \"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097\" (UID: \"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097\") " Nov 21 11:47:47 crc kubenswrapper[4972]: I1121 11:47:47.798016 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-ceph" (OuterVolumeSpecName: "ceph") pod "536c00fd-ea0e-42f4-b0d9-4d6f9ee96097" (UID: "536c00fd-ea0e-42f4-b0d9-4d6f9ee96097"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:47:47 crc kubenswrapper[4972]: I1121 11:47:47.798038 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-kube-api-access-dphzl" (OuterVolumeSpecName: "kube-api-access-dphzl") pod "536c00fd-ea0e-42f4-b0d9-4d6f9ee96097" (UID: "536c00fd-ea0e-42f4-b0d9-4d6f9ee96097"). InnerVolumeSpecName "kube-api-access-dphzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:47:47 crc kubenswrapper[4972]: I1121 11:47:47.832766 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-inventory" (OuterVolumeSpecName: "inventory") pod "536c00fd-ea0e-42f4-b0d9-4d6f9ee96097" (UID: "536c00fd-ea0e-42f4-b0d9-4d6f9ee96097"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:47:47 crc kubenswrapper[4972]: I1121 11:47:47.837677 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "536c00fd-ea0e-42f4-b0d9-4d6f9ee96097" (UID: "536c00fd-ea0e-42f4-b0d9-4d6f9ee96097"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:47:47 crc kubenswrapper[4972]: I1121 11:47:47.894730 4972 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 11:47:47 crc kubenswrapper[4972]: I1121 11:47:47.894761 4972 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 11:47:47 crc kubenswrapper[4972]: I1121 11:47:47.894771 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dphzl\" (UniqueName: \"kubernetes.io/projected/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-kube-api-access-dphzl\") on node \"crc\" DevicePath \"\"" Nov 21 11:47:47 crc kubenswrapper[4972]: I1121 11:47:47.894781 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/536c00fd-ea0e-42f4-b0d9-4d6f9ee96097-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.157597 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-7mgs5" event={"ID":"536c00fd-ea0e-42f4-b0d9-4d6f9ee96097","Type":"ContainerDied","Data":"d172663226b78a223c62223850b606b4b370ac27841d964bf8407b29dcd7bc4a"} Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.157636 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d172663226b78a223c62223850b606b4b370ac27841d964bf8407b29dcd7bc4a" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.157699 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-7mgs5" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.262666 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-jhnpb"] Nov 21 11:47:48 crc kubenswrapper[4972]: E1121 11:47:48.263227 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536c00fd-ea0e-42f4-b0d9-4d6f9ee96097" containerName="configure-os-openstack-openstack-cell1" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.263247 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="536c00fd-ea0e-42f4-b0d9-4d6f9ee96097" containerName="configure-os-openstack-openstack-cell1" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.263522 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="536c00fd-ea0e-42f4-b0d9-4d6f9ee96097" containerName="configure-os-openstack-openstack-cell1" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.264438 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-jhnpb" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.268103 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g4l5l" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.268157 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.268221 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.268254 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.277367 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-jhnpb"] Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.406032 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-inventory-0\") pod \"ssh-known-hosts-openstack-jhnpb\" (UID: \"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38\") " pod="openstack/ssh-known-hosts-openstack-jhnpb" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.406356 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqcxg\" (UniqueName: \"kubernetes.io/projected/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-kube-api-access-vqcxg\") pod \"ssh-known-hosts-openstack-jhnpb\" (UID: \"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38\") " pod="openstack/ssh-known-hosts-openstack-jhnpb" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.406514 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-jhnpb\" (UID: \"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38\") " pod="openstack/ssh-known-hosts-openstack-jhnpb" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.406790 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-ceph\") pod \"ssh-known-hosts-openstack-jhnpb\" (UID: \"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38\") " pod="openstack/ssh-known-hosts-openstack-jhnpb" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.508553 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-ceph\") pod \"ssh-known-hosts-openstack-jhnpb\" (UID: \"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38\") " pod="openstack/ssh-known-hosts-openstack-jhnpb" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.509072 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-inventory-0\") pod \"ssh-known-hosts-openstack-jhnpb\" (UID: \"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38\") " pod="openstack/ssh-known-hosts-openstack-jhnpb" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.509137 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqcxg\" (UniqueName: 
\"kubernetes.io/projected/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-kube-api-access-vqcxg\") pod \"ssh-known-hosts-openstack-jhnpb\" (UID: \"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38\") " pod="openstack/ssh-known-hosts-openstack-jhnpb" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.509184 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-jhnpb\" (UID: \"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38\") " pod="openstack/ssh-known-hosts-openstack-jhnpb" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.515439 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-jhnpb\" (UID: \"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38\") " pod="openstack/ssh-known-hosts-openstack-jhnpb" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.517438 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-inventory-0\") pod \"ssh-known-hosts-openstack-jhnpb\" (UID: \"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38\") " pod="openstack/ssh-known-hosts-openstack-jhnpb" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.529064 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-ceph\") pod \"ssh-known-hosts-openstack-jhnpb\" (UID: \"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38\") " pod="openstack/ssh-known-hosts-openstack-jhnpb" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.530080 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqcxg\" (UniqueName: \"kubernetes.io/projected/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-kube-api-access-vqcxg\") pod \"ssh-known-hosts-openstack-jhnpb\" (UID: \"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38\") " pod="openstack/ssh-known-hosts-openstack-jhnpb" Nov 21 11:47:48 crc kubenswrapper[4972]: I1121 11:47:48.587559 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-jhnpb" Nov 21 11:47:49 crc kubenswrapper[4972]: I1121 11:47:49.156180 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-jhnpb"] Nov 21 11:47:50 crc kubenswrapper[4972]: I1121 11:47:50.186196 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-jhnpb" event={"ID":"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38","Type":"ContainerStarted","Data":"ed4a180d4564b0f541eada08cd3a7b993256455725ef26e4ed98995c6615f774"} Nov 21 11:47:50 crc kubenswrapper[4972]: I1121 11:47:50.186716 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-jhnpb" event={"ID":"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38","Type":"ContainerStarted","Data":"0001685803c3489ed3b268cddb08d8cb4c4b73aa443952d93684d25ccdc52bfd"} Nov 21 11:47:50 crc kubenswrapper[4972]: I1121 11:47:50.207011 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-jhnpb" podStartSLOduration=1.690252433 podStartE2EDuration="2.206991926s" podCreationTimestamp="2025-11-21 11:47:48 +0000 UTC" firstStartedPulling="2025-11-21 11:47:49.169399282 +0000 UTC m=+7614.278541780" lastFinishedPulling="2025-11-21 11:47:49.686138765 +0000 UTC m=+7614.795281273" observedRunningTime="2025-11-21 11:47:50.20333247 +0000 UTC m=+7615.312474998" watchObservedRunningTime="2025-11-21 11:47:50.206991926 +0000 UTC m=+7615.316134424" Nov 21 11:47:59 crc kubenswrapper[4972]: I1121 11:47:59.286892 4972 generic.go:334] "Generic (PLEG): container finished" podID="d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38" containerID="ed4a180d4564b0f541eada08cd3a7b993256455725ef26e4ed98995c6615f774" exitCode=0 Nov 21 11:47:59 crc kubenswrapper[4972]: I1121 11:47:59.287013 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-jhnpb" event={"ID":"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38","Type":"ContainerDied","Data":"ed4a180d4564b0f541eada08cd3a7b993256455725ef26e4ed98995c6615f774"} Nov 21 11:48:00 crc kubenswrapper[4972]: I1121 11:48:00.767920 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-jhnpb" Nov 21 11:48:00 crc kubenswrapper[4972]: I1121 11:48:00.948254 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-ssh-key-openstack-cell1\") pod \"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38\" (UID: \"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38\") " Nov 21 11:48:00 crc kubenswrapper[4972]: I1121 11:48:00.948415 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqcxg\" (UniqueName: \"kubernetes.io/projected/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-kube-api-access-vqcxg\") pod \"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38\" (UID: \"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38\") " Nov 21 11:48:00 crc kubenswrapper[4972]: I1121 11:48:00.948515 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-ceph\") pod \"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38\" (UID: \"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38\") " Nov 21 11:48:00 crc kubenswrapper[4972]: I1121 11:48:00.948680 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-inventory-0\") pod \"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38\" (UID: \"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38\") " Nov 21 11:48:00 crc kubenswrapper[4972]: I1121 11:48:00.964606 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-ceph" (OuterVolumeSpecName: "ceph") pod "d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38" (UID: "d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:48:00 crc kubenswrapper[4972]: I1121 11:48:00.965343 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-kube-api-access-vqcxg" (OuterVolumeSpecName: "kube-api-access-vqcxg") pod "d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38" (UID: "d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38"). InnerVolumeSpecName "kube-api-access-vqcxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:48:00 crc kubenswrapper[4972]: I1121 11:48:00.988474 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38" (UID: "d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.000645 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38" (UID: "d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.051789 4972 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.051953 4972 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.051970 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqcxg\" (UniqueName: \"kubernetes.io/projected/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-kube-api-access-vqcxg\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.052031 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.318485 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-jhnpb" event={"ID":"d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38","Type":"ContainerDied","Data":"0001685803c3489ed3b268cddb08d8cb4c4b73aa443952d93684d25ccdc52bfd"} Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.318526 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0001685803c3489ed3b268cddb08d8cb4c4b73aa443952d93684d25ccdc52bfd" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.318627 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-jhnpb" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.408441 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-45bfc"] Nov 21 11:48:01 crc kubenswrapper[4972]: E1121 11:48:01.409041 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38" containerName="ssh-known-hosts-openstack" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.409060 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38" containerName="ssh-known-hosts-openstack" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.409300 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38" containerName="ssh-known-hosts-openstack" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.410352 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-45bfc" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.413555 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g4l5l" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.413826 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.413891 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.414096 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.420750 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-45bfc"] Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.566993 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-inventory\") pod \"run-os-openstack-openstack-cell1-45bfc\" (UID: \"0489a8cf-eca4-430a-a3e2-73fcfdc437c1\") " pod="openstack/run-os-openstack-openstack-cell1-45bfc" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.567101 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nkdt\" (UniqueName: \"kubernetes.io/projected/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-kube-api-access-2nkdt\") pod \"run-os-openstack-openstack-cell1-45bfc\" (UID: \"0489a8cf-eca4-430a-a3e2-73fcfdc437c1\") " pod="openstack/run-os-openstack-openstack-cell1-45bfc" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.567140 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-ssh-key\") pod \"run-os-openstack-openstack-cell1-45bfc\" (UID: \"0489a8cf-eca4-430a-a3e2-73fcfdc437c1\") " pod="openstack/run-os-openstack-openstack-cell1-45bfc" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.567193 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-ceph\") pod \"run-os-openstack-openstack-cell1-45bfc\" (UID: \"0489a8cf-eca4-430a-a3e2-73fcfdc437c1\") " pod="openstack/run-os-openstack-openstack-cell1-45bfc" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.668942 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-ceph\") pod \"run-os-openstack-openstack-cell1-45bfc\" (UID: \"0489a8cf-eca4-430a-a3e2-73fcfdc437c1\") " pod="openstack/run-os-openstack-openstack-cell1-45bfc" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.669129 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-inventory\") pod \"run-os-openstack-openstack-cell1-45bfc\" (UID: \"0489a8cf-eca4-430a-a3e2-73fcfdc437c1\") " pod="openstack/run-os-openstack-openstack-cell1-45bfc" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.669295 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2nkdt\" (UniqueName: \"kubernetes.io/projected/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-kube-api-access-2nkdt\") pod \"run-os-openstack-openstack-cell1-45bfc\" (UID: \"0489a8cf-eca4-430a-a3e2-73fcfdc437c1\") " pod="openstack/run-os-openstack-openstack-cell1-45bfc" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.669381 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-ssh-key\") pod \"run-os-openstack-openstack-cell1-45bfc\" (UID: \"0489a8cf-eca4-430a-a3e2-73fcfdc437c1\") " pod="openstack/run-os-openstack-openstack-cell1-45bfc" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.674187 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-ssh-key\") pod \"run-os-openstack-openstack-cell1-45bfc\" (UID: \"0489a8cf-eca4-430a-a3e2-73fcfdc437c1\") " pod="openstack/run-os-openstack-openstack-cell1-45bfc" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.674593 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-ceph\") pod \"run-os-openstack-openstack-cell1-45bfc\" (UID: \"0489a8cf-eca4-430a-a3e2-73fcfdc437c1\") " pod="openstack/run-os-openstack-openstack-cell1-45bfc" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.676992 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-inventory\") pod \"run-os-openstack-openstack-cell1-45bfc\" (UID: \"0489a8cf-eca4-430a-a3e2-73fcfdc437c1\") " pod="openstack/run-os-openstack-openstack-cell1-45bfc" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.692581 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nkdt\" (UniqueName: \"kubernetes.io/projected/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-kube-api-access-2nkdt\") pod \"run-os-openstack-openstack-cell1-45bfc\" (UID: \"0489a8cf-eca4-430a-a3e2-73fcfdc437c1\") " pod="openstack/run-os-openstack-openstack-cell1-45bfc" Nov 21 11:48:01 crc kubenswrapper[4972]: I1121 11:48:01.734027 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-45bfc" Nov 21 11:48:02 crc kubenswrapper[4972]: W1121 11:48:02.278915 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0489a8cf_eca4_430a_a3e2_73fcfdc437c1.slice/crio-3814d5d66e1f802efaef2523fd3e6b8ed1d6d067b883a3304a5069cffadb8bfa WatchSource:0}: Error finding container 3814d5d66e1f802efaef2523fd3e6b8ed1d6d067b883a3304a5069cffadb8bfa: Status 404 returned error can't find the container with id 3814d5d66e1f802efaef2523fd3e6b8ed1d6d067b883a3304a5069cffadb8bfa Nov 21 11:48:02 crc kubenswrapper[4972]: I1121 11:48:02.281704 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-45bfc"] Nov 21 11:48:02 crc kubenswrapper[4972]: I1121 11:48:02.333026 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-45bfc" event={"ID":"0489a8cf-eca4-430a-a3e2-73fcfdc437c1","Type":"ContainerStarted","Data":"3814d5d66e1f802efaef2523fd3e6b8ed1d6d067b883a3304a5069cffadb8bfa"} Nov 21 11:48:03 crc kubenswrapper[4972]: I1121 11:48:03.344213 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-45bfc" event={"ID":"0489a8cf-eca4-430a-a3e2-73fcfdc437c1","Type":"ContainerStarted","Data":"139d5598c1fc5c208a9d451d8e0fe1646f0ecb9c9df3d4dafebcc20834d33e75"} Nov 21 11:48:03 crc kubenswrapper[4972]: I1121 11:48:03.369894 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-45bfc" podStartSLOduration=1.90806729 podStartE2EDuration="2.36987499s" podCreationTimestamp="2025-11-21 11:48:01 +0000 UTC" firstStartedPulling="2025-11-21 11:48:02.282349877 +0000 UTC m=+7627.391492415" lastFinishedPulling="2025-11-21 11:48:02.744157627 +0000 UTC m=+7627.853300115" observedRunningTime="2025-11-21 11:48:03.364326455 +0000 UTC m=+7628.473468953" watchObservedRunningTime="2025-11-21 11:48:03.36987499 +0000 UTC m=+7628.479017488" Nov 21 11:48:11 crc kubenswrapper[4972]: I1121 11:48:11.438608 4972 generic.go:334] "Generic (PLEG): container finished" podID="0489a8cf-eca4-430a-a3e2-73fcfdc437c1" containerID="139d5598c1fc5c208a9d451d8e0fe1646f0ecb9c9df3d4dafebcc20834d33e75" exitCode=0 Nov 21 11:48:11 crc kubenswrapper[4972]: I1121 11:48:11.438679 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-45bfc" event={"ID":"0489a8cf-eca4-430a-a3e2-73fcfdc437c1","Type":"ContainerDied","Data":"139d5598c1fc5c208a9d451d8e0fe1646f0ecb9c9df3d4dafebcc20834d33e75"} Nov 21 11:48:12 crc kubenswrapper[4972]: I1121 11:48:12.982946 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-45bfc" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.176635 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-ssh-key\") pod \"0489a8cf-eca4-430a-a3e2-73fcfdc437c1\" (UID: \"0489a8cf-eca4-430a-a3e2-73fcfdc437c1\") " Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.176919 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-ceph\") pod \"0489a8cf-eca4-430a-a3e2-73fcfdc437c1\" (UID: \"0489a8cf-eca4-430a-a3e2-73fcfdc437c1\") " Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.177161 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-inventory\") pod \"0489a8cf-eca4-430a-a3e2-73fcfdc437c1\" (UID: \"0489a8cf-eca4-430a-a3e2-73fcfdc437c1\") " Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.177206 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nkdt\" (UniqueName: \"kubernetes.io/projected/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-kube-api-access-2nkdt\") pod \"0489a8cf-eca4-430a-a3e2-73fcfdc437c1\" (UID: \"0489a8cf-eca4-430a-a3e2-73fcfdc437c1\") " Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.183534 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-kube-api-access-2nkdt" (OuterVolumeSpecName: "kube-api-access-2nkdt") pod "0489a8cf-eca4-430a-a3e2-73fcfdc437c1" (UID: "0489a8cf-eca4-430a-a3e2-73fcfdc437c1"). InnerVolumeSpecName "kube-api-access-2nkdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.197861 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-ceph" (OuterVolumeSpecName: "ceph") pod "0489a8cf-eca4-430a-a3e2-73fcfdc437c1" (UID: "0489a8cf-eca4-430a-a3e2-73fcfdc437c1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.217716 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-inventory" (OuterVolumeSpecName: "inventory") pod "0489a8cf-eca4-430a-a3e2-73fcfdc437c1" (UID: "0489a8cf-eca4-430a-a3e2-73fcfdc437c1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.221957 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0489a8cf-eca4-430a-a3e2-73fcfdc437c1" (UID: "0489a8cf-eca4-430a-a3e2-73fcfdc437c1"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.280645 4972 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.280696 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.280707 4972 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.280719 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nkdt\" (UniqueName: \"kubernetes.io/projected/0489a8cf-eca4-430a-a3e2-73fcfdc437c1-kube-api-access-2nkdt\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.469957 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-45bfc" event={"ID":"0489a8cf-eca4-430a-a3e2-73fcfdc437c1","Type":"ContainerDied","Data":"3814d5d66e1f802efaef2523fd3e6b8ed1d6d067b883a3304a5069cffadb8bfa"} Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.470604 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3814d5d66e1f802efaef2523fd3e6b8ed1d6d067b883a3304a5069cffadb8bfa" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.470012 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-45bfc" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.560744 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-p4vfc"] Nov 21 11:48:13 crc kubenswrapper[4972]: E1121 11:48:13.561499 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0489a8cf-eca4-430a-a3e2-73fcfdc437c1" containerName="run-os-openstack-openstack-cell1" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.561525 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0489a8cf-eca4-430a-a3e2-73fcfdc437c1" containerName="run-os-openstack-openstack-cell1" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.561876 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="0489a8cf-eca4-430a-a3e2-73fcfdc437c1" containerName="run-os-openstack-openstack-cell1" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.562909 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-p4vfc" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.572523 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.572573 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.572803 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g4l5l" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.572792 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.596367 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zll8b\" (UniqueName: \"kubernetes.io/projected/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-kube-api-access-zll8b\") pod \"reboot-os-openstack-openstack-cell1-p4vfc\" (UID: \"9269b8c1-5c0a-4bd4-9cea-210af0f082d0\") " pod="openstack/reboot-os-openstack-openstack-cell1-p4vfc" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.596458 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-ceph\") pod \"reboot-os-openstack-openstack-cell1-p4vfc\" (UID: \"9269b8c1-5c0a-4bd4-9cea-210af0f082d0\") " pod="openstack/reboot-os-openstack-openstack-cell1-p4vfc" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.596553 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-inventory\") pod \"reboot-os-openstack-openstack-cell1-p4vfc\" (UID: \"9269b8c1-5c0a-4bd4-9cea-210af0f082d0\") " pod="openstack/reboot-os-openstack-openstack-cell1-p4vfc" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.596582 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-p4vfc\" (UID: \"9269b8c1-5c0a-4bd4-9cea-210af0f082d0\") " pod="openstack/reboot-os-openstack-openstack-cell1-p4vfc" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.598245 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-p4vfc"] Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.698753 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-ceph\") pod \"reboot-os-openstack-openstack-cell1-p4vfc\" (UID: \"9269b8c1-5c0a-4bd4-9cea-210af0f082d0\") " pod="openstack/reboot-os-openstack-openstack-cell1-p4vfc" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.698967 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-inventory\") pod \"reboot-os-openstack-openstack-cell1-p4vfc\" (UID: \"9269b8c1-5c0a-4bd4-9cea-210af0f082d0\") " pod="openstack/reboot-os-openstack-openstack-cell1-p4vfc" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.699003 4972 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-p4vfc\" (UID: \"9269b8c1-5c0a-4bd4-9cea-210af0f082d0\") " pod="openstack/reboot-os-openstack-openstack-cell1-p4vfc" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.699356 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zll8b\" (UniqueName: \"kubernetes.io/projected/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-kube-api-access-zll8b\") pod \"reboot-os-openstack-openstack-cell1-p4vfc\" (UID: \"9269b8c1-5c0a-4bd4-9cea-210af0f082d0\") " pod="openstack/reboot-os-openstack-openstack-cell1-p4vfc" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.704324 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-p4vfc\" (UID: \"9269b8c1-5c0a-4bd4-9cea-210af0f082d0\") " pod="openstack/reboot-os-openstack-openstack-cell1-p4vfc" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.704447 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-ceph\") pod \"reboot-os-openstack-openstack-cell1-p4vfc\" (UID: \"9269b8c1-5c0a-4bd4-9cea-210af0f082d0\") " pod="openstack/reboot-os-openstack-openstack-cell1-p4vfc" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.706855 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-inventory\") pod \"reboot-os-openstack-openstack-cell1-p4vfc\" (UID: \"9269b8c1-5c0a-4bd4-9cea-210af0f082d0\") " pod="openstack/reboot-os-openstack-openstack-cell1-p4vfc" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.717155 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zll8b\" (UniqueName: \"kubernetes.io/projected/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-kube-api-access-zll8b\") pod \"reboot-os-openstack-openstack-cell1-p4vfc\" (UID: \"9269b8c1-5c0a-4bd4-9cea-210af0f082d0\") " pod="openstack/reboot-os-openstack-openstack-cell1-p4vfc" Nov 21 11:48:13 crc kubenswrapper[4972]: I1121 11:48:13.892197 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-p4vfc" Nov 21 11:48:14 crc kubenswrapper[4972]: I1121 11:48:14.465952 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-p4vfc"] Nov 21 11:48:14 crc kubenswrapper[4972]: I1121 11:48:14.487212 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-p4vfc" event={"ID":"9269b8c1-5c0a-4bd4-9cea-210af0f082d0","Type":"ContainerStarted","Data":"13f68c519d8c6c05f170f5617723331fe260c1fe8c09317d9b384b2d39656a92"} Nov 21 11:48:15 crc kubenswrapper[4972]: I1121 11:48:15.499408 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-p4vfc" event={"ID":"9269b8c1-5c0a-4bd4-9cea-210af0f082d0","Type":"ContainerStarted","Data":"57a5bfe72fad68f1c65f20aa308b571c7c62efc93c73c0ba69c524252e279b3c"} Nov 21 11:48:15 crc kubenswrapper[4972]: I1121 11:48:15.530947 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-p4vfc" podStartSLOduration=1.980583936 podStartE2EDuration="2.530926071s" podCreationTimestamp="2025-11-21 11:48:13 +0000 UTC" firstStartedPulling="2025-11-21 11:48:14.470373544 +0000 UTC m=+7639.579516052" lastFinishedPulling="2025-11-21 11:48:15.020715689 +0000 UTC m=+7640.129858187" observedRunningTime="2025-11-21 11:48:15.517446807 +0000 UTC m=+7640.626589315" watchObservedRunningTime="2025-11-21 11:48:15.530926071 +0000 UTC m=+7640.640068569" Nov 21 11:48:31 crc kubenswrapper[4972]: I1121 11:48:31.699822 4972 generic.go:334] "Generic (PLEG): container finished" podID="9269b8c1-5c0a-4bd4-9cea-210af0f082d0" containerID="57a5bfe72fad68f1c65f20aa308b571c7c62efc93c73c0ba69c524252e279b3c" exitCode=0 Nov 21 11:48:31 crc kubenswrapper[4972]: I1121 11:48:31.699926 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-p4vfc" event={"ID":"9269b8c1-5c0a-4bd4-9cea-210af0f082d0","Type":"ContainerDied","Data":"57a5bfe72fad68f1c65f20aa308b571c7c62efc93c73c0ba69c524252e279b3c"} Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.226928 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-p4vfc" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.348269 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zll8b\" (UniqueName: \"kubernetes.io/projected/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-kube-api-access-zll8b\") pod \"9269b8c1-5c0a-4bd4-9cea-210af0f082d0\" (UID: \"9269b8c1-5c0a-4bd4-9cea-210af0f082d0\") " Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.348363 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-ceph\") pod \"9269b8c1-5c0a-4bd4-9cea-210af0f082d0\" (UID: \"9269b8c1-5c0a-4bd4-9cea-210af0f082d0\") " Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.348431 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-inventory\") pod \"9269b8c1-5c0a-4bd4-9cea-210af0f082d0\" (UID: \"9269b8c1-5c0a-4bd4-9cea-210af0f082d0\") " Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.348609 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-ssh-key\") pod \"9269b8c1-5c0a-4bd4-9cea-210af0f082d0\" (UID: \"9269b8c1-5c0a-4bd4-9cea-210af0f082d0\") " Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.358487 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-kube-api-access-zll8b" (OuterVolumeSpecName: "kube-api-access-zll8b") pod "9269b8c1-5c0a-4bd4-9cea-210af0f082d0" (UID: "9269b8c1-5c0a-4bd4-9cea-210af0f082d0"). InnerVolumeSpecName "kube-api-access-zll8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.358756 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-ceph" (OuterVolumeSpecName: "ceph") pod "9269b8c1-5c0a-4bd4-9cea-210af0f082d0" (UID: "9269b8c1-5c0a-4bd4-9cea-210af0f082d0"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.379028 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9269b8c1-5c0a-4bd4-9cea-210af0f082d0" (UID: "9269b8c1-5c0a-4bd4-9cea-210af0f082d0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.393745 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-inventory" (OuterVolumeSpecName: "inventory") pod "9269b8c1-5c0a-4bd4-9cea-210af0f082d0" (UID: "9269b8c1-5c0a-4bd4-9cea-210af0f082d0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.452096 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zll8b\" (UniqueName: \"kubernetes.io/projected/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-kube-api-access-zll8b\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.452148 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.452169 4972 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.452184 4972 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9269b8c1-5c0a-4bd4-9cea-210af0f082d0-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.726641 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-p4vfc" event={"ID":"9269b8c1-5c0a-4bd4-9cea-210af0f082d0","Type":"ContainerDied","Data":"13f68c519d8c6c05f170f5617723331fe260c1fe8c09317d9b384b2d39656a92"} Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.727126 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13f68c519d8c6c05f170f5617723331fe260c1fe8c09317d9b384b2d39656a92" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.726694 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-p4vfc" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.843924 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-46gr5"] Nov 21 11:48:33 crc kubenswrapper[4972]: E1121 11:48:33.844706 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9269b8c1-5c0a-4bd4-9cea-210af0f082d0" containerName="reboot-os-openstack-openstack-cell1" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.844734 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="9269b8c1-5c0a-4bd4-9cea-210af0f082d0" containerName="reboot-os-openstack-openstack-cell1" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.845063 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="9269b8c1-5c0a-4bd4-9cea-210af0f082d0" containerName="reboot-os-openstack-openstack-cell1" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.846383 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.850330 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.850563 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.850801 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g4l5l" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.851140 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.866685 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzk46\" (UniqueName: \"kubernetes.io/projected/96c9f1ec-3df3-45ef-be1e-b12185862f03-kube-api-access-pzk46\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.867108 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.867372 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.867503 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.868376 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.868506 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " 
pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.868622 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.868796 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.869106 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.870688 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-inventory\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.871364 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-ceph\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.871531 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-ssh-key\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.882166 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-46gr5"] Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.974102 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.974213 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.974251 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.974275 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.974298 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.974324 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.974382 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.974459 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-inventory\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.974532 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-ceph\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.974576 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-ssh-key\") pod 
\"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.974601 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzk46\" (UniqueName: \"kubernetes.io/projected/96c9f1ec-3df3-45ef-be1e-b12185862f03-kube-api-access-pzk46\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.974645 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.981999 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.982953 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.983404 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.983538 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-ssh-key\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.983746 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.984039 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: 
\"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.984515 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-inventory\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.989788 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.990512 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.998674 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-ceph\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:33 crc kubenswrapper[4972]: I1121 11:48:33.999567 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:34 crc kubenswrapper[4972]: I1121 11:48:34.002876 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzk46\" (UniqueName: \"kubernetes.io/projected/96c9f1ec-3df3-45ef-be1e-b12185862f03-kube-api-access-pzk46\") pod \"install-certs-openstack-openstack-cell1-46gr5\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:34 crc kubenswrapper[4972]: I1121 11:48:34.182945 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:34 crc kubenswrapper[4972]: I1121 11:48:34.747950 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-46gr5"] Nov 21 11:48:34 crc kubenswrapper[4972]: W1121 11:48:34.750143 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96c9f1ec_3df3_45ef_be1e_b12185862f03.slice/crio-2cf8de68c254210fa505531a4a4fd3fd4d074b507c9401e4ac066a8fc2b88a82 WatchSource:0}: Error finding container 2cf8de68c254210fa505531a4a4fd3fd4d074b507c9401e4ac066a8fc2b88a82: Status 404 returned error can't find the container with id 2cf8de68c254210fa505531a4a4fd3fd4d074b507c9401e4ac066a8fc2b88a82 Nov 21 11:48:35 crc kubenswrapper[4972]: I1121 11:48:35.753091 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-46gr5" event={"ID":"96c9f1ec-3df3-45ef-be1e-b12185862f03","Type":"ContainerStarted","Data":"9271a7a1bae3482498b33bd740f6099a25ca03db22e4109ddb8b95e2f0492bad"} Nov 21 11:48:35 crc kubenswrapper[4972]: I1121 11:48:35.754003 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-46gr5" event={"ID":"96c9f1ec-3df3-45ef-be1e-b12185862f03","Type":"ContainerStarted","Data":"2cf8de68c254210fa505531a4a4fd3fd4d074b507c9401e4ac066a8fc2b88a82"} Nov 21 11:48:35 crc kubenswrapper[4972]: I1121 11:48:35.802664 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell1-46gr5" podStartSLOduration=2.124491922 podStartE2EDuration="2.80263212s" podCreationTimestamp="2025-11-21 11:48:33 +0000 UTC" firstStartedPulling="2025-11-21 11:48:34.753034902 +0000 UTC m=+7659.862177430" lastFinishedPulling="2025-11-21 11:48:35.43117513 +0000 UTC m=+7660.540317628" observedRunningTime="2025-11-21 11:48:35.782656696 +0000 UTC m=+7660.891799204" watchObservedRunningTime="2025-11-21 11:48:35.80263212 +0000 UTC m=+7660.911774618" Nov 21 11:48:53 crc kubenswrapper[4972]: I1121 11:48:53.959864 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-46gr5" event={"ID":"96c9f1ec-3df3-45ef-be1e-b12185862f03","Type":"ContainerDied","Data":"9271a7a1bae3482498b33bd740f6099a25ca03db22e4109ddb8b95e2f0492bad"} Nov 21 11:48:53 crc kubenswrapper[4972]: I1121 11:48:53.961371 4972 generic.go:334] "Generic (PLEG): container finished" podID="96c9f1ec-3df3-45ef-be1e-b12185862f03" containerID="9271a7a1bae3482498b33bd740f6099a25ca03db22e4109ddb8b95e2f0492bad" exitCode=0 Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.540898 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.592708 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-ceph\") pod \"96c9f1ec-3df3-45ef-be1e-b12185862f03\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.592749 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-ssh-key\") pod \"96c9f1ec-3df3-45ef-be1e-b12185862f03\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.592791 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzk46\" (UniqueName: \"kubernetes.io/projected/96c9f1ec-3df3-45ef-be1e-b12185862f03-kube-api-access-pzk46\") pod \"96c9f1ec-3df3-45ef-be1e-b12185862f03\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.592815 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-bootstrap-combined-ca-bundle\") pod \"96c9f1ec-3df3-45ef-be1e-b12185862f03\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.592854 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-telemetry-combined-ca-bundle\") pod \"96c9f1ec-3df3-45ef-be1e-b12185862f03\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.592898 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-inventory\") pod \"96c9f1ec-3df3-45ef-be1e-b12185862f03\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.592921 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-neutron-metadata-combined-ca-bundle\") pod \"96c9f1ec-3df3-45ef-be1e-b12185862f03\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.593031 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-neutron-sriov-combined-ca-bundle\") pod \"96c9f1ec-3df3-45ef-be1e-b12185862f03\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.593128 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-libvirt-combined-ca-bundle\") pod \"96c9f1ec-3df3-45ef-be1e-b12185862f03\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.593200 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-neutron-dhcp-combined-ca-bundle\") pod \"96c9f1ec-3df3-45ef-be1e-b12185862f03\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.593214 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-nova-combined-ca-bundle\") pod \"96c9f1ec-3df3-45ef-be1e-b12185862f03\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.593246 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-ovn-combined-ca-bundle\") pod \"96c9f1ec-3df3-45ef-be1e-b12185862f03\" (UID: \"96c9f1ec-3df3-45ef-be1e-b12185862f03\") " Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.598992 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "96c9f1ec-3df3-45ef-be1e-b12185862f03" (UID: "96c9f1ec-3df3-45ef-be1e-b12185862f03"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.599466 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "96c9f1ec-3df3-45ef-be1e-b12185862f03" (UID: "96c9f1ec-3df3-45ef-be1e-b12185862f03"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.600293 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "96c9f1ec-3df3-45ef-be1e-b12185862f03" (UID: "96c9f1ec-3df3-45ef-be1e-b12185862f03"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.600873 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "96c9f1ec-3df3-45ef-be1e-b12185862f03" (UID: "96c9f1ec-3df3-45ef-be1e-b12185862f03"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.602167 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-ceph" (OuterVolumeSpecName: "ceph") pod "96c9f1ec-3df3-45ef-be1e-b12185862f03" (UID: "96c9f1ec-3df3-45ef-be1e-b12185862f03"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.604342 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96c9f1ec-3df3-45ef-be1e-b12185862f03-kube-api-access-pzk46" (OuterVolumeSpecName: "kube-api-access-pzk46") pod "96c9f1ec-3df3-45ef-be1e-b12185862f03" (UID: "96c9f1ec-3df3-45ef-be1e-b12185862f03"). InnerVolumeSpecName "kube-api-access-pzk46". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.606079 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "96c9f1ec-3df3-45ef-be1e-b12185862f03" (UID: "96c9f1ec-3df3-45ef-be1e-b12185862f03"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.617402 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "96c9f1ec-3df3-45ef-be1e-b12185862f03" (UID: "96c9f1ec-3df3-45ef-be1e-b12185862f03"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.623376 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "96c9f1ec-3df3-45ef-be1e-b12185862f03" (UID: "96c9f1ec-3df3-45ef-be1e-b12185862f03"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.626814 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "96c9f1ec-3df3-45ef-be1e-b12185862f03" (UID: "96c9f1ec-3df3-45ef-be1e-b12185862f03"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.646511 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-inventory" (OuterVolumeSpecName: "inventory") pod "96c9f1ec-3df3-45ef-be1e-b12185862f03" (UID: "96c9f1ec-3df3-45ef-be1e-b12185862f03"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.647386 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "96c9f1ec-3df3-45ef-be1e-b12185862f03" (UID: "96c9f1ec-3df3-45ef-be1e-b12185862f03"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.696073 4972 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.696105 4972 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.696118 4972 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.696172 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.696183 4972 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.696194 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzk46\" (UniqueName: \"kubernetes.io/projected/96c9f1ec-3df3-45ef-be1e-b12185862f03-kube-api-access-pzk46\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.696205 4972 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.696215 4972 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.696226 4972 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.696237 4972 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.696249 4972 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.696260 4972 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c9f1ec-3df3-45ef-be1e-b12185862f03-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.990200 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-certs-openstack-openstack-cell1-46gr5" event={"ID":"96c9f1ec-3df3-45ef-be1e-b12185862f03","Type":"ContainerDied","Data":"2cf8de68c254210fa505531a4a4fd3fd4d074b507c9401e4ac066a8fc2b88a82"} Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.990605 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cf8de68c254210fa505531a4a4fd3fd4d074b507c9401e4ac066a8fc2b88a82" Nov 21 11:48:55 crc kubenswrapper[4972]: I1121 11:48:55.990294 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-46gr5" Nov 21 11:48:56 crc kubenswrapper[4972]: I1121 11:48:56.095535 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-qwz69"] Nov 21 11:48:56 crc kubenswrapper[4972]: E1121 11:48:56.096012 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c9f1ec-3df3-45ef-be1e-b12185862f03" containerName="install-certs-openstack-openstack-cell1" Nov 21 11:48:56 crc kubenswrapper[4972]: I1121 11:48:56.096037 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c9f1ec-3df3-45ef-be1e-b12185862f03" containerName="install-certs-openstack-openstack-cell1" Nov 21 11:48:56 crc kubenswrapper[4972]: I1121 11:48:56.096384 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c9f1ec-3df3-45ef-be1e-b12185862f03" containerName="install-certs-openstack-openstack-cell1" Nov 21 11:48:56 crc kubenswrapper[4972]: I1121 11:48:56.097395 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-qwz69" Nov 21 11:48:56 crc kubenswrapper[4972]: I1121 11:48:56.100886 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 21 11:48:56 crc kubenswrapper[4972]: I1121 11:48:56.101293 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 21 11:48:56 crc kubenswrapper[4972]: I1121 11:48:56.105669 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-qwz69"] Nov 21 11:48:56 crc kubenswrapper[4972]: I1121 11:48:56.105700 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 11:48:56 crc kubenswrapper[4972]: I1121 11:48:56.105799 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g4l5l" Nov 21 11:48:56 crc kubenswrapper[4972]: I1121 11:48:56.206767 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfae87d2-f93e-49af-9b66-59d9e7208dd1-inventory\") pod \"ceph-client-openstack-openstack-cell1-qwz69\" (UID: \"cfae87d2-f93e-49af-9b66-59d9e7208dd1\") " pod="openstack/ceph-client-openstack-openstack-cell1-qwz69" Nov 21 11:48:56 crc kubenswrapper[4972]: I1121 11:48:56.207128 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cfae87d2-f93e-49af-9b66-59d9e7208dd1-ceph\") pod \"ceph-client-openstack-openstack-cell1-qwz69\" (UID: \"cfae87d2-f93e-49af-9b66-59d9e7208dd1\") " pod="openstack/ceph-client-openstack-openstack-cell1-qwz69" Nov 21 11:48:56 crc kubenswrapper[4972]: I1121 11:48:56.207214 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-lqgb4\" (UniqueName: \"kubernetes.io/projected/cfae87d2-f93e-49af-9b66-59d9e7208dd1-kube-api-access-lqgb4\") pod \"ceph-client-openstack-openstack-cell1-qwz69\" (UID: \"cfae87d2-f93e-49af-9b66-59d9e7208dd1\") " pod="openstack/ceph-client-openstack-openstack-cell1-qwz69" Nov 21 11:48:56 crc kubenswrapper[4972]: I1121 11:48:56.207441 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfae87d2-f93e-49af-9b66-59d9e7208dd1-ssh-key\") pod \"ceph-client-openstack-openstack-cell1-qwz69\" (UID: \"cfae87d2-f93e-49af-9b66-59d9e7208dd1\") " pod="openstack/ceph-client-openstack-openstack-cell1-qwz69" Nov 21 11:48:56 crc kubenswrapper[4972]: I1121 11:48:56.310540 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfae87d2-f93e-49af-9b66-59d9e7208dd1-inventory\") pod \"ceph-client-openstack-openstack-cell1-qwz69\" (UID: \"cfae87d2-f93e-49af-9b66-59d9e7208dd1\") " pod="openstack/ceph-client-openstack-openstack-cell1-qwz69" Nov 21 11:48:56 crc kubenswrapper[4972]: I1121 11:48:56.310697 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cfae87d2-f93e-49af-9b66-59d9e7208dd1-ceph\") pod \"ceph-client-openstack-openstack-cell1-qwz69\" (UID: \"cfae87d2-f93e-49af-9b66-59d9e7208dd1\") " pod="openstack/ceph-client-openstack-openstack-cell1-qwz69" Nov 21 11:48:56 crc kubenswrapper[4972]: I1121 11:48:56.310749 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqgb4\" (UniqueName: \"kubernetes.io/projected/cfae87d2-f93e-49af-9b66-59d9e7208dd1-kube-api-access-lqgb4\") pod \"ceph-client-openstack-openstack-cell1-qwz69\" (UID: \"cfae87d2-f93e-49af-9b66-59d9e7208dd1\") " pod="openstack/ceph-client-openstack-openstack-cell1-qwz69" Nov 21 11:48:56 crc kubenswrapper[4972]: I1121 11:48:56.310910 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfae87d2-f93e-49af-9b66-59d9e7208dd1-ssh-key\") pod \"ceph-client-openstack-openstack-cell1-qwz69\" (UID: \"cfae87d2-f93e-49af-9b66-59d9e7208dd1\") " pod="openstack/ceph-client-openstack-openstack-cell1-qwz69" Nov 21 11:48:56 crc kubenswrapper[4972]: I1121 11:48:56.316010 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cfae87d2-f93e-49af-9b66-59d9e7208dd1-ceph\") pod \"ceph-client-openstack-openstack-cell1-qwz69\" (UID: \"cfae87d2-f93e-49af-9b66-59d9e7208dd1\") " pod="openstack/ceph-client-openstack-openstack-cell1-qwz69" Nov 21 11:48:56 crc kubenswrapper[4972]: I1121 11:48:56.316538 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfae87d2-f93e-49af-9b66-59d9e7208dd1-inventory\") pod \"ceph-client-openstack-openstack-cell1-qwz69\" (UID: \"cfae87d2-f93e-49af-9b66-59d9e7208dd1\") " pod="openstack/ceph-client-openstack-openstack-cell1-qwz69" Nov 21 11:48:56 crc kubenswrapper[4972]: I1121 11:48:56.316729 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfae87d2-f93e-49af-9b66-59d9e7208dd1-ssh-key\") pod \"ceph-client-openstack-openstack-cell1-qwz69\" (UID: \"cfae87d2-f93e-49af-9b66-59d9e7208dd1\") " pod="openstack/ceph-client-openstack-openstack-cell1-qwz69" Nov 21 11:48:56 crc 
kubenswrapper[4972]: I1121 11:48:56.340047 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqgb4\" (UniqueName: \"kubernetes.io/projected/cfae87d2-f93e-49af-9b66-59d9e7208dd1-kube-api-access-lqgb4\") pod \"ceph-client-openstack-openstack-cell1-qwz69\" (UID: \"cfae87d2-f93e-49af-9b66-59d9e7208dd1\") " pod="openstack/ceph-client-openstack-openstack-cell1-qwz69" Nov 21 11:48:56 crc kubenswrapper[4972]: I1121 11:48:56.416328 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-qwz69" Nov 21 11:48:57 crc kubenswrapper[4972]: I1121 11:48:57.003230 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-qwz69"] Nov 21 11:48:58 crc kubenswrapper[4972]: I1121 11:48:58.011688 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-qwz69" event={"ID":"cfae87d2-f93e-49af-9b66-59d9e7208dd1","Type":"ContainerStarted","Data":"dc324354adc460248ed0df83ca4633d7a027c7f14343b6e2281855ac16f162e1"} Nov 21 11:48:58 crc kubenswrapper[4972]: I1121 11:48:58.012304 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-qwz69" event={"ID":"cfae87d2-f93e-49af-9b66-59d9e7208dd1","Type":"ContainerStarted","Data":"f77828ae31a1959c41c61e120b12e3432e4f72e6a2ed0d039b8d7f1b3353ddde"} Nov 21 11:48:58 crc kubenswrapper[4972]: I1121 11:48:58.029858 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-openstack-openstack-cell1-qwz69" podStartSLOduration=1.552442044 podStartE2EDuration="2.029823894s" podCreationTimestamp="2025-11-21 11:48:56 +0000 UTC" firstStartedPulling="2025-11-21 11:48:57.00328321 +0000 UTC m=+7682.112425708" lastFinishedPulling="2025-11-21 11:48:57.48066506 +0000 UTC m=+7682.589807558" observedRunningTime="2025-11-21 11:48:58.029361102 +0000 UTC m=+7683.138503620" watchObservedRunningTime="2025-11-21 11:48:58.029823894 +0000 UTC m=+7683.138966392" Nov 21 11:49:03 crc kubenswrapper[4972]: I1121 11:49:03.069392 4972 generic.go:334] "Generic (PLEG): container finished" podID="cfae87d2-f93e-49af-9b66-59d9e7208dd1" containerID="dc324354adc460248ed0df83ca4633d7a027c7f14343b6e2281855ac16f162e1" exitCode=0 Nov 21 11:49:03 crc kubenswrapper[4972]: I1121 11:49:03.069517 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-qwz69" event={"ID":"cfae87d2-f93e-49af-9b66-59d9e7208dd1","Type":"ContainerDied","Data":"dc324354adc460248ed0df83ca4633d7a027c7f14343b6e2281855ac16f162e1"} Nov 21 11:49:04 crc kubenswrapper[4972]: I1121 11:49:04.590926 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-qwz69" Nov 21 11:49:04 crc kubenswrapper[4972]: I1121 11:49:04.741597 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfae87d2-f93e-49af-9b66-59d9e7208dd1-inventory\") pod \"cfae87d2-f93e-49af-9b66-59d9e7208dd1\" (UID: \"cfae87d2-f93e-49af-9b66-59d9e7208dd1\") " Nov 21 11:49:04 crc kubenswrapper[4972]: I1121 11:49:04.741633 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfae87d2-f93e-49af-9b66-59d9e7208dd1-ssh-key\") pod \"cfae87d2-f93e-49af-9b66-59d9e7208dd1\" (UID: \"cfae87d2-f93e-49af-9b66-59d9e7208dd1\") " Nov 21 11:49:04 crc kubenswrapper[4972]: I1121 11:49:04.741741 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqgb4\" (UniqueName: \"kubernetes.io/projected/cfae87d2-f93e-49af-9b66-59d9e7208dd1-kube-api-access-lqgb4\") pod \"cfae87d2-f93e-49af-9b66-59d9e7208dd1\" (UID: \"cfae87d2-f93e-49af-9b66-59d9e7208dd1\") " Nov 21 11:49:04 crc kubenswrapper[4972]: I1121 11:49:04.741927 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cfae87d2-f93e-49af-9b66-59d9e7208dd1-ceph\") pod \"cfae87d2-f93e-49af-9b66-59d9e7208dd1\" (UID: \"cfae87d2-f93e-49af-9b66-59d9e7208dd1\") " Nov 21 11:49:04 crc kubenswrapper[4972]: I1121 11:49:04.748282 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfae87d2-f93e-49af-9b66-59d9e7208dd1-ceph" (OuterVolumeSpecName: "ceph") pod "cfae87d2-f93e-49af-9b66-59d9e7208dd1" (UID: "cfae87d2-f93e-49af-9b66-59d9e7208dd1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:49:04 crc kubenswrapper[4972]: I1121 11:49:04.750644 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfae87d2-f93e-49af-9b66-59d9e7208dd1-kube-api-access-lqgb4" (OuterVolumeSpecName: "kube-api-access-lqgb4") pod "cfae87d2-f93e-49af-9b66-59d9e7208dd1" (UID: "cfae87d2-f93e-49af-9b66-59d9e7208dd1"). InnerVolumeSpecName "kube-api-access-lqgb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:49:04 crc kubenswrapper[4972]: I1121 11:49:04.794523 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfae87d2-f93e-49af-9b66-59d9e7208dd1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cfae87d2-f93e-49af-9b66-59d9e7208dd1" (UID: "cfae87d2-f93e-49af-9b66-59d9e7208dd1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:49:04 crc kubenswrapper[4972]: I1121 11:49:04.795366 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfae87d2-f93e-49af-9b66-59d9e7208dd1-inventory" (OuterVolumeSpecName: "inventory") pod "cfae87d2-f93e-49af-9b66-59d9e7208dd1" (UID: "cfae87d2-f93e-49af-9b66-59d9e7208dd1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:49:04 crc kubenswrapper[4972]: I1121 11:49:04.845631 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqgb4\" (UniqueName: \"kubernetes.io/projected/cfae87d2-f93e-49af-9b66-59d9e7208dd1-kube-api-access-lqgb4\") on node \"crc\" DevicePath \"\"" Nov 21 11:49:04 crc kubenswrapper[4972]: I1121 11:49:04.846006 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cfae87d2-f93e-49af-9b66-59d9e7208dd1-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 11:49:04 crc kubenswrapper[4972]: I1121 11:49:04.846031 4972 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfae87d2-f93e-49af-9b66-59d9e7208dd1-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 11:49:04 crc kubenswrapper[4972]: I1121 11:49:04.846048 4972 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfae87d2-f93e-49af-9b66-59d9e7208dd1-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.098229 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-qwz69" event={"ID":"cfae87d2-f93e-49af-9b66-59d9e7208dd1","Type":"ContainerDied","Data":"f77828ae31a1959c41c61e120b12e3432e4f72e6a2ed0d039b8d7f1b3353ddde"} Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.098275 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f77828ae31a1959c41c61e120b12e3432e4f72e6a2ed0d039b8d7f1b3353ddde" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.098304 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-qwz69" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.193005 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-mbltd"] Nov 21 11:49:05 crc kubenswrapper[4972]: E1121 11:49:05.193797 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfae87d2-f93e-49af-9b66-59d9e7208dd1" containerName="ceph-client-openstack-openstack-cell1" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.193912 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfae87d2-f93e-49af-9b66-59d9e7208dd1" containerName="ceph-client-openstack-openstack-cell1" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.194270 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfae87d2-f93e-49af-9b66-59d9e7208dd1" containerName="ceph-client-openstack-openstack-cell1" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.195295 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.199226 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.199297 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.199653 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.199667 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g4l5l" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.200068 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.213602 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-mbltd"] Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.359696 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-ssh-key\") pod \"ovn-openstack-openstack-cell1-mbltd\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.360463 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhsp2\" (UniqueName: \"kubernetes.io/projected/a889438a-ad36-4ef0-9994-da151114b722-kube-api-access-hhsp2\") pod \"ovn-openstack-openstack-cell1-mbltd\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.360902 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-ceph\") pod \"ovn-openstack-openstack-cell1-mbltd\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.361066 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a889438a-ad36-4ef0-9994-da151114b722-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-mbltd\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.361116 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-mbltd\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.361282 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-inventory\") pod \"ovn-openstack-openstack-cell1-mbltd\" (UID: 
\"a889438a-ad36-4ef0-9994-da151114b722\") " pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.463885 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-ceph\") pod \"ovn-openstack-openstack-cell1-mbltd\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.463990 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a889438a-ad36-4ef0-9994-da151114b722-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-mbltd\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.464025 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-mbltd\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.464075 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-inventory\") pod \"ovn-openstack-openstack-cell1-mbltd\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.464121 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-ssh-key\") pod \"ovn-openstack-openstack-cell1-mbltd\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.464234 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhsp2\" (UniqueName: \"kubernetes.io/projected/a889438a-ad36-4ef0-9994-da151114b722-kube-api-access-hhsp2\") pod \"ovn-openstack-openstack-cell1-mbltd\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.465472 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a889438a-ad36-4ef0-9994-da151114b722-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-mbltd\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.468632 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-inventory\") pod \"ovn-openstack-openstack-cell1-mbltd\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.469952 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-ssh-key\") pod 
\"ovn-openstack-openstack-cell1-mbltd\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.470261 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-ceph\") pod \"ovn-openstack-openstack-cell1-mbltd\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.470382 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-mbltd\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.483882 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhsp2\" (UniqueName: \"kubernetes.io/projected/a889438a-ad36-4ef0-9994-da151114b722-kube-api-access-hhsp2\") pod \"ovn-openstack-openstack-cell1-mbltd\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:49:05 crc kubenswrapper[4972]: I1121 11:49:05.532350 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:49:06 crc kubenswrapper[4972]: I1121 11:49:06.151583 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-mbltd"] Nov 21 11:49:07 crc kubenswrapper[4972]: I1121 11:49:07.127909 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-mbltd" event={"ID":"a889438a-ad36-4ef0-9994-da151114b722","Type":"ContainerStarted","Data":"8903d020e551c14b82c283e817f5753724795a79e3d88fdd0bf057fa200aa4e0"} Nov 21 11:49:07 crc kubenswrapper[4972]: I1121 11:49:07.128567 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-mbltd" event={"ID":"a889438a-ad36-4ef0-9994-da151114b722","Type":"ContainerStarted","Data":"270a86972e1904ae784f6a14ab290a4a26f9c40dae16bf26fd9f0c3317911291"} Nov 21 11:49:07 crc kubenswrapper[4972]: I1121 11:49:07.173471 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-mbltd" podStartSLOduration=1.733273202 podStartE2EDuration="2.173448245s" podCreationTimestamp="2025-11-21 11:49:05 +0000 UTC" firstStartedPulling="2025-11-21 11:49:06.154512542 +0000 UTC m=+7691.263655080" lastFinishedPulling="2025-11-21 11:49:06.594687625 +0000 UTC m=+7691.703830123" observedRunningTime="2025-11-21 11:49:07.159858379 +0000 UTC m=+7692.269000907" watchObservedRunningTime="2025-11-21 11:49:07.173448245 +0000 UTC m=+7692.282590763" Nov 21 11:49:25 crc kubenswrapper[4972]: I1121 11:49:25.336526 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gm6bw"] Nov 21 11:49:25 crc kubenswrapper[4972]: I1121 11:49:25.339425 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gm6bw" Nov 21 11:49:25 crc kubenswrapper[4972]: I1121 11:49:25.385428 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gm6bw"] Nov 21 11:49:25 crc kubenswrapper[4972]: I1121 11:49:25.441193 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71ebdad2-0376-43e6-bbeb-795fe1193eea-catalog-content\") pod \"redhat-operators-gm6bw\" (UID: \"71ebdad2-0376-43e6-bbeb-795fe1193eea\") " pod="openshift-marketplace/redhat-operators-gm6bw" Nov 21 11:49:25 crc kubenswrapper[4972]: I1121 11:49:25.441249 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4vj7\" (UniqueName: \"kubernetes.io/projected/71ebdad2-0376-43e6-bbeb-795fe1193eea-kube-api-access-t4vj7\") pod \"redhat-operators-gm6bw\" (UID: \"71ebdad2-0376-43e6-bbeb-795fe1193eea\") " pod="openshift-marketplace/redhat-operators-gm6bw" Nov 21 11:49:25 crc kubenswrapper[4972]: I1121 11:49:25.441344 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71ebdad2-0376-43e6-bbeb-795fe1193eea-utilities\") pod \"redhat-operators-gm6bw\" (UID: \"71ebdad2-0376-43e6-bbeb-795fe1193eea\") " pod="openshift-marketplace/redhat-operators-gm6bw" Nov 21 11:49:25 crc kubenswrapper[4972]: I1121 11:49:25.543232 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71ebdad2-0376-43e6-bbeb-795fe1193eea-catalog-content\") pod \"redhat-operators-gm6bw\" (UID: \"71ebdad2-0376-43e6-bbeb-795fe1193eea\") " pod="openshift-marketplace/redhat-operators-gm6bw" Nov 21 11:49:25 crc kubenswrapper[4972]: I1121 11:49:25.543283 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4vj7\" (UniqueName: \"kubernetes.io/projected/71ebdad2-0376-43e6-bbeb-795fe1193eea-kube-api-access-t4vj7\") pod \"redhat-operators-gm6bw\" (UID: \"71ebdad2-0376-43e6-bbeb-795fe1193eea\") " pod="openshift-marketplace/redhat-operators-gm6bw" Nov 21 11:49:25 crc kubenswrapper[4972]: I1121 11:49:25.543328 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71ebdad2-0376-43e6-bbeb-795fe1193eea-utilities\") pod \"redhat-operators-gm6bw\" (UID: \"71ebdad2-0376-43e6-bbeb-795fe1193eea\") " pod="openshift-marketplace/redhat-operators-gm6bw" Nov 21 11:49:25 crc kubenswrapper[4972]: I1121 11:49:25.543802 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71ebdad2-0376-43e6-bbeb-795fe1193eea-catalog-content\") pod \"redhat-operators-gm6bw\" (UID: \"71ebdad2-0376-43e6-bbeb-795fe1193eea\") " pod="openshift-marketplace/redhat-operators-gm6bw" Nov 21 11:49:25 crc kubenswrapper[4972]: I1121 11:49:25.544099 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71ebdad2-0376-43e6-bbeb-795fe1193eea-utilities\") pod \"redhat-operators-gm6bw\" (UID: \"71ebdad2-0376-43e6-bbeb-795fe1193eea\") " pod="openshift-marketplace/redhat-operators-gm6bw" Nov 21 11:49:25 crc kubenswrapper[4972]: I1121 11:49:25.565516 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-t4vj7\" (UniqueName: \"kubernetes.io/projected/71ebdad2-0376-43e6-bbeb-795fe1193eea-kube-api-access-t4vj7\") pod \"redhat-operators-gm6bw\" (UID: \"71ebdad2-0376-43e6-bbeb-795fe1193eea\") " pod="openshift-marketplace/redhat-operators-gm6bw" Nov 21 11:49:25 crc kubenswrapper[4972]: I1121 11:49:25.673187 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gm6bw" Nov 21 11:49:26 crc kubenswrapper[4972]: I1121 11:49:26.190479 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gm6bw"] Nov 21 11:49:26 crc kubenswrapper[4972]: I1121 11:49:26.393756 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gm6bw" event={"ID":"71ebdad2-0376-43e6-bbeb-795fe1193eea","Type":"ContainerStarted","Data":"2e010516eb15a2ab431ae085954681c6b1c827197d33d9d380ab7b924ea5585e"} Nov 21 11:49:27 crc kubenswrapper[4972]: I1121 11:49:27.409215 4972 generic.go:334] "Generic (PLEG): container finished" podID="71ebdad2-0376-43e6-bbeb-795fe1193eea" containerID="09e8a6994803b63c73b289ab7ff9fb6fa979d66e665af13c3bdc7c1e5926ad8b" exitCode=0 Nov 21 11:49:27 crc kubenswrapper[4972]: I1121 11:49:27.409313 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gm6bw" event={"ID":"71ebdad2-0376-43e6-bbeb-795fe1193eea","Type":"ContainerDied","Data":"09e8a6994803b63c73b289ab7ff9fb6fa979d66e665af13c3bdc7c1e5926ad8b"} Nov 21 11:49:30 crc kubenswrapper[4972]: I1121 11:49:30.447893 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gm6bw" event={"ID":"71ebdad2-0376-43e6-bbeb-795fe1193eea","Type":"ContainerStarted","Data":"5468cd2c411b24b2b6eb4440616dc6a7fddf2ef6170a69bff2067955fb64f729"} Nov 21 11:49:37 crc kubenswrapper[4972]: I1121 11:49:37.536989 4972 generic.go:334] "Generic (PLEG): container finished" podID="71ebdad2-0376-43e6-bbeb-795fe1193eea" containerID="5468cd2c411b24b2b6eb4440616dc6a7fddf2ef6170a69bff2067955fb64f729" exitCode=0 Nov 21 11:49:37 crc kubenswrapper[4972]: I1121 11:49:37.537450 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gm6bw" event={"ID":"71ebdad2-0376-43e6-bbeb-795fe1193eea","Type":"ContainerDied","Data":"5468cd2c411b24b2b6eb4440616dc6a7fddf2ef6170a69bff2067955fb64f729"} Nov 21 11:49:37 crc kubenswrapper[4972]: I1121 11:49:37.540238 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 11:49:38 crc kubenswrapper[4972]: I1121 11:49:38.563765 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gm6bw" event={"ID":"71ebdad2-0376-43e6-bbeb-795fe1193eea","Type":"ContainerStarted","Data":"df9ae36870707140dd2ffbe38277860aab0f1d910cda80ae2b671990c3a14de2"} Nov 21 11:49:38 crc kubenswrapper[4972]: I1121 11:49:38.591906 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gm6bw" podStartSLOduration=2.892582672 podStartE2EDuration="13.591883455s" podCreationTimestamp="2025-11-21 11:49:25 +0000 UTC" firstStartedPulling="2025-11-21 11:49:27.412522632 +0000 UTC m=+7712.521665130" lastFinishedPulling="2025-11-21 11:49:38.111823415 +0000 UTC m=+7723.220965913" observedRunningTime="2025-11-21 11:49:38.589445311 +0000 UTC m=+7723.698587859" watchObservedRunningTime="2025-11-21 11:49:38.591883455 +0000 UTC m=+7723.701025953" Nov 21 11:49:45 crc 
kubenswrapper[4972]: I1121 11:49:45.673704 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gm6bw" Nov 21 11:49:45 crc kubenswrapper[4972]: I1121 11:49:45.674246 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gm6bw" Nov 21 11:49:45 crc kubenswrapper[4972]: I1121 11:49:45.731189 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gm6bw" Nov 21 11:49:46 crc kubenswrapper[4972]: I1121 11:49:46.724951 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gm6bw" Nov 21 11:49:49 crc kubenswrapper[4972]: I1121 11:49:49.498104 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gm6bw"] Nov 21 11:49:49 crc kubenswrapper[4972]: I1121 11:49:49.498893 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gm6bw" podUID="71ebdad2-0376-43e6-bbeb-795fe1193eea" containerName="registry-server" containerID="cri-o://df9ae36870707140dd2ffbe38277860aab0f1d910cda80ae2b671990c3a14de2" gracePeriod=2 Nov 21 11:49:49 crc kubenswrapper[4972]: I1121 11:49:49.682333 4972 generic.go:334] "Generic (PLEG): container finished" podID="71ebdad2-0376-43e6-bbeb-795fe1193eea" containerID="df9ae36870707140dd2ffbe38277860aab0f1d910cda80ae2b671990c3a14de2" exitCode=0 Nov 21 11:49:49 crc kubenswrapper[4972]: I1121 11:49:49.682379 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gm6bw" event={"ID":"71ebdad2-0376-43e6-bbeb-795fe1193eea","Type":"ContainerDied","Data":"df9ae36870707140dd2ffbe38277860aab0f1d910cda80ae2b671990c3a14de2"} Nov 21 11:49:50 crc kubenswrapper[4972]: I1121 11:49:50.079580 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gm6bw" Nov 21 11:49:50 crc kubenswrapper[4972]: I1121 11:49:50.210145 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71ebdad2-0376-43e6-bbeb-795fe1193eea-utilities\") pod \"71ebdad2-0376-43e6-bbeb-795fe1193eea\" (UID: \"71ebdad2-0376-43e6-bbeb-795fe1193eea\") " Nov 21 11:49:50 crc kubenswrapper[4972]: I1121 11:49:50.210396 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4vj7\" (UniqueName: \"kubernetes.io/projected/71ebdad2-0376-43e6-bbeb-795fe1193eea-kube-api-access-t4vj7\") pod \"71ebdad2-0376-43e6-bbeb-795fe1193eea\" (UID: \"71ebdad2-0376-43e6-bbeb-795fe1193eea\") " Nov 21 11:49:50 crc kubenswrapper[4972]: I1121 11:49:50.210523 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71ebdad2-0376-43e6-bbeb-795fe1193eea-catalog-content\") pod \"71ebdad2-0376-43e6-bbeb-795fe1193eea\" (UID: \"71ebdad2-0376-43e6-bbeb-795fe1193eea\") " Nov 21 11:49:50 crc kubenswrapper[4972]: I1121 11:49:50.211362 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71ebdad2-0376-43e6-bbeb-795fe1193eea-utilities" (OuterVolumeSpecName: "utilities") pod "71ebdad2-0376-43e6-bbeb-795fe1193eea" (UID: "71ebdad2-0376-43e6-bbeb-795fe1193eea"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:49:50 crc kubenswrapper[4972]: I1121 11:49:50.217002 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ebdad2-0376-43e6-bbeb-795fe1193eea-kube-api-access-t4vj7" (OuterVolumeSpecName: "kube-api-access-t4vj7") pod "71ebdad2-0376-43e6-bbeb-795fe1193eea" (UID: "71ebdad2-0376-43e6-bbeb-795fe1193eea"). InnerVolumeSpecName "kube-api-access-t4vj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:49:50 crc kubenswrapper[4972]: I1121 11:49:50.305276 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71ebdad2-0376-43e6-bbeb-795fe1193eea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71ebdad2-0376-43e6-bbeb-795fe1193eea" (UID: "71ebdad2-0376-43e6-bbeb-795fe1193eea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:49:50 crc kubenswrapper[4972]: I1121 11:49:50.313524 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71ebdad2-0376-43e6-bbeb-795fe1193eea-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 11:49:50 crc kubenswrapper[4972]: I1121 11:49:50.313570 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4vj7\" (UniqueName: \"kubernetes.io/projected/71ebdad2-0376-43e6-bbeb-795fe1193eea-kube-api-access-t4vj7\") on node \"crc\" DevicePath \"\"" Nov 21 11:49:50 crc kubenswrapper[4972]: I1121 11:49:50.313583 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71ebdad2-0376-43e6-bbeb-795fe1193eea-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 11:49:50 crc kubenswrapper[4972]: I1121 11:49:50.694993 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gm6bw" event={"ID":"71ebdad2-0376-43e6-bbeb-795fe1193eea","Type":"ContainerDied","Data":"2e010516eb15a2ab431ae085954681c6b1c827197d33d9d380ab7b924ea5585e"} Nov 21 11:49:50 crc kubenswrapper[4972]: I1121 11:49:50.695040 4972 scope.go:117] "RemoveContainer" containerID="df9ae36870707140dd2ffbe38277860aab0f1d910cda80ae2b671990c3a14de2" Nov 21 11:49:50 crc kubenswrapper[4972]: I1121 11:49:50.696160 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gm6bw" Nov 21 11:49:50 crc kubenswrapper[4972]: I1121 11:49:50.720939 4972 scope.go:117] "RemoveContainer" containerID="5468cd2c411b24b2b6eb4440616dc6a7fddf2ef6170a69bff2067955fb64f729" Nov 21 11:49:50 crc kubenswrapper[4972]: I1121 11:49:50.729844 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gm6bw"] Nov 21 11:49:50 crc kubenswrapper[4972]: I1121 11:49:50.738039 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gm6bw"] Nov 21 11:49:50 crc kubenswrapper[4972]: I1121 11:49:50.752966 4972 scope.go:117] "RemoveContainer" containerID="09e8a6994803b63c73b289ab7ff9fb6fa979d66e665af13c3bdc7c1e5926ad8b" Nov 21 11:49:51 crc kubenswrapper[4972]: I1121 11:49:51.776235 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71ebdad2-0376-43e6-bbeb-795fe1193eea" path="/var/lib/kubelet/pods/71ebdad2-0376-43e6-bbeb-795fe1193eea/volumes" Nov 21 11:49:56 crc kubenswrapper[4972]: I1121 11:49:56.179573 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:49:56 crc kubenswrapper[4972]: I1121 11:49:56.180236 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:49:57 crc kubenswrapper[4972]: I1121 11:49:57.031397 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qmgx2"] Nov 21 11:49:57 crc kubenswrapper[4972]: E1121 11:49:57.031968 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ebdad2-0376-43e6-bbeb-795fe1193eea" containerName="extract-utilities" Nov 21 11:49:57 crc kubenswrapper[4972]: I1121 11:49:57.031987 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ebdad2-0376-43e6-bbeb-795fe1193eea" containerName="extract-utilities" Nov 21 11:49:57 crc kubenswrapper[4972]: E1121 11:49:57.032004 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ebdad2-0376-43e6-bbeb-795fe1193eea" containerName="registry-server" Nov 21 11:49:57 crc kubenswrapper[4972]: I1121 11:49:57.032013 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ebdad2-0376-43e6-bbeb-795fe1193eea" containerName="registry-server" Nov 21 11:49:57 crc kubenswrapper[4972]: E1121 11:49:57.032033 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ebdad2-0376-43e6-bbeb-795fe1193eea" containerName="extract-content" Nov 21 11:49:57 crc kubenswrapper[4972]: I1121 11:49:57.032040 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ebdad2-0376-43e6-bbeb-795fe1193eea" containerName="extract-content" Nov 21 11:49:57 crc kubenswrapper[4972]: I1121 11:49:57.032297 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ebdad2-0376-43e6-bbeb-795fe1193eea" containerName="registry-server" Nov 21 11:49:57 crc kubenswrapper[4972]: I1121 11:49:57.035451 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmgx2" Nov 21 11:49:57 crc kubenswrapper[4972]: I1121 11:49:57.042592 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmgx2"] Nov 21 11:49:57 crc kubenswrapper[4972]: I1121 11:49:57.163526 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmgtx\" (UniqueName: \"kubernetes.io/projected/b2ad91ba-47cc-4665-b125-024dcc322dbd-kube-api-access-mmgtx\") pod \"redhat-marketplace-qmgx2\" (UID: \"b2ad91ba-47cc-4665-b125-024dcc322dbd\") " pod="openshift-marketplace/redhat-marketplace-qmgx2" Nov 21 11:49:57 crc kubenswrapper[4972]: I1121 11:49:57.163728 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ad91ba-47cc-4665-b125-024dcc322dbd-catalog-content\") pod \"redhat-marketplace-qmgx2\" (UID: \"b2ad91ba-47cc-4665-b125-024dcc322dbd\") " pod="openshift-marketplace/redhat-marketplace-qmgx2" Nov 21 11:49:57 crc kubenswrapper[4972]: I1121 11:49:57.163751 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ad91ba-47cc-4665-b125-024dcc322dbd-utilities\") pod \"redhat-marketplace-qmgx2\" (UID: \"b2ad91ba-47cc-4665-b125-024dcc322dbd\") " pod="openshift-marketplace/redhat-marketplace-qmgx2" Nov 21 11:49:57 crc kubenswrapper[4972]: I1121 11:49:57.266918 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ad91ba-47cc-4665-b125-024dcc322dbd-catalog-content\") pod \"redhat-marketplace-qmgx2\" (UID: \"b2ad91ba-47cc-4665-b125-024dcc322dbd\") " pod="openshift-marketplace/redhat-marketplace-qmgx2" Nov 21 11:49:57 crc kubenswrapper[4972]: I1121 11:49:57.267332 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ad91ba-47cc-4665-b125-024dcc322dbd-utilities\") pod \"redhat-marketplace-qmgx2\" (UID: \"b2ad91ba-47cc-4665-b125-024dcc322dbd\") " pod="openshift-marketplace/redhat-marketplace-qmgx2" Nov 21 11:49:57 crc kubenswrapper[4972]: I1121 11:49:57.267428 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ad91ba-47cc-4665-b125-024dcc322dbd-catalog-content\") pod \"redhat-marketplace-qmgx2\" (UID: \"b2ad91ba-47cc-4665-b125-024dcc322dbd\") " pod="openshift-marketplace/redhat-marketplace-qmgx2" Nov 21 11:49:57 crc kubenswrapper[4972]: I1121 11:49:57.267474 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmgtx\" (UniqueName: \"kubernetes.io/projected/b2ad91ba-47cc-4665-b125-024dcc322dbd-kube-api-access-mmgtx\") pod \"redhat-marketplace-qmgx2\" (UID: \"b2ad91ba-47cc-4665-b125-024dcc322dbd\") " pod="openshift-marketplace/redhat-marketplace-qmgx2" Nov 21 11:49:57 crc kubenswrapper[4972]: I1121 11:49:57.268133 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ad91ba-47cc-4665-b125-024dcc322dbd-utilities\") pod \"redhat-marketplace-qmgx2\" (UID: \"b2ad91ba-47cc-4665-b125-024dcc322dbd\") " pod="openshift-marketplace/redhat-marketplace-qmgx2" Nov 21 11:49:57 crc kubenswrapper[4972]: I1121 11:49:57.290023 4972 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mmgtx\" (UniqueName: \"kubernetes.io/projected/b2ad91ba-47cc-4665-b125-024dcc322dbd-kube-api-access-mmgtx\") pod \"redhat-marketplace-qmgx2\" (UID: \"b2ad91ba-47cc-4665-b125-024dcc322dbd\") " pod="openshift-marketplace/redhat-marketplace-qmgx2" Nov 21 11:49:57 crc kubenswrapper[4972]: I1121 11:49:57.363079 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmgx2" Nov 21 11:49:57 crc kubenswrapper[4972]: I1121 11:49:57.896475 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmgx2"] Nov 21 11:49:58 crc kubenswrapper[4972]: I1121 11:49:58.781708 4972 generic.go:334] "Generic (PLEG): container finished" podID="b2ad91ba-47cc-4665-b125-024dcc322dbd" containerID="35b495590ea75c3c8e6699e33a867717d1eaceda772883b2d8c8246021b5fcd7" exitCode=0 Nov 21 11:49:58 crc kubenswrapper[4972]: I1121 11:49:58.781858 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmgx2" event={"ID":"b2ad91ba-47cc-4665-b125-024dcc322dbd","Type":"ContainerDied","Data":"35b495590ea75c3c8e6699e33a867717d1eaceda772883b2d8c8246021b5fcd7"} Nov 21 11:49:58 crc kubenswrapper[4972]: I1121 11:49:58.782183 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmgx2" event={"ID":"b2ad91ba-47cc-4665-b125-024dcc322dbd","Type":"ContainerStarted","Data":"108825190d0db31aa110bd05b6e7b4cddeae868ff5ce426e6cd9b49cc59a674a"} Nov 21 11:50:01 crc kubenswrapper[4972]: I1121 11:50:01.826445 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmgx2" event={"ID":"b2ad91ba-47cc-4665-b125-024dcc322dbd","Type":"ContainerStarted","Data":"efb655f701faf8808499ab3634e1e7514aa27592839bd094b77a825238e502af"} Nov 21 11:50:03 crc kubenswrapper[4972]: I1121 11:50:03.861422 4972 generic.go:334] "Generic (PLEG): container finished" podID="b2ad91ba-47cc-4665-b125-024dcc322dbd" containerID="efb655f701faf8808499ab3634e1e7514aa27592839bd094b77a825238e502af" exitCode=0 Nov 21 11:50:03 crc kubenswrapper[4972]: I1121 11:50:03.862364 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmgx2" event={"ID":"b2ad91ba-47cc-4665-b125-024dcc322dbd","Type":"ContainerDied","Data":"efb655f701faf8808499ab3634e1e7514aa27592839bd094b77a825238e502af"} Nov 21 11:50:05 crc kubenswrapper[4972]: I1121 11:50:05.883238 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmgx2" event={"ID":"b2ad91ba-47cc-4665-b125-024dcc322dbd","Type":"ContainerStarted","Data":"1c9f2fa54ce78d3a340a66b06a61edf78173a7fb45d68823cee179aba3db727f"} Nov 21 11:50:05 crc kubenswrapper[4972]: I1121 11:50:05.912687 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qmgx2" podStartSLOduration=2.664786213 podStartE2EDuration="8.91264033s" podCreationTimestamp="2025-11-21 11:49:57 +0000 UTC" firstStartedPulling="2025-11-21 11:49:58.784383385 +0000 UTC m=+7743.893525883" lastFinishedPulling="2025-11-21 11:50:05.032237482 +0000 UTC m=+7750.141380000" observedRunningTime="2025-11-21 11:50:05.90312995 +0000 UTC m=+7751.012272458" watchObservedRunningTime="2025-11-21 11:50:05.91264033 +0000 UTC m=+7751.021782848" Nov 21 11:50:07 crc kubenswrapper[4972]: I1121 11:50:07.363868 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-qmgx2" Nov 21 11:50:07 crc kubenswrapper[4972]: I1121 11:50:07.364190 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qmgx2" Nov 21 11:50:07 crc kubenswrapper[4972]: I1121 11:50:07.427310 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qmgx2" Nov 21 11:50:15 crc kubenswrapper[4972]: I1121 11:50:15.991923 4972 generic.go:334] "Generic (PLEG): container finished" podID="a889438a-ad36-4ef0-9994-da151114b722" containerID="8903d020e551c14b82c283e817f5753724795a79e3d88fdd0bf057fa200aa4e0" exitCode=0 Nov 21 11:50:15 crc kubenswrapper[4972]: I1121 11:50:15.991981 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-mbltd" event={"ID":"a889438a-ad36-4ef0-9994-da151114b722","Type":"ContainerDied","Data":"8903d020e551c14b82c283e817f5753724795a79e3d88fdd0bf057fa200aa4e0"} Nov 21 11:50:17 crc kubenswrapper[4972]: I1121 11:50:17.420738 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qmgx2" Nov 21 11:50:17 crc kubenswrapper[4972]: I1121 11:50:17.501032 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmgx2"] Nov 21 11:50:17 crc kubenswrapper[4972]: I1121 11:50:17.555173 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:50:17 crc kubenswrapper[4972]: I1121 11:50:17.602355 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-inventory\") pod \"a889438a-ad36-4ef0-9994-da151114b722\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " Nov 21 11:50:17 crc kubenswrapper[4972]: I1121 11:50:17.602933 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhsp2\" (UniqueName: \"kubernetes.io/projected/a889438a-ad36-4ef0-9994-da151114b722-kube-api-access-hhsp2\") pod \"a889438a-ad36-4ef0-9994-da151114b722\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " Nov 21 11:50:17 crc kubenswrapper[4972]: I1121 11:50:17.603060 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-ssh-key\") pod \"a889438a-ad36-4ef0-9994-da151114b722\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " Nov 21 11:50:17 crc kubenswrapper[4972]: I1121 11:50:17.603328 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-ovn-combined-ca-bundle\") pod \"a889438a-ad36-4ef0-9994-da151114b722\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " Nov 21 11:50:17 crc kubenswrapper[4972]: I1121 11:50:17.603356 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-ceph\") pod \"a889438a-ad36-4ef0-9994-da151114b722\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " Nov 21 11:50:17 crc kubenswrapper[4972]: I1121 11:50:17.603452 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/a889438a-ad36-4ef0-9994-da151114b722-ovncontroller-config-0\") pod \"a889438a-ad36-4ef0-9994-da151114b722\" (UID: \"a889438a-ad36-4ef0-9994-da151114b722\") " Nov 21 11:50:17 crc kubenswrapper[4972]: I1121 11:50:17.609296 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-ceph" (OuterVolumeSpecName: "ceph") pod "a889438a-ad36-4ef0-9994-da151114b722" (UID: "a889438a-ad36-4ef0-9994-da151114b722"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:50:17 crc kubenswrapper[4972]: I1121 11:50:17.609985 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a889438a-ad36-4ef0-9994-da151114b722-kube-api-access-hhsp2" (OuterVolumeSpecName: "kube-api-access-hhsp2") pod "a889438a-ad36-4ef0-9994-da151114b722" (UID: "a889438a-ad36-4ef0-9994-da151114b722"). InnerVolumeSpecName "kube-api-access-hhsp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:50:17 crc kubenswrapper[4972]: I1121 11:50:17.623346 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a889438a-ad36-4ef0-9994-da151114b722" (UID: "a889438a-ad36-4ef0-9994-da151114b722"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:50:17 crc kubenswrapper[4972]: I1121 11:50:17.634819 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a889438a-ad36-4ef0-9994-da151114b722-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "a889438a-ad36-4ef0-9994-da151114b722" (UID: "a889438a-ad36-4ef0-9994-da151114b722"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:50:17 crc kubenswrapper[4972]: I1121 11:50:17.638472 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-inventory" (OuterVolumeSpecName: "inventory") pod "a889438a-ad36-4ef0-9994-da151114b722" (UID: "a889438a-ad36-4ef0-9994-da151114b722"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:50:17 crc kubenswrapper[4972]: I1121 11:50:17.648884 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a889438a-ad36-4ef0-9994-da151114b722" (UID: "a889438a-ad36-4ef0-9994-da151114b722"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:50:17 crc kubenswrapper[4972]: I1121 11:50:17.708009 4972 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:50:17 crc kubenswrapper[4972]: I1121 11:50:17.708073 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 11:50:17 crc kubenswrapper[4972]: I1121 11:50:17.708113 4972 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a889438a-ad36-4ef0-9994-da151114b722-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 11:50:17 crc kubenswrapper[4972]: I1121 11:50:17.708149 4972 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 11:50:17 crc kubenswrapper[4972]: I1121 11:50:17.708164 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhsp2\" (UniqueName: \"kubernetes.io/projected/a889438a-ad36-4ef0-9994-da151114b722-kube-api-access-hhsp2\") on node \"crc\" DevicePath \"\"" Nov 21 11:50:17 crc kubenswrapper[4972]: I1121 11:50:17.708177 4972 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a889438a-ad36-4ef0-9994-da151114b722-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.013054 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-mbltd" event={"ID":"a889438a-ad36-4ef0-9994-da151114b722","Type":"ContainerDied","Data":"270a86972e1904ae784f6a14ab290a4a26f9c40dae16bf26fd9f0c3317911291"} Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.013102 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="270a86972e1904ae784f6a14ab290a4a26f9c40dae16bf26fd9f0c3317911291" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.013098 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-mbltd" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.013215 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qmgx2" podUID="b2ad91ba-47cc-4665-b125-024dcc322dbd" containerName="registry-server" containerID="cri-o://1c9f2fa54ce78d3a340a66b06a61edf78173a7fb45d68823cee179aba3db727f" gracePeriod=2 Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.101976 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-9mr8k"] Nov 21 11:50:18 crc kubenswrapper[4972]: E1121 11:50:18.102745 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a889438a-ad36-4ef0-9994-da151114b722" containerName="ovn-openstack-openstack-cell1" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.102762 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="a889438a-ad36-4ef0-9994-da151114b722" containerName="ovn-openstack-openstack-cell1" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.103027 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="a889438a-ad36-4ef0-9994-da151114b722" containerName="ovn-openstack-openstack-cell1" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.104010 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.106307 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.106323 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.106419 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g4l5l" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.106311 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.106506 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.106593 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.123429 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-9mr8k"] Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.217757 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-9mr8k\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.217822 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-9mr8k\" (UID: 
\"af381684-084b-4e6c-990d-db256b17820f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.217872 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-9mr8k\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.217931 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-9mr8k\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.218009 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-9mr8k\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.218046 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwdmq\" (UniqueName: \"kubernetes.io/projected/af381684-084b-4e6c-990d-db256b17820f-kube-api-access-qwdmq\") pod \"neutron-metadata-openstack-openstack-cell1-9mr8k\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.218097 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-9mr8k\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.320067 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-9mr8k\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.320205 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-9mr8k\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.320232 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-9mr8k\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.320248 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-9mr8k\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.320281 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-9mr8k\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.320321 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-9mr8k\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.320348 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwdmq\" (UniqueName: \"kubernetes.io/projected/af381684-084b-4e6c-990d-db256b17820f-kube-api-access-qwdmq\") pod \"neutron-metadata-openstack-openstack-cell1-9mr8k\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.324181 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-9mr8k\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.324627 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-9mr8k\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.324695 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-9mr8k\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.324895 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-ceph\") 
pod \"neutron-metadata-openstack-openstack-cell1-9mr8k\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.325190 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-9mr8k\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.325348 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-9mr8k\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.337122 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwdmq\" (UniqueName: \"kubernetes.io/projected/af381684-084b-4e6c-990d-db256b17820f-kube-api-access-qwdmq\") pod \"neutron-metadata-openstack-openstack-cell1-9mr8k\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:18 crc kubenswrapper[4972]: I1121 11:50:18.540992 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:50:19 crc kubenswrapper[4972]: I1121 11:50:19.027576 4972 generic.go:334] "Generic (PLEG): container finished" podID="b2ad91ba-47cc-4665-b125-024dcc322dbd" containerID="1c9f2fa54ce78d3a340a66b06a61edf78173a7fb45d68823cee179aba3db727f" exitCode=0 Nov 21 11:50:19 crc kubenswrapper[4972]: I1121 11:50:19.027624 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmgx2" event={"ID":"b2ad91ba-47cc-4665-b125-024dcc322dbd","Type":"ContainerDied","Data":"1c9f2fa54ce78d3a340a66b06a61edf78173a7fb45d68823cee179aba3db727f"} Nov 21 11:50:19 crc kubenswrapper[4972]: I1121 11:50:19.027999 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmgx2" event={"ID":"b2ad91ba-47cc-4665-b125-024dcc322dbd","Type":"ContainerDied","Data":"108825190d0db31aa110bd05b6e7b4cddeae868ff5ce426e6cd9b49cc59a674a"} Nov 21 11:50:19 crc kubenswrapper[4972]: I1121 11:50:19.028017 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="108825190d0db31aa110bd05b6e7b4cddeae868ff5ce426e6cd9b49cc59a674a" Nov 21 11:50:19 crc kubenswrapper[4972]: I1121 11:50:19.054485 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmgx2" Nov 21 11:50:19 crc kubenswrapper[4972]: I1121 11:50:19.065601 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-9mr8k"] Nov 21 11:50:19 crc kubenswrapper[4972]: I1121 11:50:19.143734 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmgtx\" (UniqueName: \"kubernetes.io/projected/b2ad91ba-47cc-4665-b125-024dcc322dbd-kube-api-access-mmgtx\") pod \"b2ad91ba-47cc-4665-b125-024dcc322dbd\" (UID: \"b2ad91ba-47cc-4665-b125-024dcc322dbd\") " Nov 21 11:50:19 crc kubenswrapper[4972]: I1121 11:50:19.143992 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ad91ba-47cc-4665-b125-024dcc322dbd-catalog-content\") pod \"b2ad91ba-47cc-4665-b125-024dcc322dbd\" (UID: \"b2ad91ba-47cc-4665-b125-024dcc322dbd\") " Nov 21 11:50:19 crc kubenswrapper[4972]: I1121 11:50:19.144327 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ad91ba-47cc-4665-b125-024dcc322dbd-utilities\") pod \"b2ad91ba-47cc-4665-b125-024dcc322dbd\" (UID: \"b2ad91ba-47cc-4665-b125-024dcc322dbd\") " Nov 21 11:50:19 crc kubenswrapper[4972]: I1121 11:50:19.145183 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2ad91ba-47cc-4665-b125-024dcc322dbd-utilities" (OuterVolumeSpecName: "utilities") pod "b2ad91ba-47cc-4665-b125-024dcc322dbd" (UID: "b2ad91ba-47cc-4665-b125-024dcc322dbd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:50:19 crc kubenswrapper[4972]: I1121 11:50:19.146030 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ad91ba-47cc-4665-b125-024dcc322dbd-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 11:50:19 crc kubenswrapper[4972]: I1121 11:50:19.150926 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ad91ba-47cc-4665-b125-024dcc322dbd-kube-api-access-mmgtx" (OuterVolumeSpecName: "kube-api-access-mmgtx") pod "b2ad91ba-47cc-4665-b125-024dcc322dbd" (UID: "b2ad91ba-47cc-4665-b125-024dcc322dbd"). InnerVolumeSpecName "kube-api-access-mmgtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:50:19 crc kubenswrapper[4972]: I1121 11:50:19.161089 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2ad91ba-47cc-4665-b125-024dcc322dbd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2ad91ba-47cc-4665-b125-024dcc322dbd" (UID: "b2ad91ba-47cc-4665-b125-024dcc322dbd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:50:19 crc kubenswrapper[4972]: I1121 11:50:19.248501 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmgtx\" (UniqueName: \"kubernetes.io/projected/b2ad91ba-47cc-4665-b125-024dcc322dbd-kube-api-access-mmgtx\") on node \"crc\" DevicePath \"\"" Nov 21 11:50:19 crc kubenswrapper[4972]: I1121 11:50:19.248548 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ad91ba-47cc-4665-b125-024dcc322dbd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 11:50:20 crc kubenswrapper[4972]: I1121 11:50:20.048380 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" event={"ID":"af381684-084b-4e6c-990d-db256b17820f","Type":"ContainerStarted","Data":"fd9a7abb8dab9ac6b73facd913df6b275a0818b65fcf83c759ad92dfba39121b"} Nov 21 11:50:20 crc kubenswrapper[4972]: I1121 11:50:20.048439 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmgx2" Nov 21 11:50:20 crc kubenswrapper[4972]: I1121 11:50:20.075815 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmgx2"] Nov 21 11:50:20 crc kubenswrapper[4972]: I1121 11:50:20.085874 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmgx2"] Nov 21 11:50:21 crc kubenswrapper[4972]: I1121 11:50:21.771699 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2ad91ba-47cc-4665-b125-024dcc322dbd" path="/var/lib/kubelet/pods/b2ad91ba-47cc-4665-b125-024dcc322dbd/volumes" Nov 21 11:50:22 crc kubenswrapper[4972]: I1121 11:50:22.082744 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" event={"ID":"af381684-084b-4e6c-990d-db256b17820f","Type":"ContainerStarted","Data":"0c11d09b371c7ab746325eaf2de1148b2cc6a33834712c754c68fb1808a879ef"} Nov 21 11:50:22 crc kubenswrapper[4972]: I1121 11:50:22.116924 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" podStartSLOduration=2.358475048 podStartE2EDuration="4.116906242s" podCreationTimestamp="2025-11-21 11:50:18 +0000 UTC" firstStartedPulling="2025-11-21 11:50:19.081246324 +0000 UTC m=+7764.190388822" lastFinishedPulling="2025-11-21 11:50:20.839677478 +0000 UTC m=+7765.948820016" observedRunningTime="2025-11-21 11:50:22.113456361 +0000 UTC m=+7767.222598859" watchObservedRunningTime="2025-11-21 11:50:22.116906242 +0000 UTC m=+7767.226048750" Nov 21 11:50:26 crc kubenswrapper[4972]: I1121 11:50:26.179104 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:50:26 crc kubenswrapper[4972]: I1121 11:50:26.179708 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:50:56 crc kubenswrapper[4972]: I1121 11:50:56.178681 4972 patch_prober.go:28] 
interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:50:56 crc kubenswrapper[4972]: I1121 11:50:56.179352 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:50:56 crc kubenswrapper[4972]: I1121 11:50:56.179407 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 11:50:56 crc kubenswrapper[4972]: I1121 11:50:56.180284 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 11:50:56 crc kubenswrapper[4972]: I1121 11:50:56.180353 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" gracePeriod=600 Nov 21 11:50:56 crc kubenswrapper[4972]: E1121 11:50:56.314305 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:50:56 crc kubenswrapper[4972]: I1121 11:50:56.472173 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" exitCode=0 Nov 21 11:50:56 crc kubenswrapper[4972]: I1121 11:50:56.472223 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0"} Nov 21 11:50:56 crc kubenswrapper[4972]: I1121 11:50:56.472260 4972 scope.go:117] "RemoveContainer" containerID="b79c2b857a1fdd2c8360d6a73525adb3af3295fc033655a7e49f7c9eebd3b913" Nov 21 11:50:56 crc kubenswrapper[4972]: I1121 11:50:56.473227 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:50:56 crc kubenswrapper[4972]: E1121 11:50:56.475961 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:51:09 crc kubenswrapper[4972]: I1121 11:51:09.760166 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:51:09 crc kubenswrapper[4972]: E1121 11:51:09.760964 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:51:16 crc kubenswrapper[4972]: I1121 11:51:16.704226 4972 generic.go:334] "Generic (PLEG): container finished" podID="af381684-084b-4e6c-990d-db256b17820f" containerID="0c11d09b371c7ab746325eaf2de1148b2cc6a33834712c754c68fb1808a879ef" exitCode=0 Nov 21 11:51:16 crc kubenswrapper[4972]: I1121 11:51:16.704258 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" event={"ID":"af381684-084b-4e6c-990d-db256b17820f","Type":"ContainerDied","Data":"0c11d09b371c7ab746325eaf2de1148b2cc6a33834712c754c68fb1808a879ef"} Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.192619 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.390616 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-ceph\") pod \"af381684-084b-4e6c-990d-db256b17820f\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.391037 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-neutron-metadata-combined-ca-bundle\") pod \"af381684-084b-4e6c-990d-db256b17820f\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.391221 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwdmq\" (UniqueName: \"kubernetes.io/projected/af381684-084b-4e6c-990d-db256b17820f-kube-api-access-qwdmq\") pod \"af381684-084b-4e6c-990d-db256b17820f\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.391400 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-ssh-key\") pod \"af381684-084b-4e6c-990d-db256b17820f\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.392058 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"af381684-084b-4e6c-990d-db256b17820f\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.392136 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-inventory\") pod \"af381684-084b-4e6c-990d-db256b17820f\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.392173 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-nova-metadata-neutron-config-0\") pod \"af381684-084b-4e6c-990d-db256b17820f\" (UID: \"af381684-084b-4e6c-990d-db256b17820f\") " Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.406415 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af381684-084b-4e6c-990d-db256b17820f-kube-api-access-qwdmq" (OuterVolumeSpecName: "kube-api-access-qwdmq") pod "af381684-084b-4e6c-990d-db256b17820f" (UID: "af381684-084b-4e6c-990d-db256b17820f"). InnerVolumeSpecName "kube-api-access-qwdmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.406916 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "af381684-084b-4e6c-990d-db256b17820f" (UID: "af381684-084b-4e6c-990d-db256b17820f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.407167 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-ceph" (OuterVolumeSpecName: "ceph") pod "af381684-084b-4e6c-990d-db256b17820f" (UID: "af381684-084b-4e6c-990d-db256b17820f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.421403 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "af381684-084b-4e6c-990d-db256b17820f" (UID: "af381684-084b-4e6c-990d-db256b17820f"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.422316 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "af381684-084b-4e6c-990d-db256b17820f" (UID: "af381684-084b-4e6c-990d-db256b17820f"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.436947 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "af381684-084b-4e6c-990d-db256b17820f" (UID: "af381684-084b-4e6c-990d-db256b17820f"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.437376 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-inventory" (OuterVolumeSpecName: "inventory") pod "af381684-084b-4e6c-990d-db256b17820f" (UID: "af381684-084b-4e6c-990d-db256b17820f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.494764 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwdmq\" (UniqueName: \"kubernetes.io/projected/af381684-084b-4e6c-990d-db256b17820f-kube-api-access-qwdmq\") on node \"crc\" DevicePath \"\"" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.494808 4972 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.494822 4972 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.494854 4972 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.494868 4972 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.494880 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.494892 4972 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af381684-084b-4e6c-990d-db256b17820f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.727155 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" event={"ID":"af381684-084b-4e6c-990d-db256b17820f","Type":"ContainerDied","Data":"fd9a7abb8dab9ac6b73facd913df6b275a0818b65fcf83c759ad92dfba39121b"} Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.727545 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd9a7abb8dab9ac6b73facd913df6b275a0818b65fcf83c759ad92dfba39121b" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.727264 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-9mr8k" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.817755 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-dmptj"] Nov 21 11:51:18 crc kubenswrapper[4972]: E1121 11:51:18.818373 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af381684-084b-4e6c-990d-db256b17820f" containerName="neutron-metadata-openstack-openstack-cell1" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.818399 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="af381684-084b-4e6c-990d-db256b17820f" containerName="neutron-metadata-openstack-openstack-cell1" Nov 21 11:51:18 crc kubenswrapper[4972]: E1121 11:51:18.818443 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ad91ba-47cc-4665-b125-024dcc322dbd" containerName="extract-utilities" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.818453 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ad91ba-47cc-4665-b125-024dcc322dbd" containerName="extract-utilities" Nov 21 11:51:18 crc kubenswrapper[4972]: E1121 11:51:18.818473 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ad91ba-47cc-4665-b125-024dcc322dbd" containerName="extract-content" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.818482 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ad91ba-47cc-4665-b125-024dcc322dbd" containerName="extract-content" Nov 21 11:51:18 crc kubenswrapper[4972]: E1121 11:51:18.818498 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ad91ba-47cc-4665-b125-024dcc322dbd" containerName="registry-server" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.818506 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ad91ba-47cc-4665-b125-024dcc322dbd" containerName="registry-server" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.818796 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ad91ba-47cc-4665-b125-024dcc322dbd" containerName="registry-server" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.818820 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="af381684-084b-4e6c-990d-db256b17820f" containerName="neutron-metadata-openstack-openstack-cell1" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.819869 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.822668 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.824616 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.824611 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.825208 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g4l5l" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.825389 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 21 11:51:18 crc kubenswrapper[4972]: I1121 11:51:18.829356 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-dmptj"] Nov 21 11:51:19 crc kubenswrapper[4972]: I1121 11:51:19.005914 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-ceph\") pod \"libvirt-openstack-openstack-cell1-dmptj\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:51:19 crc kubenswrapper[4972]: I1121 11:51:19.006169 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-inventory\") pod \"libvirt-openstack-openstack-cell1-dmptj\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:51:19 crc kubenswrapper[4972]: I1121 11:51:19.006318 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-dmptj\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:51:19 crc kubenswrapper[4972]: I1121 11:51:19.006381 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmt89\" (UniqueName: \"kubernetes.io/projected/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-kube-api-access-vmt89\") pod \"libvirt-openstack-openstack-cell1-dmptj\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:51:19 crc kubenswrapper[4972]: I1121 11:51:19.006419 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-dmptj\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:51:19 crc kubenswrapper[4972]: I1121 11:51:19.006492 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-ssh-key\") pod 
\"libvirt-openstack-openstack-cell1-dmptj\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:51:19 crc kubenswrapper[4972]: I1121 11:51:19.109166 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-inventory\") pod \"libvirt-openstack-openstack-cell1-dmptj\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:51:19 crc kubenswrapper[4972]: I1121 11:51:19.109401 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-dmptj\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:51:19 crc kubenswrapper[4972]: I1121 11:51:19.109717 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmt89\" (UniqueName: \"kubernetes.io/projected/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-kube-api-access-vmt89\") pod \"libvirt-openstack-openstack-cell1-dmptj\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:51:19 crc kubenswrapper[4972]: I1121 11:51:19.110319 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-dmptj\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:51:19 crc kubenswrapper[4972]: I1121 11:51:19.110504 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-ssh-key\") pod \"libvirt-openstack-openstack-cell1-dmptj\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:51:19 crc kubenswrapper[4972]: I1121 11:51:19.110664 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-ceph\") pod \"libvirt-openstack-openstack-cell1-dmptj\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:51:19 crc kubenswrapper[4972]: I1121 11:51:19.114582 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-dmptj\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:51:19 crc kubenswrapper[4972]: I1121 11:51:19.114809 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-ssh-key\") pod \"libvirt-openstack-openstack-cell1-dmptj\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:51:19 crc kubenswrapper[4972]: I1121 11:51:19.115307 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-dmptj\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:51:19 crc kubenswrapper[4972]: I1121 11:51:19.115765 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-inventory\") pod \"libvirt-openstack-openstack-cell1-dmptj\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:51:19 crc kubenswrapper[4972]: I1121 11:51:19.118654 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-ceph\") pod \"libvirt-openstack-openstack-cell1-dmptj\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:51:19 crc kubenswrapper[4972]: I1121 11:51:19.132603 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmt89\" (UniqueName: \"kubernetes.io/projected/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-kube-api-access-vmt89\") pod \"libvirt-openstack-openstack-cell1-dmptj\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:51:19 crc kubenswrapper[4972]: I1121 11:51:19.138593 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:51:19 crc kubenswrapper[4972]: I1121 11:51:19.725590 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-dmptj"] Nov 21 11:51:19 crc kubenswrapper[4972]: I1121 11:51:19.741384 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-dmptj" event={"ID":"1f21798f-b3ad-4a9f-abba-62a3da5ce59a","Type":"ContainerStarted","Data":"8e0ad7bba7f789616800a790b606279dc5376a607579e24ea407446183e4a7ad"} Nov 21 11:51:21 crc kubenswrapper[4972]: I1121 11:51:21.780529 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-dmptj" event={"ID":"1f21798f-b3ad-4a9f-abba-62a3da5ce59a","Type":"ContainerStarted","Data":"1c6b35d02ee3427bc4ea15457ee294109a27bb95c0158a239331188b2190dbff"} Nov 21 11:51:21 crc kubenswrapper[4972]: I1121 11:51:21.810858 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-dmptj" podStartSLOduration=2.790102911 podStartE2EDuration="3.810800812s" podCreationTimestamp="2025-11-21 11:51:18 +0000 UTC" firstStartedPulling="2025-11-21 11:51:19.728898888 +0000 UTC m=+7824.838041386" lastFinishedPulling="2025-11-21 11:51:20.749596779 +0000 UTC m=+7825.858739287" observedRunningTime="2025-11-21 11:51:21.80002783 +0000 UTC m=+7826.909170398" watchObservedRunningTime="2025-11-21 11:51:21.810800812 +0000 UTC m=+7826.919943340" Nov 21 11:51:24 crc kubenswrapper[4972]: I1121 11:51:24.759714 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:51:24 crc kubenswrapper[4972]: E1121 11:51:24.760473 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:51:35 crc kubenswrapper[4972]: I1121 11:51:35.787481 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:51:35 crc kubenswrapper[4972]: E1121 11:51:35.789316 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:51:49 crc kubenswrapper[4972]: I1121 11:51:49.760170 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:51:49 crc kubenswrapper[4972]: E1121 11:51:49.761334 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:52:04 crc kubenswrapper[4972]: I1121 11:52:04.760094 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:52:04 crc kubenswrapper[4972]: E1121 11:52:04.761059 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:52:16 crc kubenswrapper[4972]: I1121 11:52:16.760768 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:52:16 crc kubenswrapper[4972]: E1121 11:52:16.762569 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:52:27 crc kubenswrapper[4972]: I1121 11:52:27.760256 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:52:27 crc kubenswrapper[4972]: E1121 11:52:27.761337 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:52:40 crc kubenswrapper[4972]: I1121 11:52:40.760107 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:52:40 crc kubenswrapper[4972]: E1121 11:52:40.760905 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:52:42 crc kubenswrapper[4972]: I1121 11:52:42.002687 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r9wln"] Nov 21 11:52:42 crc kubenswrapper[4972]: I1121 11:52:42.005628 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9wln" Nov 21 11:52:42 crc kubenswrapper[4972]: I1121 11:52:42.031354 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r9wln"] Nov 21 11:52:42 crc kubenswrapper[4972]: I1121 11:52:42.070008 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51742387-31c1-4886-9d16-56795b595d75-utilities\") pod \"certified-operators-r9wln\" (UID: \"51742387-31c1-4886-9d16-56795b595d75\") " pod="openshift-marketplace/certified-operators-r9wln" Nov 21 11:52:42 crc kubenswrapper[4972]: I1121 11:52:42.070053 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51742387-31c1-4886-9d16-56795b595d75-catalog-content\") pod \"certified-operators-r9wln\" (UID: \"51742387-31c1-4886-9d16-56795b595d75\") " pod="openshift-marketplace/certified-operators-r9wln" Nov 21 11:52:42 crc kubenswrapper[4972]: I1121 11:52:42.070116 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th2v9\" (UniqueName: \"kubernetes.io/projected/51742387-31c1-4886-9d16-56795b595d75-kube-api-access-th2v9\") pod \"certified-operators-r9wln\" (UID: \"51742387-31c1-4886-9d16-56795b595d75\") " pod="openshift-marketplace/certified-operators-r9wln" Nov 21 11:52:42 crc kubenswrapper[4972]: I1121 11:52:42.171735 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51742387-31c1-4886-9d16-56795b595d75-utilities\") pod \"certified-operators-r9wln\" (UID: \"51742387-31c1-4886-9d16-56795b595d75\") " pod="openshift-marketplace/certified-operators-r9wln" Nov 21 11:52:42 crc kubenswrapper[4972]: I1121 11:52:42.171780 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51742387-31c1-4886-9d16-56795b595d75-catalog-content\") pod \"certified-operators-r9wln\" (UID: \"51742387-31c1-4886-9d16-56795b595d75\") " pod="openshift-marketplace/certified-operators-r9wln" Nov 21 11:52:42 crc kubenswrapper[4972]: I1121 11:52:42.171883 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th2v9\" (UniqueName: 
\"kubernetes.io/projected/51742387-31c1-4886-9d16-56795b595d75-kube-api-access-th2v9\") pod \"certified-operators-r9wln\" (UID: \"51742387-31c1-4886-9d16-56795b595d75\") " pod="openshift-marketplace/certified-operators-r9wln" Nov 21 11:52:42 crc kubenswrapper[4972]: I1121 11:52:42.172249 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51742387-31c1-4886-9d16-56795b595d75-utilities\") pod \"certified-operators-r9wln\" (UID: \"51742387-31c1-4886-9d16-56795b595d75\") " pod="openshift-marketplace/certified-operators-r9wln" Nov 21 11:52:42 crc kubenswrapper[4972]: I1121 11:52:42.172281 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51742387-31c1-4886-9d16-56795b595d75-catalog-content\") pod \"certified-operators-r9wln\" (UID: \"51742387-31c1-4886-9d16-56795b595d75\") " pod="openshift-marketplace/certified-operators-r9wln" Nov 21 11:52:42 crc kubenswrapper[4972]: I1121 11:52:42.195659 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th2v9\" (UniqueName: \"kubernetes.io/projected/51742387-31c1-4886-9d16-56795b595d75-kube-api-access-th2v9\") pod \"certified-operators-r9wln\" (UID: \"51742387-31c1-4886-9d16-56795b595d75\") " pod="openshift-marketplace/certified-operators-r9wln" Nov 21 11:52:42 crc kubenswrapper[4972]: I1121 11:52:42.329598 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9wln" Nov 21 11:52:42 crc kubenswrapper[4972]: I1121 11:52:42.801851 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r9wln"] Nov 21 11:52:43 crc kubenswrapper[4972]: I1121 11:52:43.229732 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9wln" event={"ID":"51742387-31c1-4886-9d16-56795b595d75","Type":"ContainerStarted","Data":"72c30f7950b3bcac993a065500e65d109b6d159e23ad5a4dfa8238e926384e15"} Nov 21 11:52:43 crc kubenswrapper[4972]: I1121 11:52:43.230163 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9wln" event={"ID":"51742387-31c1-4886-9d16-56795b595d75","Type":"ContainerStarted","Data":"fe31c852cc0bcdebc7c191a9a16de4464676f86f104db3363b6da06811c56c4b"} Nov 21 11:52:44 crc kubenswrapper[4972]: I1121 11:52:44.243259 4972 generic.go:334] "Generic (PLEG): container finished" podID="51742387-31c1-4886-9d16-56795b595d75" containerID="72c30f7950b3bcac993a065500e65d109b6d159e23ad5a4dfa8238e926384e15" exitCode=0 Nov 21 11:52:44 crc kubenswrapper[4972]: I1121 11:52:44.243300 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9wln" event={"ID":"51742387-31c1-4886-9d16-56795b595d75","Type":"ContainerDied","Data":"72c30f7950b3bcac993a065500e65d109b6d159e23ad5a4dfa8238e926384e15"} Nov 21 11:52:47 crc kubenswrapper[4972]: I1121 11:52:47.279391 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9wln" event={"ID":"51742387-31c1-4886-9d16-56795b595d75","Type":"ContainerStarted","Data":"f5eef783fa67c7e0ee92fd6ec96451812d6b3e03209ad67b1cea0c4f07a5a41b"} Nov 21 11:52:51 crc kubenswrapper[4972]: I1121 11:52:51.322331 4972 generic.go:334] "Generic (PLEG): container finished" podID="51742387-31c1-4886-9d16-56795b595d75" containerID="f5eef783fa67c7e0ee92fd6ec96451812d6b3e03209ad67b1cea0c4f07a5a41b" 
exitCode=0 Nov 21 11:52:51 crc kubenswrapper[4972]: I1121 11:52:51.322409 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9wln" event={"ID":"51742387-31c1-4886-9d16-56795b595d75","Type":"ContainerDied","Data":"f5eef783fa67c7e0ee92fd6ec96451812d6b3e03209ad67b1cea0c4f07a5a41b"} Nov 21 11:52:53 crc kubenswrapper[4972]: I1121 11:52:53.759470 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:52:53 crc kubenswrapper[4972]: E1121 11:52:53.760174 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:52:58 crc kubenswrapper[4972]: I1121 11:52:58.409983 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9wln" event={"ID":"51742387-31c1-4886-9d16-56795b595d75","Type":"ContainerStarted","Data":"8744e4ad35fb01c4bea9ba2f465c4d883990fb55546647401326abaeef29dd9a"} Nov 21 11:52:58 crc kubenswrapper[4972]: I1121 11:52:58.435664 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r9wln" podStartSLOduration=4.120375635 podStartE2EDuration="17.435639019s" podCreationTimestamp="2025-11-21 11:52:41 +0000 UTC" firstStartedPulling="2025-11-21 11:52:44.246397256 +0000 UTC m=+7909.355539784" lastFinishedPulling="2025-11-21 11:52:57.56166063 +0000 UTC m=+7922.670803168" observedRunningTime="2025-11-21 11:52:58.433311948 +0000 UTC m=+7923.542454486" watchObservedRunningTime="2025-11-21 11:52:58.435639019 +0000 UTC m=+7923.544781537" Nov 21 11:53:02 crc kubenswrapper[4972]: I1121 11:53:02.329819 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r9wln" Nov 21 11:53:02 crc kubenswrapper[4972]: I1121 11:53:02.330541 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r9wln" Nov 21 11:53:02 crc kubenswrapper[4972]: I1121 11:53:02.385467 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r9wln" Nov 21 11:53:02 crc kubenswrapper[4972]: I1121 11:53:02.498589 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r9wln" Nov 21 11:53:02 crc kubenswrapper[4972]: I1121 11:53:02.640955 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r9wln"] Nov 21 11:53:04 crc kubenswrapper[4972]: I1121 11:53:04.477523 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r9wln" podUID="51742387-31c1-4886-9d16-56795b595d75" containerName="registry-server" containerID="cri-o://8744e4ad35fb01c4bea9ba2f465c4d883990fb55546647401326abaeef29dd9a" gracePeriod=2 Nov 21 11:53:04 crc kubenswrapper[4972]: I1121 11:53:04.760780 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:53:04 crc kubenswrapper[4972]: E1121 11:53:04.761586 4972 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.208170 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9wln" Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.322499 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51742387-31c1-4886-9d16-56795b595d75-catalog-content\") pod \"51742387-31c1-4886-9d16-56795b595d75\" (UID: \"51742387-31c1-4886-9d16-56795b595d75\") " Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.322816 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th2v9\" (UniqueName: \"kubernetes.io/projected/51742387-31c1-4886-9d16-56795b595d75-kube-api-access-th2v9\") pod \"51742387-31c1-4886-9d16-56795b595d75\" (UID: \"51742387-31c1-4886-9d16-56795b595d75\") " Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.323051 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51742387-31c1-4886-9d16-56795b595d75-utilities\") pod \"51742387-31c1-4886-9d16-56795b595d75\" (UID: \"51742387-31c1-4886-9d16-56795b595d75\") " Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.323991 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51742387-31c1-4886-9d16-56795b595d75-utilities" (OuterVolumeSpecName: "utilities") pod "51742387-31c1-4886-9d16-56795b595d75" (UID: "51742387-31c1-4886-9d16-56795b595d75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.337085 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51742387-31c1-4886-9d16-56795b595d75-kube-api-access-th2v9" (OuterVolumeSpecName: "kube-api-access-th2v9") pod "51742387-31c1-4886-9d16-56795b595d75" (UID: "51742387-31c1-4886-9d16-56795b595d75"). InnerVolumeSpecName "kube-api-access-th2v9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.387482 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51742387-31c1-4886-9d16-56795b595d75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51742387-31c1-4886-9d16-56795b595d75" (UID: "51742387-31c1-4886-9d16-56795b595d75"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.426603 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51742387-31c1-4886-9d16-56795b595d75-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.426651 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51742387-31c1-4886-9d16-56795b595d75-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.426665 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th2v9\" (UniqueName: \"kubernetes.io/projected/51742387-31c1-4886-9d16-56795b595d75-kube-api-access-th2v9\") on node \"crc\" DevicePath \"\"" Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.490098 4972 generic.go:334] "Generic (PLEG): container finished" podID="51742387-31c1-4886-9d16-56795b595d75" containerID="8744e4ad35fb01c4bea9ba2f465c4d883990fb55546647401326abaeef29dd9a" exitCode=0 Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.490177 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9wln" event={"ID":"51742387-31c1-4886-9d16-56795b595d75","Type":"ContainerDied","Data":"8744e4ad35fb01c4bea9ba2f465c4d883990fb55546647401326abaeef29dd9a"} Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.490928 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9wln" event={"ID":"51742387-31c1-4886-9d16-56795b595d75","Type":"ContainerDied","Data":"fe31c852cc0bcdebc7c191a9a16de4464676f86f104db3363b6da06811c56c4b"} Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.491003 4972 scope.go:117] "RemoveContainer" containerID="8744e4ad35fb01c4bea9ba2f465c4d883990fb55546647401326abaeef29dd9a" Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.490263 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r9wln" Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.513256 4972 scope.go:117] "RemoveContainer" containerID="f5eef783fa67c7e0ee92fd6ec96451812d6b3e03209ad67b1cea0c4f07a5a41b" Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.539194 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r9wln"] Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.542363 4972 scope.go:117] "RemoveContainer" containerID="72c30f7950b3bcac993a065500e65d109b6d159e23ad5a4dfa8238e926384e15" Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.552651 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r9wln"] Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.607870 4972 scope.go:117] "RemoveContainer" containerID="8744e4ad35fb01c4bea9ba2f465c4d883990fb55546647401326abaeef29dd9a" Nov 21 11:53:05 crc kubenswrapper[4972]: E1121 11:53:05.608443 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8744e4ad35fb01c4bea9ba2f465c4d883990fb55546647401326abaeef29dd9a\": container with ID starting with 8744e4ad35fb01c4bea9ba2f465c4d883990fb55546647401326abaeef29dd9a not found: ID does not exist" containerID="8744e4ad35fb01c4bea9ba2f465c4d883990fb55546647401326abaeef29dd9a" Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.608503 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8744e4ad35fb01c4bea9ba2f465c4d883990fb55546647401326abaeef29dd9a"} err="failed to get container status \"8744e4ad35fb01c4bea9ba2f465c4d883990fb55546647401326abaeef29dd9a\": rpc error: code = NotFound desc = could not find container \"8744e4ad35fb01c4bea9ba2f465c4d883990fb55546647401326abaeef29dd9a\": container with ID starting with 8744e4ad35fb01c4bea9ba2f465c4d883990fb55546647401326abaeef29dd9a not found: ID does not exist" Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.608542 4972 scope.go:117] "RemoveContainer" containerID="f5eef783fa67c7e0ee92fd6ec96451812d6b3e03209ad67b1cea0c4f07a5a41b" Nov 21 11:53:05 crc kubenswrapper[4972]: E1121 11:53:05.609112 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5eef783fa67c7e0ee92fd6ec96451812d6b3e03209ad67b1cea0c4f07a5a41b\": container with ID starting with f5eef783fa67c7e0ee92fd6ec96451812d6b3e03209ad67b1cea0c4f07a5a41b not found: ID does not exist" containerID="f5eef783fa67c7e0ee92fd6ec96451812d6b3e03209ad67b1cea0c4f07a5a41b" Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.609155 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5eef783fa67c7e0ee92fd6ec96451812d6b3e03209ad67b1cea0c4f07a5a41b"} err="failed to get container status \"f5eef783fa67c7e0ee92fd6ec96451812d6b3e03209ad67b1cea0c4f07a5a41b\": rpc error: code = NotFound desc = could not find container \"f5eef783fa67c7e0ee92fd6ec96451812d6b3e03209ad67b1cea0c4f07a5a41b\": container with ID starting with f5eef783fa67c7e0ee92fd6ec96451812d6b3e03209ad67b1cea0c4f07a5a41b not found: ID does not exist" Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.609181 4972 scope.go:117] "RemoveContainer" containerID="72c30f7950b3bcac993a065500e65d109b6d159e23ad5a4dfa8238e926384e15" Nov 21 11:53:05 crc kubenswrapper[4972]: E1121 11:53:05.609652 4972 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"72c30f7950b3bcac993a065500e65d109b6d159e23ad5a4dfa8238e926384e15\": container with ID starting with 72c30f7950b3bcac993a065500e65d109b6d159e23ad5a4dfa8238e926384e15 not found: ID does not exist" containerID="72c30f7950b3bcac993a065500e65d109b6d159e23ad5a4dfa8238e926384e15" Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.609688 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72c30f7950b3bcac993a065500e65d109b6d159e23ad5a4dfa8238e926384e15"} err="failed to get container status \"72c30f7950b3bcac993a065500e65d109b6d159e23ad5a4dfa8238e926384e15\": rpc error: code = NotFound desc = could not find container \"72c30f7950b3bcac993a065500e65d109b6d159e23ad5a4dfa8238e926384e15\": container with ID starting with 72c30f7950b3bcac993a065500e65d109b6d159e23ad5a4dfa8238e926384e15 not found: ID does not exist" Nov 21 11:53:05 crc kubenswrapper[4972]: I1121 11:53:05.771769 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51742387-31c1-4886-9d16-56795b595d75" path="/var/lib/kubelet/pods/51742387-31c1-4886-9d16-56795b595d75/volumes" Nov 21 11:53:19 crc kubenswrapper[4972]: I1121 11:53:19.760555 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:53:19 crc kubenswrapper[4972]: E1121 11:53:19.762336 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:53:31 crc kubenswrapper[4972]: I1121 11:53:31.759901 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:53:31 crc kubenswrapper[4972]: E1121 11:53:31.760712 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:53:44 crc kubenswrapper[4972]: I1121 11:53:44.760023 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:53:44 crc kubenswrapper[4972]: E1121 11:53:44.760872 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:53:58 crc kubenswrapper[4972]: I1121 11:53:58.759349 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:53:58 crc kubenswrapper[4972]: E1121 11:53:58.760217 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:54:11 crc kubenswrapper[4972]: I1121 11:54:11.759566 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:54:11 crc kubenswrapper[4972]: E1121 11:54:11.760608 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:54:26 crc kubenswrapper[4972]: I1121 11:54:26.759553 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:54:26 crc kubenswrapper[4972]: E1121 11:54:26.760641 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:54:41 crc kubenswrapper[4972]: I1121 11:54:41.759082 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:54:41 crc kubenswrapper[4972]: E1121 11:54:41.759782 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:54:52 crc kubenswrapper[4972]: I1121 11:54:52.760867 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:54:52 crc kubenswrapper[4972]: E1121 11:54:52.761852 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:55:04 crc kubenswrapper[4972]: I1121 11:55:04.759701 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:55:04 crc kubenswrapper[4972]: E1121 11:55:04.760545 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:55:17 crc kubenswrapper[4972]: I1121 11:55:17.760778 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:55:17 crc kubenswrapper[4972]: E1121 11:55:17.761638 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:55:28 crc kubenswrapper[4972]: I1121 11:55:28.759753 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:55:28 crc kubenswrapper[4972]: E1121 11:55:28.760859 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:55:40 crc kubenswrapper[4972]: I1121 11:55:40.759763 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:55:40 crc kubenswrapper[4972]: E1121 11:55:40.760693 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:55:52 crc kubenswrapper[4972]: I1121 11:55:52.759496 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:55:52 crc kubenswrapper[4972]: E1121 11:55:52.760301 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 11:56:02 crc kubenswrapper[4972]: I1121 11:56:02.457618 4972 generic.go:334] "Generic (PLEG): container finished" podID="1f21798f-b3ad-4a9f-abba-62a3da5ce59a" containerID="1c6b35d02ee3427bc4ea15457ee294109a27bb95c0158a239331188b2190dbff" exitCode=0 Nov 21 11:56:02 crc kubenswrapper[4972]: I1121 11:56:02.457892 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-dmptj" event={"ID":"1f21798f-b3ad-4a9f-abba-62a3da5ce59a","Type":"ContainerDied","Data":"1c6b35d02ee3427bc4ea15457ee294109a27bb95c0158a239331188b2190dbff"} Nov 21 11:56:03 crc kubenswrapper[4972]: I1121 11:56:03.759119 
4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.007077 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.096227 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-ssh-key\") pod \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.096281 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-ceph\") pod \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.096326 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-libvirt-secret-0\") pod \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.096392 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-inventory\") pod \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.096473 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-libvirt-combined-ca-bundle\") pod \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.096515 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmt89\" (UniqueName: \"kubernetes.io/projected/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-kube-api-access-vmt89\") pod \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\" (UID: \"1f21798f-b3ad-4a9f-abba-62a3da5ce59a\") " Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.102821 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1f21798f-b3ad-4a9f-abba-62a3da5ce59a" (UID: "1f21798f-b3ad-4a9f-abba-62a3da5ce59a"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.103403 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-kube-api-access-vmt89" (OuterVolumeSpecName: "kube-api-access-vmt89") pod "1f21798f-b3ad-4a9f-abba-62a3da5ce59a" (UID: "1f21798f-b3ad-4a9f-abba-62a3da5ce59a"). InnerVolumeSpecName "kube-api-access-vmt89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.104505 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-ceph" (OuterVolumeSpecName: "ceph") pod "1f21798f-b3ad-4a9f-abba-62a3da5ce59a" (UID: "1f21798f-b3ad-4a9f-abba-62a3da5ce59a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.129371 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "1f21798f-b3ad-4a9f-abba-62a3da5ce59a" (UID: "1f21798f-b3ad-4a9f-abba-62a3da5ce59a"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.136121 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-inventory" (OuterVolumeSpecName: "inventory") pod "1f21798f-b3ad-4a9f-abba-62a3da5ce59a" (UID: "1f21798f-b3ad-4a9f-abba-62a3da5ce59a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.143744 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1f21798f-b3ad-4a9f-abba-62a3da5ce59a" (UID: "1f21798f-b3ad-4a9f-abba-62a3da5ce59a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.199115 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmt89\" (UniqueName: \"kubernetes.io/projected/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-kube-api-access-vmt89\") on node \"crc\" DevicePath \"\"" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.199668 4972 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.199730 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.199785 4972 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.199852 4972 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.199921 4972 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f21798f-b3ad-4a9f-abba-62a3da5ce59a-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.483734 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-dmptj" 
event={"ID":"1f21798f-b3ad-4a9f-abba-62a3da5ce59a","Type":"ContainerDied","Data":"8e0ad7bba7f789616800a790b606279dc5376a607579e24ea407446183e4a7ad"} Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.483766 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-dmptj" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.483790 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e0ad7bba7f789616800a790b606279dc5376a607579e24ea407446183e4a7ad" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.608180 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-ddt9q"] Nov 21 11:56:04 crc kubenswrapper[4972]: E1121 11:56:04.608865 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51742387-31c1-4886-9d16-56795b595d75" containerName="extract-content" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.608891 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="51742387-31c1-4886-9d16-56795b595d75" containerName="extract-content" Nov 21 11:56:04 crc kubenswrapper[4972]: E1121 11:56:04.608928 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51742387-31c1-4886-9d16-56795b595d75" containerName="registry-server" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.608937 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="51742387-31c1-4886-9d16-56795b595d75" containerName="registry-server" Nov 21 11:56:04 crc kubenswrapper[4972]: E1121 11:56:04.608965 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f21798f-b3ad-4a9f-abba-62a3da5ce59a" containerName="libvirt-openstack-openstack-cell1" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.608974 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f21798f-b3ad-4a9f-abba-62a3da5ce59a" containerName="libvirt-openstack-openstack-cell1" Nov 21 11:56:04 crc kubenswrapper[4972]: E1121 11:56:04.608995 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51742387-31c1-4886-9d16-56795b595d75" containerName="extract-utilities" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.609004 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="51742387-31c1-4886-9d16-56795b595d75" containerName="extract-utilities" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.609264 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f21798f-b3ad-4a9f-abba-62a3da5ce59a" containerName="libvirt-openstack-openstack-cell1" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.609313 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="51742387-31c1-4886-9d16-56795b595d75" containerName="registry-server" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.610462 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.613570 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.614003 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.614435 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.615262 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.615436 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.615710 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.616555 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g4l5l" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.619568 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-ddt9q"] Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.708533 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.708899 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.709012 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-splt7\" (UniqueName: \"kubernetes.io/projected/78198d69-4fb2-403d-8efb-8fe435b9351b-kube-api-access-splt7\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.709130 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.709269 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-migration-ssh-key-0\") pod 
\"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.709503 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.709586 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-inventory\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.709639 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-ceph\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.709676 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.709760 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.709950 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.811298 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.811351 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-splt7\" (UniqueName: \"kubernetes.io/projected/78198d69-4fb2-403d-8efb-8fe435b9351b-kube-api-access-splt7\") pod 
\"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.811418 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.811490 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.811513 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.811534 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-inventory\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.811552 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-ceph\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.811569 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.811605 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.811635 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " 
pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.811669 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.813436 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.814024 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.816463 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.816612 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.816847 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.819108 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-inventory\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.819186 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.819373 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.819952 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-ceph\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.821767 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.834789 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-splt7\" (UniqueName: \"kubernetes.io/projected/78198d69-4fb2-403d-8efb-8fe435b9351b-kube-api-access-splt7\") pod \"nova-cell1-openstack-openstack-cell1-ddt9q\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:04 crc kubenswrapper[4972]: I1121 11:56:04.975526 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:56:05 crc kubenswrapper[4972]: I1121 11:56:05.496774 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"0b4dc4317cd5fa834bd0d58d4284e0b65ba94dc083a5e7c02437c6bea6070ec7"} Nov 21 11:56:06 crc kubenswrapper[4972]: I1121 11:56:06.254679 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-ddt9q"] Nov 21 11:56:06 crc kubenswrapper[4972]: W1121 11:56:06.271715 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78198d69_4fb2_403d_8efb_8fe435b9351b.slice/crio-44bf7dfc3be846f6e0a25f7fca5d607e668c6cfa1fd4f654a50e06baee666799 WatchSource:0}: Error finding container 44bf7dfc3be846f6e0a25f7fca5d607e668c6cfa1fd4f654a50e06baee666799: Status 404 returned error can't find the container with id 44bf7dfc3be846f6e0a25f7fca5d607e668c6cfa1fd4f654a50e06baee666799 Nov 21 11:56:06 crc kubenswrapper[4972]: I1121 11:56:06.274267 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 11:56:06 crc kubenswrapper[4972]: I1121 11:56:06.508560 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" event={"ID":"78198d69-4fb2-403d-8efb-8fe435b9351b","Type":"ContainerStarted","Data":"44bf7dfc3be846f6e0a25f7fca5d607e668c6cfa1fd4f654a50e06baee666799"} Nov 21 11:56:09 crc kubenswrapper[4972]: I1121 11:56:09.551114 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" 
event={"ID":"78198d69-4fb2-403d-8efb-8fe435b9351b","Type":"ContainerStarted","Data":"15a55e01be61e9c6a021c0e6e5efc3638447f117df1ca268d028ab3a4546bf1e"} Nov 21 11:56:09 crc kubenswrapper[4972]: I1121 11:56:09.577929 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" podStartSLOduration=3.055677555 podStartE2EDuration="5.577903986s" podCreationTimestamp="2025-11-21 11:56:04 +0000 UTC" firstStartedPulling="2025-11-21 11:56:06.2740542 +0000 UTC m=+8111.383196698" lastFinishedPulling="2025-11-21 11:56:08.796280631 +0000 UTC m=+8113.905423129" observedRunningTime="2025-11-21 11:56:09.567285197 +0000 UTC m=+8114.676427705" watchObservedRunningTime="2025-11-21 11:56:09.577903986 +0000 UTC m=+8114.687046484" Nov 21 11:56:18 crc kubenswrapper[4972]: I1121 11:56:18.782610 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-df9wz"] Nov 21 11:56:18 crc kubenswrapper[4972]: I1121 11:56:18.785792 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-df9wz" Nov 21 11:56:18 crc kubenswrapper[4972]: I1121 11:56:18.795616 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-df9wz"] Nov 21 11:56:18 crc kubenswrapper[4972]: I1121 11:56:18.874924 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f42fb29-6d54-47ad-b3cc-39e28c4b7336-catalog-content\") pod \"community-operators-df9wz\" (UID: \"1f42fb29-6d54-47ad-b3cc-39e28c4b7336\") " pod="openshift-marketplace/community-operators-df9wz" Nov 21 11:56:18 crc kubenswrapper[4972]: I1121 11:56:18.875059 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f42fb29-6d54-47ad-b3cc-39e28c4b7336-utilities\") pod \"community-operators-df9wz\" (UID: \"1f42fb29-6d54-47ad-b3cc-39e28c4b7336\") " pod="openshift-marketplace/community-operators-df9wz" Nov 21 11:56:18 crc kubenswrapper[4972]: I1121 11:56:18.875255 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xc9x\" (UniqueName: \"kubernetes.io/projected/1f42fb29-6d54-47ad-b3cc-39e28c4b7336-kube-api-access-2xc9x\") pod \"community-operators-df9wz\" (UID: \"1f42fb29-6d54-47ad-b3cc-39e28c4b7336\") " pod="openshift-marketplace/community-operators-df9wz" Nov 21 11:56:18 crc kubenswrapper[4972]: I1121 11:56:18.977110 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f42fb29-6d54-47ad-b3cc-39e28c4b7336-utilities\") pod \"community-operators-df9wz\" (UID: \"1f42fb29-6d54-47ad-b3cc-39e28c4b7336\") " pod="openshift-marketplace/community-operators-df9wz" Nov 21 11:56:18 crc kubenswrapper[4972]: I1121 11:56:18.977318 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xc9x\" (UniqueName: \"kubernetes.io/projected/1f42fb29-6d54-47ad-b3cc-39e28c4b7336-kube-api-access-2xc9x\") pod \"community-operators-df9wz\" (UID: \"1f42fb29-6d54-47ad-b3cc-39e28c4b7336\") " pod="openshift-marketplace/community-operators-df9wz" Nov 21 11:56:18 crc kubenswrapper[4972]: I1121 11:56:18.977460 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1f42fb29-6d54-47ad-b3cc-39e28c4b7336-catalog-content\") pod \"community-operators-df9wz\" (UID: \"1f42fb29-6d54-47ad-b3cc-39e28c4b7336\") " pod="openshift-marketplace/community-operators-df9wz" Nov 21 11:56:18 crc kubenswrapper[4972]: I1121 11:56:18.977624 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f42fb29-6d54-47ad-b3cc-39e28c4b7336-utilities\") pod \"community-operators-df9wz\" (UID: \"1f42fb29-6d54-47ad-b3cc-39e28c4b7336\") " pod="openshift-marketplace/community-operators-df9wz" Nov 21 11:56:18 crc kubenswrapper[4972]: I1121 11:56:18.977841 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f42fb29-6d54-47ad-b3cc-39e28c4b7336-catalog-content\") pod \"community-operators-df9wz\" (UID: \"1f42fb29-6d54-47ad-b3cc-39e28c4b7336\") " pod="openshift-marketplace/community-operators-df9wz" Nov 21 11:56:19 crc kubenswrapper[4972]: I1121 11:56:19.003777 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xc9x\" (UniqueName: \"kubernetes.io/projected/1f42fb29-6d54-47ad-b3cc-39e28c4b7336-kube-api-access-2xc9x\") pod \"community-operators-df9wz\" (UID: \"1f42fb29-6d54-47ad-b3cc-39e28c4b7336\") " pod="openshift-marketplace/community-operators-df9wz" Nov 21 11:56:19 crc kubenswrapper[4972]: I1121 11:56:19.109963 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-df9wz" Nov 21 11:56:19 crc kubenswrapper[4972]: I1121 11:56:19.653890 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-df9wz"] Nov 21 11:56:20 crc kubenswrapper[4972]: I1121 11:56:20.667430 4972 generic.go:334] "Generic (PLEG): container finished" podID="1f42fb29-6d54-47ad-b3cc-39e28c4b7336" containerID="26f155a827c355c9ad03a46a69c50c9a503b119edbe7bc5bbb332c8db7785307" exitCode=0 Nov 21 11:56:20 crc kubenswrapper[4972]: I1121 11:56:20.667499 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-df9wz" event={"ID":"1f42fb29-6d54-47ad-b3cc-39e28c4b7336","Type":"ContainerDied","Data":"26f155a827c355c9ad03a46a69c50c9a503b119edbe7bc5bbb332c8db7785307"} Nov 21 11:56:20 crc kubenswrapper[4972]: I1121 11:56:20.667786 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-df9wz" event={"ID":"1f42fb29-6d54-47ad-b3cc-39e28c4b7336","Type":"ContainerStarted","Data":"8b08751129434a43bdc84b94c11d2ed3841cf4198b598b2a2ab1116be59f19da"} Nov 21 11:56:23 crc kubenswrapper[4972]: I1121 11:56:23.703041 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-df9wz" event={"ID":"1f42fb29-6d54-47ad-b3cc-39e28c4b7336","Type":"ContainerStarted","Data":"71288eed78e094b8fca3ea433b941c2011b695c8b3671274e97e08ddb02e45bc"} Nov 21 11:56:31 crc kubenswrapper[4972]: I1121 11:56:31.816185 4972 generic.go:334] "Generic (PLEG): container finished" podID="1f42fb29-6d54-47ad-b3cc-39e28c4b7336" containerID="71288eed78e094b8fca3ea433b941c2011b695c8b3671274e97e08ddb02e45bc" exitCode=0 Nov 21 11:56:31 crc kubenswrapper[4972]: I1121 11:56:31.816271 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-df9wz" event={"ID":"1f42fb29-6d54-47ad-b3cc-39e28c4b7336","Type":"ContainerDied","Data":"71288eed78e094b8fca3ea433b941c2011b695c8b3671274e97e08ddb02e45bc"} 
Nov 21 11:56:33 crc kubenswrapper[4972]: I1121 11:56:33.844905 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-df9wz" event={"ID":"1f42fb29-6d54-47ad-b3cc-39e28c4b7336","Type":"ContainerStarted","Data":"cfa0df28caa5d635d7603cf29d7d86b509713c248a909db085a630136ed0641a"} Nov 21 11:56:33 crc kubenswrapper[4972]: I1121 11:56:33.872679 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-df9wz" podStartSLOduration=3.147572615 podStartE2EDuration="15.872658931s" podCreationTimestamp="2025-11-21 11:56:18 +0000 UTC" firstStartedPulling="2025-11-21 11:56:20.669320782 +0000 UTC m=+8125.778463280" lastFinishedPulling="2025-11-21 11:56:33.394407078 +0000 UTC m=+8138.503549596" observedRunningTime="2025-11-21 11:56:33.866222352 +0000 UTC m=+8138.975364870" watchObservedRunningTime="2025-11-21 11:56:33.872658931 +0000 UTC m=+8138.981801439" Nov 21 11:56:39 crc kubenswrapper[4972]: I1121 11:56:39.070342 4972 scope.go:117] "RemoveContainer" containerID="1c9f2fa54ce78d3a340a66b06a61edf78173a7fb45d68823cee179aba3db727f" Nov 21 11:56:39 crc kubenswrapper[4972]: I1121 11:56:39.101187 4972 scope.go:117] "RemoveContainer" containerID="35b495590ea75c3c8e6699e33a867717d1eaceda772883b2d8c8246021b5fcd7" Nov 21 11:56:39 crc kubenswrapper[4972]: I1121 11:56:39.111063 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-df9wz" Nov 21 11:56:39 crc kubenswrapper[4972]: I1121 11:56:39.111107 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-df9wz" Nov 21 11:56:39 crc kubenswrapper[4972]: I1121 11:56:39.140771 4972 scope.go:117] "RemoveContainer" containerID="efb655f701faf8808499ab3634e1e7514aa27592839bd094b77a825238e502af" Nov 21 11:56:39 crc kubenswrapper[4972]: I1121 11:56:39.167223 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-df9wz" Nov 21 11:56:40 crc kubenswrapper[4972]: I1121 11:56:40.022314 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-df9wz" Nov 21 11:56:40 crc kubenswrapper[4972]: I1121 11:56:40.074288 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-df9wz"] Nov 21 11:56:41 crc kubenswrapper[4972]: I1121 11:56:41.963879 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-df9wz" podUID="1f42fb29-6d54-47ad-b3cc-39e28c4b7336" containerName="registry-server" containerID="cri-o://cfa0df28caa5d635d7603cf29d7d86b509713c248a909db085a630136ed0641a" gracePeriod=2 Nov 21 11:56:42 crc kubenswrapper[4972]: I1121 11:56:42.981995 4972 generic.go:334] "Generic (PLEG): container finished" podID="1f42fb29-6d54-47ad-b3cc-39e28c4b7336" containerID="cfa0df28caa5d635d7603cf29d7d86b509713c248a909db085a630136ed0641a" exitCode=0 Nov 21 11:56:42 crc kubenswrapper[4972]: I1121 11:56:42.982073 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-df9wz" event={"ID":"1f42fb29-6d54-47ad-b3cc-39e28c4b7336","Type":"ContainerDied","Data":"cfa0df28caa5d635d7603cf29d7d86b509713c248a909db085a630136ed0641a"} Nov 21 11:56:43 crc kubenswrapper[4972]: I1121 11:56:43.143900 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-df9wz" Nov 21 11:56:43 crc kubenswrapper[4972]: I1121 11:56:43.222847 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xc9x\" (UniqueName: \"kubernetes.io/projected/1f42fb29-6d54-47ad-b3cc-39e28c4b7336-kube-api-access-2xc9x\") pod \"1f42fb29-6d54-47ad-b3cc-39e28c4b7336\" (UID: \"1f42fb29-6d54-47ad-b3cc-39e28c4b7336\") " Nov 21 11:56:43 crc kubenswrapper[4972]: I1121 11:56:43.223128 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f42fb29-6d54-47ad-b3cc-39e28c4b7336-utilities\") pod \"1f42fb29-6d54-47ad-b3cc-39e28c4b7336\" (UID: \"1f42fb29-6d54-47ad-b3cc-39e28c4b7336\") " Nov 21 11:56:43 crc kubenswrapper[4972]: I1121 11:56:43.223301 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f42fb29-6d54-47ad-b3cc-39e28c4b7336-catalog-content\") pod \"1f42fb29-6d54-47ad-b3cc-39e28c4b7336\" (UID: \"1f42fb29-6d54-47ad-b3cc-39e28c4b7336\") " Nov 21 11:56:43 crc kubenswrapper[4972]: I1121 11:56:43.226633 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f42fb29-6d54-47ad-b3cc-39e28c4b7336-utilities" (OuterVolumeSpecName: "utilities") pod "1f42fb29-6d54-47ad-b3cc-39e28c4b7336" (UID: "1f42fb29-6d54-47ad-b3cc-39e28c4b7336"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:56:43 crc kubenswrapper[4972]: I1121 11:56:43.250769 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f42fb29-6d54-47ad-b3cc-39e28c4b7336-kube-api-access-2xc9x" (OuterVolumeSpecName: "kube-api-access-2xc9x") pod "1f42fb29-6d54-47ad-b3cc-39e28c4b7336" (UID: "1f42fb29-6d54-47ad-b3cc-39e28c4b7336"). InnerVolumeSpecName "kube-api-access-2xc9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:56:43 crc kubenswrapper[4972]: I1121 11:56:43.282890 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f42fb29-6d54-47ad-b3cc-39e28c4b7336-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f42fb29-6d54-47ad-b3cc-39e28c4b7336" (UID: "1f42fb29-6d54-47ad-b3cc-39e28c4b7336"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:56:43 crc kubenswrapper[4972]: I1121 11:56:43.325686 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xc9x\" (UniqueName: \"kubernetes.io/projected/1f42fb29-6d54-47ad-b3cc-39e28c4b7336-kube-api-access-2xc9x\") on node \"crc\" DevicePath \"\"" Nov 21 11:56:43 crc kubenswrapper[4972]: I1121 11:56:43.325712 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f42fb29-6d54-47ad-b3cc-39e28c4b7336-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 11:56:43 crc kubenswrapper[4972]: I1121 11:56:43.325722 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f42fb29-6d54-47ad-b3cc-39e28c4b7336-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 11:56:43 crc kubenswrapper[4972]: I1121 11:56:43.994001 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-df9wz" event={"ID":"1f42fb29-6d54-47ad-b3cc-39e28c4b7336","Type":"ContainerDied","Data":"8b08751129434a43bdc84b94c11d2ed3841cf4198b598b2a2ab1116be59f19da"} Nov 21 11:56:43 crc kubenswrapper[4972]: I1121 11:56:43.994059 4972 scope.go:117] "RemoveContainer" containerID="cfa0df28caa5d635d7603cf29d7d86b509713c248a909db085a630136ed0641a" Nov 21 11:56:43 crc kubenswrapper[4972]: I1121 11:56:43.994142 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-df9wz" Nov 21 11:56:44 crc kubenswrapper[4972]: I1121 11:56:44.019968 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-df9wz"] Nov 21 11:56:44 crc kubenswrapper[4972]: I1121 11:56:44.025779 4972 scope.go:117] "RemoveContainer" containerID="71288eed78e094b8fca3ea433b941c2011b695c8b3671274e97e08ddb02e45bc" Nov 21 11:56:44 crc kubenswrapper[4972]: I1121 11:56:44.030731 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-df9wz"] Nov 21 11:56:44 crc kubenswrapper[4972]: I1121 11:56:44.047011 4972 scope.go:117] "RemoveContainer" containerID="26f155a827c355c9ad03a46a69c50c9a503b119edbe7bc5bbb332c8db7785307" Nov 21 11:56:45 crc kubenswrapper[4972]: I1121 11:56:45.779323 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f42fb29-6d54-47ad-b3cc-39e28c4b7336" path="/var/lib/kubelet/pods/1f42fb29-6d54-47ad-b3cc-39e28c4b7336/volumes" Nov 21 11:58:26 crc kubenswrapper[4972]: I1121 11:58:26.179281 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:58:26 crc kubenswrapper[4972]: I1121 11:58:26.179959 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:58:56 crc kubenswrapper[4972]: I1121 11:58:56.178702 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:58:56 crc kubenswrapper[4972]: I1121 11:58:56.179370 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:59:24 crc kubenswrapper[4972]: I1121 11:59:24.945515 4972 generic.go:334] "Generic (PLEG): container finished" podID="78198d69-4fb2-403d-8efb-8fe435b9351b" containerID="15a55e01be61e9c6a021c0e6e5efc3638447f117df1ca268d028ab3a4546bf1e" exitCode=0 Nov 21 11:59:24 crc kubenswrapper[4972]: I1121 11:59:24.945625 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" event={"ID":"78198d69-4fb2-403d-8efb-8fe435b9351b","Type":"ContainerDied","Data":"15a55e01be61e9c6a021c0e6e5efc3638447f117df1ca268d028ab3a4546bf1e"} Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.178798 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.179173 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.179224 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.180184 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b4dc4317cd5fa834bd0d58d4284e0b65ba94dc083a5e7c02437c6bea6070ec7"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.180251 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://0b4dc4317cd5fa834bd0d58d4284e0b65ba94dc083a5e7c02437c6bea6070ec7" gracePeriod=600 Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.512611 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.649974 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cell1-combined-ca-bundle\") pod \"78198d69-4fb2-403d-8efb-8fe435b9351b\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.650385 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-ceph\") pod \"78198d69-4fb2-403d-8efb-8fe435b9351b\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.650427 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cells-global-config-1\") pod \"78198d69-4fb2-403d-8efb-8fe435b9351b\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.650529 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-splt7\" (UniqueName: \"kubernetes.io/projected/78198d69-4fb2-403d-8efb-8fe435b9351b-kube-api-access-splt7\") pod \"78198d69-4fb2-403d-8efb-8fe435b9351b\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.650668 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-inventory\") pod \"78198d69-4fb2-403d-8efb-8fe435b9351b\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.650702 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-migration-ssh-key-1\") pod \"78198d69-4fb2-403d-8efb-8fe435b9351b\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.650788 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-migration-ssh-key-0\") pod \"78198d69-4fb2-403d-8efb-8fe435b9351b\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.650980 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-ssh-key\") pod \"78198d69-4fb2-403d-8efb-8fe435b9351b\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.651078 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cell1-compute-config-0\") pod \"78198d69-4fb2-403d-8efb-8fe435b9351b\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.651134 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cell1-compute-config-1\") pod \"78198d69-4fb2-403d-8efb-8fe435b9351b\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.651242 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cells-global-config-0\") pod \"78198d69-4fb2-403d-8efb-8fe435b9351b\" (UID: \"78198d69-4fb2-403d-8efb-8fe435b9351b\") " Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.657997 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78198d69-4fb2-403d-8efb-8fe435b9351b-kube-api-access-splt7" (OuterVolumeSpecName: "kube-api-access-splt7") pod "78198d69-4fb2-403d-8efb-8fe435b9351b" (UID: "78198d69-4fb2-403d-8efb-8fe435b9351b"). InnerVolumeSpecName "kube-api-access-splt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.658520 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "78198d69-4fb2-403d-8efb-8fe435b9351b" (UID: "78198d69-4fb2-403d-8efb-8fe435b9351b"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.683830 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-ceph" (OuterVolumeSpecName: "ceph") pod "78198d69-4fb2-403d-8efb-8fe435b9351b" (UID: "78198d69-4fb2-403d-8efb-8fe435b9351b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.690362 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "78198d69-4fb2-403d-8efb-8fe435b9351b" (UID: "78198d69-4fb2-403d-8efb-8fe435b9351b"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.690796 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "78198d69-4fb2-403d-8efb-8fe435b9351b" (UID: "78198d69-4fb2-403d-8efb-8fe435b9351b"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.692826 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "78198d69-4fb2-403d-8efb-8fe435b9351b" (UID: "78198d69-4fb2-403d-8efb-8fe435b9351b"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.694320 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-inventory" (OuterVolumeSpecName: "inventory") pod "78198d69-4fb2-403d-8efb-8fe435b9351b" (UID: "78198d69-4fb2-403d-8efb-8fe435b9351b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.702840 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "78198d69-4fb2-403d-8efb-8fe435b9351b" (UID: "78198d69-4fb2-403d-8efb-8fe435b9351b"). InnerVolumeSpecName "nova-cells-global-config-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.705930 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "78198d69-4fb2-403d-8efb-8fe435b9351b" (UID: "78198d69-4fb2-403d-8efb-8fe435b9351b"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.708036 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "78198d69-4fb2-403d-8efb-8fe435b9351b" (UID: "78198d69-4fb2-403d-8efb-8fe435b9351b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.715575 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "78198d69-4fb2-403d-8efb-8fe435b9351b" (UID: "78198d69-4fb2-403d-8efb-8fe435b9351b"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.754448 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.754508 4972 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\"" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.754521 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-splt7\" (UniqueName: \"kubernetes.io/projected/78198d69-4fb2-403d-8efb-8fe435b9351b-kube-api-access-splt7\") on node \"crc\" DevicePath \"\"" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.754530 4972 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.754540 4972 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.754549 4972 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.754560 4972 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.754569 4972 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.754578 4972 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.754587 4972 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.754599 4972 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78198d69-4fb2-403d-8efb-8fe435b9351b-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.969676 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" event={"ID":"78198d69-4fb2-403d-8efb-8fe435b9351b","Type":"ContainerDied","Data":"44bf7dfc3be846f6e0a25f7fca5d607e668c6cfa1fd4f654a50e06baee666799"} Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.969732 4972 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="44bf7dfc3be846f6e0a25f7fca5d607e668c6cfa1fd4f654a50e06baee666799" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.969701 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-ddt9q" Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.973491 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="0b4dc4317cd5fa834bd0d58d4284e0b65ba94dc083a5e7c02437c6bea6070ec7" exitCode=0 Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.973562 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"0b4dc4317cd5fa834bd0d58d4284e0b65ba94dc083a5e7c02437c6bea6070ec7"} Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.973613 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b"} Nov 21 11:59:26 crc kubenswrapper[4972]: I1121 11:59:26.973643 4972 scope.go:117] "RemoveContainer" containerID="47203bf1d0143052b8f05cfbf903794b473659ed5f076fbd3a3344233cd888e0" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.094492 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-j6j8b"] Nov 21 11:59:27 crc kubenswrapper[4972]: E1121 11:59:27.095301 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f42fb29-6d54-47ad-b3cc-39e28c4b7336" containerName="extract-content" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.095334 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f42fb29-6d54-47ad-b3cc-39e28c4b7336" containerName="extract-content" Nov 21 11:59:27 crc kubenswrapper[4972]: E1121 11:59:27.095368 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f42fb29-6d54-47ad-b3cc-39e28c4b7336" containerName="registry-server" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.095381 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f42fb29-6d54-47ad-b3cc-39e28c4b7336" containerName="registry-server" Nov 21 11:59:27 crc kubenswrapper[4972]: E1121 11:59:27.095409 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f42fb29-6d54-47ad-b3cc-39e28c4b7336" containerName="extract-utilities" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.095421 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f42fb29-6d54-47ad-b3cc-39e28c4b7336" containerName="extract-utilities" Nov 21 11:59:27 crc kubenswrapper[4972]: E1121 11:59:27.095450 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78198d69-4fb2-403d-8efb-8fe435b9351b" containerName="nova-cell1-openstack-openstack-cell1" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.095463 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="78198d69-4fb2-403d-8efb-8fe435b9351b" containerName="nova-cell1-openstack-openstack-cell1" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.095815 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f42fb29-6d54-47ad-b3cc-39e28c4b7336" containerName="registry-server" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.095875 4972 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="78198d69-4fb2-403d-8efb-8fe435b9351b" containerName="nova-cell1-openstack-openstack-cell1" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.097164 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.099651 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g4l5l" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.101353 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.101766 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.102007 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.105647 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-j6j8b"] Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.106362 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.266100 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ssh-key\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.266432 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-845kz\" (UniqueName: \"kubernetes.io/projected/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-kube-api-access-845kz\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.266494 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.266571 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-inventory\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.266661 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceph\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 
11:59:27.266679 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.266711 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.266767 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.368487 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-inventory\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.368846 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.368948 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceph\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.369057 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.369223 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.369413 4972 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ssh-key\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.369946 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-845kz\" (UniqueName: \"kubernetes.io/projected/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-kube-api-access-845kz\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.373366 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.375043 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ssh-key\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.375142 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.375344 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-inventory\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.375674 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.376120 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceph\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.376913 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceilometer-compute-config-data-2\") pod 
\"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.377989 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.386508 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-845kz\" (UniqueName: \"kubernetes.io/projected/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-kube-api-access-845kz\") pod \"telemetry-openstack-openstack-cell1-j6j8b\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:27 crc kubenswrapper[4972]: I1121 11:59:27.425713 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 11:59:28 crc kubenswrapper[4972]: I1121 11:59:28.444717 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-j6j8b"] Nov 21 11:59:29 crc kubenswrapper[4972]: I1121 11:59:29.438589 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" event={"ID":"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f","Type":"ContainerStarted","Data":"966d01453a331353b6d7133f21d27ec2276313f3e8aeaa52330f5d022cceaff6"} Nov 21 11:59:29 crc kubenswrapper[4972]: I1121 11:59:29.439471 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" event={"ID":"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f","Type":"ContainerStarted","Data":"f659a4d86fc0cfaefe4fd9c5797f555b3dbbe497d1dd34cb1d063e428004e9b2"} Nov 21 11:59:29 crc kubenswrapper[4972]: I1121 11:59:29.471194 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" podStartSLOduration=1.940205954 podStartE2EDuration="2.47117266s" podCreationTimestamp="2025-11-21 11:59:27 +0000 UTC" firstStartedPulling="2025-11-21 11:59:28.463039309 +0000 UTC m=+8313.572181817" lastFinishedPulling="2025-11-21 11:59:28.994005985 +0000 UTC m=+8314.103148523" observedRunningTime="2025-11-21 11:59:29.465930152 +0000 UTC m=+8314.575072680" watchObservedRunningTime="2025-11-21 11:59:29.47117266 +0000 UTC m=+8314.580315158" Nov 21 11:59:33 crc kubenswrapper[4972]: I1121 11:59:33.884522 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k9k5h"] Nov 21 11:59:33 crc kubenswrapper[4972]: I1121 11:59:33.888602 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9k5h" Nov 21 11:59:33 crc kubenswrapper[4972]: I1121 11:59:33.901606 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9k5h"] Nov 21 11:59:34 crc kubenswrapper[4972]: I1121 11:59:34.054791 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b433692-7d08-4274-91ac-81c624aef609-catalog-content\") pod \"redhat-operators-k9k5h\" (UID: \"0b433692-7d08-4274-91ac-81c624aef609\") " pod="openshift-marketplace/redhat-operators-k9k5h" Nov 21 11:59:34 crc kubenswrapper[4972]: I1121 11:59:34.055317 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b433692-7d08-4274-91ac-81c624aef609-utilities\") pod \"redhat-operators-k9k5h\" (UID: \"0b433692-7d08-4274-91ac-81c624aef609\") " pod="openshift-marketplace/redhat-operators-k9k5h" Nov 21 11:59:34 crc kubenswrapper[4972]: I1121 11:59:34.055398 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vsh4\" (UniqueName: \"kubernetes.io/projected/0b433692-7d08-4274-91ac-81c624aef609-kube-api-access-2vsh4\") pod \"redhat-operators-k9k5h\" (UID: \"0b433692-7d08-4274-91ac-81c624aef609\") " pod="openshift-marketplace/redhat-operators-k9k5h" Nov 21 11:59:34 crc kubenswrapper[4972]: I1121 11:59:34.157570 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b433692-7d08-4274-91ac-81c624aef609-catalog-content\") pod \"redhat-operators-k9k5h\" (UID: \"0b433692-7d08-4274-91ac-81c624aef609\") " pod="openshift-marketplace/redhat-operators-k9k5h" Nov 21 11:59:34 crc kubenswrapper[4972]: I1121 11:59:34.157647 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b433692-7d08-4274-91ac-81c624aef609-utilities\") pod \"redhat-operators-k9k5h\" (UID: \"0b433692-7d08-4274-91ac-81c624aef609\") " pod="openshift-marketplace/redhat-operators-k9k5h" Nov 21 11:59:34 crc kubenswrapper[4972]: I1121 11:59:34.157687 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vsh4\" (UniqueName: \"kubernetes.io/projected/0b433692-7d08-4274-91ac-81c624aef609-kube-api-access-2vsh4\") pod \"redhat-operators-k9k5h\" (UID: \"0b433692-7d08-4274-91ac-81c624aef609\") " pod="openshift-marketplace/redhat-operators-k9k5h" Nov 21 11:59:34 crc kubenswrapper[4972]: I1121 11:59:34.158122 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b433692-7d08-4274-91ac-81c624aef609-catalog-content\") pod \"redhat-operators-k9k5h\" (UID: \"0b433692-7d08-4274-91ac-81c624aef609\") " pod="openshift-marketplace/redhat-operators-k9k5h" Nov 21 11:59:34 crc kubenswrapper[4972]: I1121 11:59:34.158280 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b433692-7d08-4274-91ac-81c624aef609-utilities\") pod \"redhat-operators-k9k5h\" (UID: \"0b433692-7d08-4274-91ac-81c624aef609\") " pod="openshift-marketplace/redhat-operators-k9k5h" Nov 21 11:59:34 crc kubenswrapper[4972]: I1121 11:59:34.176647 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2vsh4\" (UniqueName: \"kubernetes.io/projected/0b433692-7d08-4274-91ac-81c624aef609-kube-api-access-2vsh4\") pod \"redhat-operators-k9k5h\" (UID: \"0b433692-7d08-4274-91ac-81c624aef609\") " pod="openshift-marketplace/redhat-operators-k9k5h" Nov 21 11:59:34 crc kubenswrapper[4972]: I1121 11:59:34.217698 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9k5h" Nov 21 11:59:34 crc kubenswrapper[4972]: I1121 11:59:34.810361 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9k5h"] Nov 21 11:59:35 crc kubenswrapper[4972]: I1121 11:59:35.501295 4972 generic.go:334] "Generic (PLEG): container finished" podID="0b433692-7d08-4274-91ac-81c624aef609" containerID="f47e8e0d551e77779f23da2951af78812d27a92b3128caa1761383988dbd156c" exitCode=0 Nov 21 11:59:35 crc kubenswrapper[4972]: I1121 11:59:35.501415 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9k5h" event={"ID":"0b433692-7d08-4274-91ac-81c624aef609","Type":"ContainerDied","Data":"f47e8e0d551e77779f23da2951af78812d27a92b3128caa1761383988dbd156c"} Nov 21 11:59:35 crc kubenswrapper[4972]: I1121 11:59:35.501644 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9k5h" event={"ID":"0b433692-7d08-4274-91ac-81c624aef609","Type":"ContainerStarted","Data":"be07bf2a30eada416c145489336d19df5398a823fc4b9c7a11d40ccc59da08b9"} Nov 21 11:59:37 crc kubenswrapper[4972]: I1121 11:59:37.527982 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9k5h" event={"ID":"0b433692-7d08-4274-91ac-81c624aef609","Type":"ContainerStarted","Data":"ff4989afcef2fb1beb95ccbf58827f162df01ea1104b4659b949f2da4fffea41"} Nov 21 11:59:44 crc kubenswrapper[4972]: I1121 11:59:44.615965 4972 generic.go:334] "Generic (PLEG): container finished" podID="0b433692-7d08-4274-91ac-81c624aef609" containerID="ff4989afcef2fb1beb95ccbf58827f162df01ea1104b4659b949f2da4fffea41" exitCode=0 Nov 21 11:59:44 crc kubenswrapper[4972]: I1121 11:59:44.616053 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9k5h" event={"ID":"0b433692-7d08-4274-91ac-81c624aef609","Type":"ContainerDied","Data":"ff4989afcef2fb1beb95ccbf58827f162df01ea1104b4659b949f2da4fffea41"} Nov 21 11:59:46 crc kubenswrapper[4972]: I1121 11:59:46.650388 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9k5h" event={"ID":"0b433692-7d08-4274-91ac-81c624aef609","Type":"ContainerStarted","Data":"db50aed40812f38bef7416a6dec85ded15dda13a212a3ce79a3bd6c3b4e1540e"} Nov 21 11:59:46 crc kubenswrapper[4972]: I1121 11:59:46.678047 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k9k5h" podStartSLOduration=3.120600318 podStartE2EDuration="13.678025617s" podCreationTimestamp="2025-11-21 11:59:33 +0000 UTC" firstStartedPulling="2025-11-21 11:59:35.503789748 +0000 UTC m=+8320.612932236" lastFinishedPulling="2025-11-21 11:59:46.061215017 +0000 UTC m=+8331.170357535" observedRunningTime="2025-11-21 11:59:46.671144626 +0000 UTC m=+8331.780287144" watchObservedRunningTime="2025-11-21 11:59:46.678025617 +0000 UTC m=+8331.787168125" Nov 21 11:59:54 crc kubenswrapper[4972]: I1121 11:59:54.218141 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k9k5h" Nov 21 
11:59:54 crc kubenswrapper[4972]: I1121 11:59:54.220626 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k9k5h" Nov 21 11:59:54 crc kubenswrapper[4972]: I1121 11:59:54.295707 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k9k5h" Nov 21 11:59:54 crc kubenswrapper[4972]: I1121 11:59:54.820627 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k9k5h" Nov 21 11:59:54 crc kubenswrapper[4972]: I1121 11:59:54.887357 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k9k5h"] Nov 21 11:59:56 crc kubenswrapper[4972]: I1121 11:59:56.766329 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k9k5h" podUID="0b433692-7d08-4274-91ac-81c624aef609" containerName="registry-server" containerID="cri-o://db50aed40812f38bef7416a6dec85ded15dda13a212a3ce79a3bd6c3b4e1540e" gracePeriod=2 Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.397107 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9k5h" Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.526920 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vsh4\" (UniqueName: \"kubernetes.io/projected/0b433692-7d08-4274-91ac-81c624aef609-kube-api-access-2vsh4\") pod \"0b433692-7d08-4274-91ac-81c624aef609\" (UID: \"0b433692-7d08-4274-91ac-81c624aef609\") " Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.527271 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b433692-7d08-4274-91ac-81c624aef609-utilities\") pod \"0b433692-7d08-4274-91ac-81c624aef609\" (UID: \"0b433692-7d08-4274-91ac-81c624aef609\") " Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.527453 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b433692-7d08-4274-91ac-81c624aef609-catalog-content\") pod \"0b433692-7d08-4274-91ac-81c624aef609\" (UID: \"0b433692-7d08-4274-91ac-81c624aef609\") " Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.528163 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b433692-7d08-4274-91ac-81c624aef609-utilities" (OuterVolumeSpecName: "utilities") pod "0b433692-7d08-4274-91ac-81c624aef609" (UID: "0b433692-7d08-4274-91ac-81c624aef609"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.528482 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b433692-7d08-4274-91ac-81c624aef609-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.542124 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b433692-7d08-4274-91ac-81c624aef609-kube-api-access-2vsh4" (OuterVolumeSpecName: "kube-api-access-2vsh4") pod "0b433692-7d08-4274-91ac-81c624aef609" (UID: "0b433692-7d08-4274-91ac-81c624aef609"). InnerVolumeSpecName "kube-api-access-2vsh4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.629187 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b433692-7d08-4274-91ac-81c624aef609-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b433692-7d08-4274-91ac-81c624aef609" (UID: "0b433692-7d08-4274-91ac-81c624aef609"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.629906 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b433692-7d08-4274-91ac-81c624aef609-catalog-content\") pod \"0b433692-7d08-4274-91ac-81c624aef609\" (UID: \"0b433692-7d08-4274-91ac-81c624aef609\") " Nov 21 11:59:57 crc kubenswrapper[4972]: W1121 11:59:57.630145 4972 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/0b433692-7d08-4274-91ac-81c624aef609/volumes/kubernetes.io~empty-dir/catalog-content Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.630222 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b433692-7d08-4274-91ac-81c624aef609-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b433692-7d08-4274-91ac-81c624aef609" (UID: "0b433692-7d08-4274-91ac-81c624aef609"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.630535 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b433692-7d08-4274-91ac-81c624aef609-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.630614 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vsh4\" (UniqueName: \"kubernetes.io/projected/0b433692-7d08-4274-91ac-81c624aef609-kube-api-access-2vsh4\") on node \"crc\" DevicePath \"\"" Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.783812 4972 generic.go:334] "Generic (PLEG): container finished" podID="0b433692-7d08-4274-91ac-81c624aef609" containerID="db50aed40812f38bef7416a6dec85ded15dda13a212a3ce79a3bd6c3b4e1540e" exitCode=0 Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.783946 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9k5h" event={"ID":"0b433692-7d08-4274-91ac-81c624aef609","Type":"ContainerDied","Data":"db50aed40812f38bef7416a6dec85ded15dda13a212a3ce79a3bd6c3b4e1540e"} Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.783982 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9k5h" event={"ID":"0b433692-7d08-4274-91ac-81c624aef609","Type":"ContainerDied","Data":"be07bf2a30eada416c145489336d19df5398a823fc4b9c7a11d40ccc59da08b9"} Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.783992 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9k5h" Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.784006 4972 scope.go:117] "RemoveContainer" containerID="db50aed40812f38bef7416a6dec85ded15dda13a212a3ce79a3bd6c3b4e1540e" Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.814401 4972 scope.go:117] "RemoveContainer" containerID="ff4989afcef2fb1beb95ccbf58827f162df01ea1104b4659b949f2da4fffea41" Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.850388 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k9k5h"] Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.859915 4972 scope.go:117] "RemoveContainer" containerID="f47e8e0d551e77779f23da2951af78812d27a92b3128caa1761383988dbd156c" Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.862334 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k9k5h"] Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.905093 4972 scope.go:117] "RemoveContainer" containerID="db50aed40812f38bef7416a6dec85ded15dda13a212a3ce79a3bd6c3b4e1540e" Nov 21 11:59:57 crc kubenswrapper[4972]: E1121 11:59:57.905553 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db50aed40812f38bef7416a6dec85ded15dda13a212a3ce79a3bd6c3b4e1540e\": container with ID starting with db50aed40812f38bef7416a6dec85ded15dda13a212a3ce79a3bd6c3b4e1540e not found: ID does not exist" containerID="db50aed40812f38bef7416a6dec85ded15dda13a212a3ce79a3bd6c3b4e1540e" Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.905603 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db50aed40812f38bef7416a6dec85ded15dda13a212a3ce79a3bd6c3b4e1540e"} err="failed to get container status \"db50aed40812f38bef7416a6dec85ded15dda13a212a3ce79a3bd6c3b4e1540e\": rpc error: code = NotFound desc = could not find container \"db50aed40812f38bef7416a6dec85ded15dda13a212a3ce79a3bd6c3b4e1540e\": container with ID starting with db50aed40812f38bef7416a6dec85ded15dda13a212a3ce79a3bd6c3b4e1540e not found: ID does not exist" Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.905630 4972 scope.go:117] "RemoveContainer" containerID="ff4989afcef2fb1beb95ccbf58827f162df01ea1104b4659b949f2da4fffea41" Nov 21 11:59:57 crc kubenswrapper[4972]: E1121 11:59:57.905914 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff4989afcef2fb1beb95ccbf58827f162df01ea1104b4659b949f2da4fffea41\": container with ID starting with ff4989afcef2fb1beb95ccbf58827f162df01ea1104b4659b949f2da4fffea41 not found: ID does not exist" containerID="ff4989afcef2fb1beb95ccbf58827f162df01ea1104b4659b949f2da4fffea41" Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.905948 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff4989afcef2fb1beb95ccbf58827f162df01ea1104b4659b949f2da4fffea41"} err="failed to get container status \"ff4989afcef2fb1beb95ccbf58827f162df01ea1104b4659b949f2da4fffea41\": rpc error: code = NotFound desc = could not find container \"ff4989afcef2fb1beb95ccbf58827f162df01ea1104b4659b949f2da4fffea41\": container with ID starting with ff4989afcef2fb1beb95ccbf58827f162df01ea1104b4659b949f2da4fffea41 not found: ID does not exist" Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.905972 4972 scope.go:117] "RemoveContainer" 
containerID="f47e8e0d551e77779f23da2951af78812d27a92b3128caa1761383988dbd156c" Nov 21 11:59:57 crc kubenswrapper[4972]: E1121 11:59:57.906163 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f47e8e0d551e77779f23da2951af78812d27a92b3128caa1761383988dbd156c\": container with ID starting with f47e8e0d551e77779f23da2951af78812d27a92b3128caa1761383988dbd156c not found: ID does not exist" containerID="f47e8e0d551e77779f23da2951af78812d27a92b3128caa1761383988dbd156c" Nov 21 11:59:57 crc kubenswrapper[4972]: I1121 11:59:57.906197 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f47e8e0d551e77779f23da2951af78812d27a92b3128caa1761383988dbd156c"} err="failed to get container status \"f47e8e0d551e77779f23da2951af78812d27a92b3128caa1761383988dbd156c\": rpc error: code = NotFound desc = could not find container \"f47e8e0d551e77779f23da2951af78812d27a92b3128caa1761383988dbd156c\": container with ID starting with f47e8e0d551e77779f23da2951af78812d27a92b3128caa1761383988dbd156c not found: ID does not exist" Nov 21 11:59:59 crc kubenswrapper[4972]: I1121 11:59:59.775905 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b433692-7d08-4274-91ac-81c624aef609" path="/var/lib/kubelet/pods/0b433692-7d08-4274-91ac-81c624aef609/volumes" Nov 21 12:00:00 crc kubenswrapper[4972]: I1121 12:00:00.171874 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395440-4rgs9"] Nov 21 12:00:00 crc kubenswrapper[4972]: E1121 12:00:00.173528 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b433692-7d08-4274-91ac-81c624aef609" containerName="registry-server" Nov 21 12:00:00 crc kubenswrapper[4972]: I1121 12:00:00.173633 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b433692-7d08-4274-91ac-81c624aef609" containerName="registry-server" Nov 21 12:00:00 crc kubenswrapper[4972]: E1121 12:00:00.173801 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b433692-7d08-4274-91ac-81c624aef609" containerName="extract-content" Nov 21 12:00:00 crc kubenswrapper[4972]: I1121 12:00:00.173982 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b433692-7d08-4274-91ac-81c624aef609" containerName="extract-content" Nov 21 12:00:00 crc kubenswrapper[4972]: E1121 12:00:00.174086 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b433692-7d08-4274-91ac-81c624aef609" containerName="extract-utilities" Nov 21 12:00:00 crc kubenswrapper[4972]: I1121 12:00:00.174156 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b433692-7d08-4274-91ac-81c624aef609" containerName="extract-utilities" Nov 21 12:00:00 crc kubenswrapper[4972]: I1121 12:00:00.174879 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b433692-7d08-4274-91ac-81c624aef609" containerName="registry-server" Nov 21 12:00:00 crc kubenswrapper[4972]: I1121 12:00:00.176290 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395440-4rgs9" Nov 21 12:00:00 crc kubenswrapper[4972]: I1121 12:00:00.178744 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 12:00:00 crc kubenswrapper[4972]: I1121 12:00:00.179745 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 12:00:00 crc kubenswrapper[4972]: I1121 12:00:00.191073 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395440-4rgs9"] Nov 21 12:00:00 crc kubenswrapper[4972]: I1121 12:00:00.298966 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38db668d-73e6-4233-a852-ec410b3b161f-config-volume\") pod \"collect-profiles-29395440-4rgs9\" (UID: \"38db668d-73e6-4233-a852-ec410b3b161f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395440-4rgs9" Nov 21 12:00:00 crc kubenswrapper[4972]: I1121 12:00:00.299308 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r7hl\" (UniqueName: \"kubernetes.io/projected/38db668d-73e6-4233-a852-ec410b3b161f-kube-api-access-7r7hl\") pod \"collect-profiles-29395440-4rgs9\" (UID: \"38db668d-73e6-4233-a852-ec410b3b161f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395440-4rgs9" Nov 21 12:00:00 crc kubenswrapper[4972]: I1121 12:00:00.299406 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38db668d-73e6-4233-a852-ec410b3b161f-secret-volume\") pod \"collect-profiles-29395440-4rgs9\" (UID: \"38db668d-73e6-4233-a852-ec410b3b161f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395440-4rgs9" Nov 21 12:00:00 crc kubenswrapper[4972]: I1121 12:00:00.401947 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38db668d-73e6-4233-a852-ec410b3b161f-config-volume\") pod \"collect-profiles-29395440-4rgs9\" (UID: \"38db668d-73e6-4233-a852-ec410b3b161f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395440-4rgs9" Nov 21 12:00:00 crc kubenswrapper[4972]: I1121 12:00:00.402232 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r7hl\" (UniqueName: \"kubernetes.io/projected/38db668d-73e6-4233-a852-ec410b3b161f-kube-api-access-7r7hl\") pod \"collect-profiles-29395440-4rgs9\" (UID: \"38db668d-73e6-4233-a852-ec410b3b161f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395440-4rgs9" Nov 21 12:00:00 crc kubenswrapper[4972]: I1121 12:00:00.402293 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38db668d-73e6-4233-a852-ec410b3b161f-secret-volume\") pod \"collect-profiles-29395440-4rgs9\" (UID: \"38db668d-73e6-4233-a852-ec410b3b161f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395440-4rgs9" Nov 21 12:00:00 crc kubenswrapper[4972]: I1121 12:00:00.403254 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38db668d-73e6-4233-a852-ec410b3b161f-config-volume\") pod 
\"collect-profiles-29395440-4rgs9\" (UID: \"38db668d-73e6-4233-a852-ec410b3b161f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395440-4rgs9" Nov 21 12:00:00 crc kubenswrapper[4972]: I1121 12:00:00.409240 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38db668d-73e6-4233-a852-ec410b3b161f-secret-volume\") pod \"collect-profiles-29395440-4rgs9\" (UID: \"38db668d-73e6-4233-a852-ec410b3b161f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395440-4rgs9" Nov 21 12:00:00 crc kubenswrapper[4972]: I1121 12:00:00.419901 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r7hl\" (UniqueName: \"kubernetes.io/projected/38db668d-73e6-4233-a852-ec410b3b161f-kube-api-access-7r7hl\") pod \"collect-profiles-29395440-4rgs9\" (UID: \"38db668d-73e6-4233-a852-ec410b3b161f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395440-4rgs9" Nov 21 12:00:00 crc kubenswrapper[4972]: I1121 12:00:00.508905 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395440-4rgs9" Nov 21 12:00:00 crc kubenswrapper[4972]: I1121 12:00:00.963957 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395440-4rgs9"] Nov 21 12:00:01 crc kubenswrapper[4972]: I1121 12:00:01.835469 4972 generic.go:334] "Generic (PLEG): container finished" podID="38db668d-73e6-4233-a852-ec410b3b161f" containerID="7b455a758966f3f9e53b2ef60adf4269d860b033ce625f04a1fc0bff8774e0bc" exitCode=0 Nov 21 12:00:01 crc kubenswrapper[4972]: I1121 12:00:01.835534 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395440-4rgs9" event={"ID":"38db668d-73e6-4233-a852-ec410b3b161f","Type":"ContainerDied","Data":"7b455a758966f3f9e53b2ef60adf4269d860b033ce625f04a1fc0bff8774e0bc"} Nov 21 12:00:01 crc kubenswrapper[4972]: I1121 12:00:01.835764 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395440-4rgs9" event={"ID":"38db668d-73e6-4233-a852-ec410b3b161f","Type":"ContainerStarted","Data":"21efb5925ec7b51d313807921dc1859c7b3e1753a15e70810eb4fadc20bff383"} Nov 21 12:00:03 crc kubenswrapper[4972]: I1121 12:00:03.222304 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395440-4rgs9" Nov 21 12:00:03 crc kubenswrapper[4972]: I1121 12:00:03.387587 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38db668d-73e6-4233-a852-ec410b3b161f-secret-volume\") pod \"38db668d-73e6-4233-a852-ec410b3b161f\" (UID: \"38db668d-73e6-4233-a852-ec410b3b161f\") " Nov 21 12:00:03 crc kubenswrapper[4972]: I1121 12:00:03.387777 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7r7hl\" (UniqueName: \"kubernetes.io/projected/38db668d-73e6-4233-a852-ec410b3b161f-kube-api-access-7r7hl\") pod \"38db668d-73e6-4233-a852-ec410b3b161f\" (UID: \"38db668d-73e6-4233-a852-ec410b3b161f\") " Nov 21 12:00:03 crc kubenswrapper[4972]: I1121 12:00:03.387899 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38db668d-73e6-4233-a852-ec410b3b161f-config-volume\") pod \"38db668d-73e6-4233-a852-ec410b3b161f\" (UID: \"38db668d-73e6-4233-a852-ec410b3b161f\") " Nov 21 12:00:03 crc kubenswrapper[4972]: I1121 12:00:03.388712 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38db668d-73e6-4233-a852-ec410b3b161f-config-volume" (OuterVolumeSpecName: "config-volume") pod "38db668d-73e6-4233-a852-ec410b3b161f" (UID: "38db668d-73e6-4233-a852-ec410b3b161f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 12:00:03 crc kubenswrapper[4972]: I1121 12:00:03.395183 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38db668d-73e6-4233-a852-ec410b3b161f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "38db668d-73e6-4233-a852-ec410b3b161f" (UID: "38db668d-73e6-4233-a852-ec410b3b161f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:00:03 crc kubenswrapper[4972]: I1121 12:00:03.395605 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38db668d-73e6-4233-a852-ec410b3b161f-kube-api-access-7r7hl" (OuterVolumeSpecName: "kube-api-access-7r7hl") pod "38db668d-73e6-4233-a852-ec410b3b161f" (UID: "38db668d-73e6-4233-a852-ec410b3b161f"). InnerVolumeSpecName "kube-api-access-7r7hl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:00:03 crc kubenswrapper[4972]: I1121 12:00:03.490013 4972 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38db668d-73e6-4233-a852-ec410b3b161f-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 12:00:03 crc kubenswrapper[4972]: I1121 12:00:03.490051 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7r7hl\" (UniqueName: \"kubernetes.io/projected/38db668d-73e6-4233-a852-ec410b3b161f-kube-api-access-7r7hl\") on node \"crc\" DevicePath \"\"" Nov 21 12:00:03 crc kubenswrapper[4972]: I1121 12:00:03.490062 4972 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38db668d-73e6-4233-a852-ec410b3b161f-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 12:00:03 crc kubenswrapper[4972]: I1121 12:00:03.857611 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395440-4rgs9" event={"ID":"38db668d-73e6-4233-a852-ec410b3b161f","Type":"ContainerDied","Data":"21efb5925ec7b51d313807921dc1859c7b3e1753a15e70810eb4fadc20bff383"} Nov 21 12:00:03 crc kubenswrapper[4972]: I1121 12:00:03.858111 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21efb5925ec7b51d313807921dc1859c7b3e1753a15e70810eb4fadc20bff383" Nov 21 12:00:03 crc kubenswrapper[4972]: I1121 12:00:03.857670 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395440-4rgs9" Nov 21 12:00:04 crc kubenswrapper[4972]: I1121 12:00:04.313073 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395395-ddbg7"] Nov 21 12:00:04 crc kubenswrapper[4972]: I1121 12:00:04.321875 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395395-ddbg7"] Nov 21 12:00:05 crc kubenswrapper[4972]: I1121 12:00:05.782240 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c17eb9c-5eb2-4c5b-8594-453d42bf1db9" path="/var/lib/kubelet/pods/8c17eb9c-5eb2-4c5b-8594-453d42bf1db9/volumes" Nov 21 12:00:39 crc kubenswrapper[4972]: I1121 12:00:39.329415 4972 scope.go:117] "RemoveContainer" containerID="09283e95076cbe8d98ebe99c56c6e8e91ad86db72dd3f1d6c9251f3a81eb9391" Nov 21 12:00:42 crc kubenswrapper[4972]: I1121 12:00:42.000113 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lbqjx"] Nov 21 12:00:42 crc kubenswrapper[4972]: E1121 12:00:42.001061 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38db668d-73e6-4233-a852-ec410b3b161f" containerName="collect-profiles" Nov 21 12:00:42 crc kubenswrapper[4972]: I1121 12:00:42.001353 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="38db668d-73e6-4233-a852-ec410b3b161f" containerName="collect-profiles" Nov 21 12:00:42 crc kubenswrapper[4972]: I1121 12:00:42.001611 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="38db668d-73e6-4233-a852-ec410b3b161f" containerName="collect-profiles" Nov 21 12:00:42 crc kubenswrapper[4972]: I1121 12:00:42.003327 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbqjx" Nov 21 12:00:42 crc kubenswrapper[4972]: I1121 12:00:42.034885 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbqjx"] Nov 21 12:00:42 crc kubenswrapper[4972]: I1121 12:00:42.157189 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwddl\" (UniqueName: \"kubernetes.io/projected/bbf3add8-8123-4735-a350-159e4906c292-kube-api-access-gwddl\") pod \"redhat-marketplace-lbqjx\" (UID: \"bbf3add8-8123-4735-a350-159e4906c292\") " pod="openshift-marketplace/redhat-marketplace-lbqjx" Nov 21 12:00:42 crc kubenswrapper[4972]: I1121 12:00:42.157250 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbf3add8-8123-4735-a350-159e4906c292-utilities\") pod \"redhat-marketplace-lbqjx\" (UID: \"bbf3add8-8123-4735-a350-159e4906c292\") " pod="openshift-marketplace/redhat-marketplace-lbqjx" Nov 21 12:00:42 crc kubenswrapper[4972]: I1121 12:00:42.157413 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbf3add8-8123-4735-a350-159e4906c292-catalog-content\") pod \"redhat-marketplace-lbqjx\" (UID: \"bbf3add8-8123-4735-a350-159e4906c292\") " pod="openshift-marketplace/redhat-marketplace-lbqjx" Nov 21 12:00:42 crc kubenswrapper[4972]: I1121 12:00:42.259332 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbf3add8-8123-4735-a350-159e4906c292-catalog-content\") pod \"redhat-marketplace-lbqjx\" (UID: \"bbf3add8-8123-4735-a350-159e4906c292\") " pod="openshift-marketplace/redhat-marketplace-lbqjx" Nov 21 12:00:42 crc kubenswrapper[4972]: I1121 12:00:42.259643 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwddl\" (UniqueName: \"kubernetes.io/projected/bbf3add8-8123-4735-a350-159e4906c292-kube-api-access-gwddl\") pod \"redhat-marketplace-lbqjx\" (UID: \"bbf3add8-8123-4735-a350-159e4906c292\") " pod="openshift-marketplace/redhat-marketplace-lbqjx" Nov 21 12:00:42 crc kubenswrapper[4972]: I1121 12:00:42.259686 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbf3add8-8123-4735-a350-159e4906c292-utilities\") pod \"redhat-marketplace-lbqjx\" (UID: \"bbf3add8-8123-4735-a350-159e4906c292\") " pod="openshift-marketplace/redhat-marketplace-lbqjx" Nov 21 12:00:42 crc kubenswrapper[4972]: I1121 12:00:42.261104 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbf3add8-8123-4735-a350-159e4906c292-catalog-content\") pod \"redhat-marketplace-lbqjx\" (UID: \"bbf3add8-8123-4735-a350-159e4906c292\") " pod="openshift-marketplace/redhat-marketplace-lbqjx" Nov 21 12:00:42 crc kubenswrapper[4972]: I1121 12:00:42.261238 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbf3add8-8123-4735-a350-159e4906c292-utilities\") pod \"redhat-marketplace-lbqjx\" (UID: \"bbf3add8-8123-4735-a350-159e4906c292\") " pod="openshift-marketplace/redhat-marketplace-lbqjx" Nov 21 12:00:42 crc kubenswrapper[4972]: I1121 12:00:42.283866 4972 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gwddl\" (UniqueName: \"kubernetes.io/projected/bbf3add8-8123-4735-a350-159e4906c292-kube-api-access-gwddl\") pod \"redhat-marketplace-lbqjx\" (UID: \"bbf3add8-8123-4735-a350-159e4906c292\") " pod="openshift-marketplace/redhat-marketplace-lbqjx" Nov 21 12:00:42 crc kubenswrapper[4972]: I1121 12:00:42.340995 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbqjx" Nov 21 12:00:42 crc kubenswrapper[4972]: I1121 12:00:42.814769 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbqjx"] Nov 21 12:00:43 crc kubenswrapper[4972]: I1121 12:00:43.691730 4972 generic.go:334] "Generic (PLEG): container finished" podID="bbf3add8-8123-4735-a350-159e4906c292" containerID="5ea0d2241f27ad38eb0c93fc233e1dcbf787b224f1248f1286485517776b01a0" exitCode=0 Nov 21 12:00:43 crc kubenswrapper[4972]: I1121 12:00:43.691893 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbqjx" event={"ID":"bbf3add8-8123-4735-a350-159e4906c292","Type":"ContainerDied","Data":"5ea0d2241f27ad38eb0c93fc233e1dcbf787b224f1248f1286485517776b01a0"} Nov 21 12:00:43 crc kubenswrapper[4972]: I1121 12:00:43.692139 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbqjx" event={"ID":"bbf3add8-8123-4735-a350-159e4906c292","Type":"ContainerStarted","Data":"67b617c6e3fba8ec4e1284dfcee30bde344d439b3bcefa8c2468e5150eff93e0"} Nov 21 12:00:44 crc kubenswrapper[4972]: I1121 12:00:44.714729 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbqjx" event={"ID":"bbf3add8-8123-4735-a350-159e4906c292","Type":"ContainerStarted","Data":"46fe78cb7eecf31ae5bf4ea4ca592c16ee6a34e7a5c78a7a57c3657daf82b60c"} Nov 21 12:00:45 crc kubenswrapper[4972]: I1121 12:00:45.727403 4972 generic.go:334] "Generic (PLEG): container finished" podID="bbf3add8-8123-4735-a350-159e4906c292" containerID="46fe78cb7eecf31ae5bf4ea4ca592c16ee6a34e7a5c78a7a57c3657daf82b60c" exitCode=0 Nov 21 12:00:45 crc kubenswrapper[4972]: I1121 12:00:45.727452 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbqjx" event={"ID":"bbf3add8-8123-4735-a350-159e4906c292","Type":"ContainerDied","Data":"46fe78cb7eecf31ae5bf4ea4ca592c16ee6a34e7a5c78a7a57c3657daf82b60c"} Nov 21 12:00:46 crc kubenswrapper[4972]: I1121 12:00:46.741864 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbqjx" event={"ID":"bbf3add8-8123-4735-a350-159e4906c292","Type":"ContainerStarted","Data":"cc50a17449c2ca6dd5f307d684f33574d96a6f276d9f26abc333b95aad9105d8"} Nov 21 12:00:46 crc kubenswrapper[4972]: I1121 12:00:46.761199 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lbqjx" podStartSLOduration=3.316343293 podStartE2EDuration="5.761171832s" podCreationTimestamp="2025-11-21 12:00:41 +0000 UTC" firstStartedPulling="2025-11-21 12:00:43.697619913 +0000 UTC m=+8388.806762421" lastFinishedPulling="2025-11-21 12:00:46.142448432 +0000 UTC m=+8391.251590960" observedRunningTime="2025-11-21 12:00:46.759751845 +0000 UTC m=+8391.868894373" watchObservedRunningTime="2025-11-21 12:00:46.761171832 +0000 UTC m=+8391.870314340" Nov 21 12:00:52 crc kubenswrapper[4972]: I1121 12:00:52.341685 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-lbqjx" Nov 21 12:00:52 crc kubenswrapper[4972]: I1121 12:00:52.342258 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lbqjx" Nov 21 12:00:52 crc kubenswrapper[4972]: I1121 12:00:52.387799 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lbqjx" Nov 21 12:00:52 crc kubenswrapper[4972]: I1121 12:00:52.877169 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lbqjx" Nov 21 12:00:52 crc kubenswrapper[4972]: I1121 12:00:52.922946 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbqjx"] Nov 21 12:00:54 crc kubenswrapper[4972]: I1121 12:00:54.845510 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lbqjx" podUID="bbf3add8-8123-4735-a350-159e4906c292" containerName="registry-server" containerID="cri-o://cc50a17449c2ca6dd5f307d684f33574d96a6f276d9f26abc333b95aad9105d8" gracePeriod=2 Nov 21 12:00:55 crc kubenswrapper[4972]: I1121 12:00:55.870066 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbqjx" event={"ID":"bbf3add8-8123-4735-a350-159e4906c292","Type":"ContainerDied","Data":"cc50a17449c2ca6dd5f307d684f33574d96a6f276d9f26abc333b95aad9105d8"} Nov 21 12:00:55 crc kubenswrapper[4972]: I1121 12:00:55.870149 4972 generic.go:334] "Generic (PLEG): container finished" podID="bbf3add8-8123-4735-a350-159e4906c292" containerID="cc50a17449c2ca6dd5f307d684f33574d96a6f276d9f26abc333b95aad9105d8" exitCode=0 Nov 21 12:00:56 crc kubenswrapper[4972]: I1121 12:00:56.057264 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbqjx" Nov 21 12:00:56 crc kubenswrapper[4972]: I1121 12:00:56.231070 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbf3add8-8123-4735-a350-159e4906c292-catalog-content\") pod \"bbf3add8-8123-4735-a350-159e4906c292\" (UID: \"bbf3add8-8123-4735-a350-159e4906c292\") " Nov 21 12:00:56 crc kubenswrapper[4972]: I1121 12:00:56.231169 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbf3add8-8123-4735-a350-159e4906c292-utilities\") pod \"bbf3add8-8123-4735-a350-159e4906c292\" (UID: \"bbf3add8-8123-4735-a350-159e4906c292\") " Nov 21 12:00:56 crc kubenswrapper[4972]: I1121 12:00:56.231554 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwddl\" (UniqueName: \"kubernetes.io/projected/bbf3add8-8123-4735-a350-159e4906c292-kube-api-access-gwddl\") pod \"bbf3add8-8123-4735-a350-159e4906c292\" (UID: \"bbf3add8-8123-4735-a350-159e4906c292\") " Nov 21 12:00:56 crc kubenswrapper[4972]: I1121 12:00:56.234161 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbf3add8-8123-4735-a350-159e4906c292-utilities" (OuterVolumeSpecName: "utilities") pod "bbf3add8-8123-4735-a350-159e4906c292" (UID: "bbf3add8-8123-4735-a350-159e4906c292"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:00:56 crc kubenswrapper[4972]: I1121 12:00:56.242170 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbf3add8-8123-4735-a350-159e4906c292-kube-api-access-gwddl" (OuterVolumeSpecName: "kube-api-access-gwddl") pod "bbf3add8-8123-4735-a350-159e4906c292" (UID: "bbf3add8-8123-4735-a350-159e4906c292"). InnerVolumeSpecName "kube-api-access-gwddl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:00:56 crc kubenswrapper[4972]: I1121 12:00:56.249352 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbf3add8-8123-4735-a350-159e4906c292-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbf3add8-8123-4735-a350-159e4906c292" (UID: "bbf3add8-8123-4735-a350-159e4906c292"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:00:56 crc kubenswrapper[4972]: I1121 12:00:56.333911 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbf3add8-8123-4735-a350-159e4906c292-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 12:00:56 crc kubenswrapper[4972]: I1121 12:00:56.333947 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbf3add8-8123-4735-a350-159e4906c292-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 12:00:56 crc kubenswrapper[4972]: I1121 12:00:56.333958 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwddl\" (UniqueName: \"kubernetes.io/projected/bbf3add8-8123-4735-a350-159e4906c292-kube-api-access-gwddl\") on node \"crc\" DevicePath \"\"" Nov 21 12:00:56 crc kubenswrapper[4972]: I1121 12:00:56.886901 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbqjx" event={"ID":"bbf3add8-8123-4735-a350-159e4906c292","Type":"ContainerDied","Data":"67b617c6e3fba8ec4e1284dfcee30bde344d439b3bcefa8c2468e5150eff93e0"} Nov 21 12:00:56 crc kubenswrapper[4972]: I1121 12:00:56.887269 4972 scope.go:117] "RemoveContainer" containerID="cc50a17449c2ca6dd5f307d684f33574d96a6f276d9f26abc333b95aad9105d8" Nov 21 12:00:56 crc kubenswrapper[4972]: I1121 12:00:56.887006 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbqjx" Nov 21 12:00:56 crc kubenswrapper[4972]: I1121 12:00:56.921787 4972 scope.go:117] "RemoveContainer" containerID="46fe78cb7eecf31ae5bf4ea4ca592c16ee6a34e7a5c78a7a57c3657daf82b60c" Nov 21 12:00:56 crc kubenswrapper[4972]: I1121 12:00:56.939024 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbqjx"] Nov 21 12:00:56 crc kubenswrapper[4972]: I1121 12:00:56.951374 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbqjx"] Nov 21 12:00:56 crc kubenswrapper[4972]: I1121 12:00:56.971078 4972 scope.go:117] "RemoveContainer" containerID="5ea0d2241f27ad38eb0c93fc233e1dcbf787b224f1248f1286485517776b01a0" Nov 21 12:00:57 crc kubenswrapper[4972]: I1121 12:00:57.773032 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbf3add8-8123-4735-a350-159e4906c292" path="/var/lib/kubelet/pods/bbf3add8-8123-4735-a350-159e4906c292/volumes" Nov 21 12:01:00 crc kubenswrapper[4972]: I1121 12:01:00.241516 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29395441-9lbdd"] Nov 21 12:01:00 crc kubenswrapper[4972]: E1121 12:01:00.242719 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbf3add8-8123-4735-a350-159e4906c292" containerName="extract-content" Nov 21 12:01:00 crc kubenswrapper[4972]: I1121 12:01:00.242738 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbf3add8-8123-4735-a350-159e4906c292" containerName="extract-content" Nov 21 12:01:00 crc kubenswrapper[4972]: E1121 12:01:00.242765 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbf3add8-8123-4735-a350-159e4906c292" containerName="registry-server" Nov 21 12:01:00 crc kubenswrapper[4972]: I1121 12:01:00.242773 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbf3add8-8123-4735-a350-159e4906c292" containerName="registry-server" Nov 21 12:01:00 crc kubenswrapper[4972]: E1121 12:01:00.242789 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbf3add8-8123-4735-a350-159e4906c292" containerName="extract-utilities" Nov 21 12:01:00 crc kubenswrapper[4972]: I1121 12:01:00.242798 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbf3add8-8123-4735-a350-159e4906c292" containerName="extract-utilities" Nov 21 12:01:00 crc kubenswrapper[4972]: I1121 12:01:00.243125 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbf3add8-8123-4735-a350-159e4906c292" containerName="registry-server" Nov 21 12:01:00 crc kubenswrapper[4972]: I1121 12:01:00.243959 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29395441-9lbdd" Nov 21 12:01:00 crc kubenswrapper[4972]: I1121 12:01:00.256778 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29395441-9lbdd"] Nov 21 12:01:00 crc kubenswrapper[4972]: I1121 12:01:00.325285 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkg2x\" (UniqueName: \"kubernetes.io/projected/95c7d372-ef45-4c62-9d8c-81438229c9f4-kube-api-access-lkg2x\") pod \"keystone-cron-29395441-9lbdd\" (UID: \"95c7d372-ef45-4c62-9d8c-81438229c9f4\") " pod="openstack/keystone-cron-29395441-9lbdd" Nov 21 12:01:00 crc kubenswrapper[4972]: I1121 12:01:00.325352 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c7d372-ef45-4c62-9d8c-81438229c9f4-combined-ca-bundle\") pod \"keystone-cron-29395441-9lbdd\" (UID: \"95c7d372-ef45-4c62-9d8c-81438229c9f4\") " pod="openstack/keystone-cron-29395441-9lbdd" Nov 21 12:01:00 crc kubenswrapper[4972]: I1121 12:01:00.325488 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c7d372-ef45-4c62-9d8c-81438229c9f4-config-data\") pod \"keystone-cron-29395441-9lbdd\" (UID: \"95c7d372-ef45-4c62-9d8c-81438229c9f4\") " pod="openstack/keystone-cron-29395441-9lbdd" Nov 21 12:01:00 crc kubenswrapper[4972]: I1121 12:01:00.326003 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95c7d372-ef45-4c62-9d8c-81438229c9f4-fernet-keys\") pod \"keystone-cron-29395441-9lbdd\" (UID: \"95c7d372-ef45-4c62-9d8c-81438229c9f4\") " pod="openstack/keystone-cron-29395441-9lbdd" Nov 21 12:01:00 crc kubenswrapper[4972]: I1121 12:01:00.428094 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95c7d372-ef45-4c62-9d8c-81438229c9f4-fernet-keys\") pod \"keystone-cron-29395441-9lbdd\" (UID: \"95c7d372-ef45-4c62-9d8c-81438229c9f4\") " pod="openstack/keystone-cron-29395441-9lbdd" Nov 21 12:01:00 crc kubenswrapper[4972]: I1121 12:01:00.428201 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkg2x\" (UniqueName: \"kubernetes.io/projected/95c7d372-ef45-4c62-9d8c-81438229c9f4-kube-api-access-lkg2x\") pod \"keystone-cron-29395441-9lbdd\" (UID: \"95c7d372-ef45-4c62-9d8c-81438229c9f4\") " pod="openstack/keystone-cron-29395441-9lbdd" Nov 21 12:01:00 crc kubenswrapper[4972]: I1121 12:01:00.428227 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c7d372-ef45-4c62-9d8c-81438229c9f4-combined-ca-bundle\") pod \"keystone-cron-29395441-9lbdd\" (UID: \"95c7d372-ef45-4c62-9d8c-81438229c9f4\") " pod="openstack/keystone-cron-29395441-9lbdd" Nov 21 12:01:00 crc kubenswrapper[4972]: I1121 12:01:00.428257 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c7d372-ef45-4c62-9d8c-81438229c9f4-config-data\") pod \"keystone-cron-29395441-9lbdd\" (UID: \"95c7d372-ef45-4c62-9d8c-81438229c9f4\") " pod="openstack/keystone-cron-29395441-9lbdd" Nov 21 12:01:00 crc kubenswrapper[4972]: I1121 12:01:00.435640 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c7d372-ef45-4c62-9d8c-81438229c9f4-combined-ca-bundle\") pod \"keystone-cron-29395441-9lbdd\" (UID: \"95c7d372-ef45-4c62-9d8c-81438229c9f4\") " pod="openstack/keystone-cron-29395441-9lbdd" Nov 21 12:01:00 crc kubenswrapper[4972]: I1121 12:01:00.440093 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95c7d372-ef45-4c62-9d8c-81438229c9f4-fernet-keys\") pod \"keystone-cron-29395441-9lbdd\" (UID: \"95c7d372-ef45-4c62-9d8c-81438229c9f4\") " pod="openstack/keystone-cron-29395441-9lbdd" Nov 21 12:01:00 crc kubenswrapper[4972]: I1121 12:01:00.445850 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkg2x\" (UniqueName: \"kubernetes.io/projected/95c7d372-ef45-4c62-9d8c-81438229c9f4-kube-api-access-lkg2x\") pod \"keystone-cron-29395441-9lbdd\" (UID: \"95c7d372-ef45-4c62-9d8c-81438229c9f4\") " pod="openstack/keystone-cron-29395441-9lbdd" Nov 21 12:01:00 crc kubenswrapper[4972]: I1121 12:01:00.448890 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c7d372-ef45-4c62-9d8c-81438229c9f4-config-data\") pod \"keystone-cron-29395441-9lbdd\" (UID: \"95c7d372-ef45-4c62-9d8c-81438229c9f4\") " pod="openstack/keystone-cron-29395441-9lbdd" Nov 21 12:01:00 crc kubenswrapper[4972]: I1121 12:01:00.578468 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29395441-9lbdd" Nov 21 12:01:01 crc kubenswrapper[4972]: I1121 12:01:01.083395 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29395441-9lbdd"] Nov 21 12:01:01 crc kubenswrapper[4972]: I1121 12:01:01.942752 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395441-9lbdd" event={"ID":"95c7d372-ef45-4c62-9d8c-81438229c9f4","Type":"ContainerStarted","Data":"910a382c92e9fb5fec0f0b0fbaaed25a83d70f50e1229b8c20f7ae61f3ed455a"} Nov 21 12:01:01 crc kubenswrapper[4972]: I1121 12:01:01.943532 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395441-9lbdd" event={"ID":"95c7d372-ef45-4c62-9d8c-81438229c9f4","Type":"ContainerStarted","Data":"bf8005ead3787b0a0b1c441aeb8a11b67b2a0b257e03c34aee617835f3c33467"} Nov 21 12:01:01 crc kubenswrapper[4972]: I1121 12:01:01.964715 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29395441-9lbdd" podStartSLOduration=1.964692528 podStartE2EDuration="1.964692528s" podCreationTimestamp="2025-11-21 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 12:01:01.958485145 +0000 UTC m=+8407.067627663" watchObservedRunningTime="2025-11-21 12:01:01.964692528 +0000 UTC m=+8407.073835036" Nov 21 12:01:05 crc kubenswrapper[4972]: E1121 12:01:05.260820 4972 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95c7d372_ef45_4c62_9d8c_81438229c9f4.slice/crio-910a382c92e9fb5fec0f0b0fbaaed25a83d70f50e1229b8c20f7ae61f3ed455a.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95c7d372_ef45_4c62_9d8c_81438229c9f4.slice/crio-conmon-910a382c92e9fb5fec0f0b0fbaaed25a83d70f50e1229b8c20f7ae61f3ed455a.scope\": RecentStats: unable to find data in memory cache]" Nov 21 12:01:05 crc kubenswrapper[4972]: I1121 12:01:05.991970 4972 generic.go:334] "Generic (PLEG): container finished" podID="95c7d372-ef45-4c62-9d8c-81438229c9f4" containerID="910a382c92e9fb5fec0f0b0fbaaed25a83d70f50e1229b8c20f7ae61f3ed455a" exitCode=0 Nov 21 12:01:05 crc kubenswrapper[4972]: I1121 12:01:05.992112 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395441-9lbdd" event={"ID":"95c7d372-ef45-4c62-9d8c-81438229c9f4","Type":"ContainerDied","Data":"910a382c92e9fb5fec0f0b0fbaaed25a83d70f50e1229b8c20f7ae61f3ed455a"} Nov 21 12:01:07 crc kubenswrapper[4972]: I1121 12:01:07.428607 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29395441-9lbdd" Nov 21 12:01:07 crc kubenswrapper[4972]: I1121 12:01:07.499254 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkg2x\" (UniqueName: \"kubernetes.io/projected/95c7d372-ef45-4c62-9d8c-81438229c9f4-kube-api-access-lkg2x\") pod \"95c7d372-ef45-4c62-9d8c-81438229c9f4\" (UID: \"95c7d372-ef45-4c62-9d8c-81438229c9f4\") " Nov 21 12:01:07 crc kubenswrapper[4972]: I1121 12:01:07.499407 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95c7d372-ef45-4c62-9d8c-81438229c9f4-fernet-keys\") pod \"95c7d372-ef45-4c62-9d8c-81438229c9f4\" (UID: \"95c7d372-ef45-4c62-9d8c-81438229c9f4\") " Nov 21 12:01:07 crc kubenswrapper[4972]: I1121 12:01:07.499717 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c7d372-ef45-4c62-9d8c-81438229c9f4-combined-ca-bundle\") pod \"95c7d372-ef45-4c62-9d8c-81438229c9f4\" (UID: \"95c7d372-ef45-4c62-9d8c-81438229c9f4\") " Nov 21 12:01:07 crc kubenswrapper[4972]: I1121 12:01:07.499774 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c7d372-ef45-4c62-9d8c-81438229c9f4-config-data\") pod \"95c7d372-ef45-4c62-9d8c-81438229c9f4\" (UID: \"95c7d372-ef45-4c62-9d8c-81438229c9f4\") " Nov 21 12:01:07 crc kubenswrapper[4972]: I1121 12:01:07.510387 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95c7d372-ef45-4c62-9d8c-81438229c9f4-kube-api-access-lkg2x" (OuterVolumeSpecName: "kube-api-access-lkg2x") pod "95c7d372-ef45-4c62-9d8c-81438229c9f4" (UID: "95c7d372-ef45-4c62-9d8c-81438229c9f4"). InnerVolumeSpecName "kube-api-access-lkg2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:01:07 crc kubenswrapper[4972]: I1121 12:01:07.511103 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c7d372-ef45-4c62-9d8c-81438229c9f4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "95c7d372-ef45-4c62-9d8c-81438229c9f4" (UID: "95c7d372-ef45-4c62-9d8c-81438229c9f4"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:01:07 crc kubenswrapper[4972]: I1121 12:01:07.538660 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c7d372-ef45-4c62-9d8c-81438229c9f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95c7d372-ef45-4c62-9d8c-81438229c9f4" (UID: "95c7d372-ef45-4c62-9d8c-81438229c9f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:01:07 crc kubenswrapper[4972]: I1121 12:01:07.559938 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c7d372-ef45-4c62-9d8c-81438229c9f4-config-data" (OuterVolumeSpecName: "config-data") pod "95c7d372-ef45-4c62-9d8c-81438229c9f4" (UID: "95c7d372-ef45-4c62-9d8c-81438229c9f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:01:07 crc kubenswrapper[4972]: I1121 12:01:07.602674 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkg2x\" (UniqueName: \"kubernetes.io/projected/95c7d372-ef45-4c62-9d8c-81438229c9f4-kube-api-access-lkg2x\") on node \"crc\" DevicePath \"\"" Nov 21 12:01:07 crc kubenswrapper[4972]: I1121 12:01:07.602715 4972 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95c7d372-ef45-4c62-9d8c-81438229c9f4-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 21 12:01:07 crc kubenswrapper[4972]: I1121 12:01:07.602726 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c7d372-ef45-4c62-9d8c-81438229c9f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 12:01:07 crc kubenswrapper[4972]: I1121 12:01:07.602735 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c7d372-ef45-4c62-9d8c-81438229c9f4-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 12:01:08 crc kubenswrapper[4972]: I1121 12:01:08.011460 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395441-9lbdd" event={"ID":"95c7d372-ef45-4c62-9d8c-81438229c9f4","Type":"ContainerDied","Data":"bf8005ead3787b0a0b1c441aeb8a11b67b2a0b257e03c34aee617835f3c33467"} Nov 21 12:01:08 crc kubenswrapper[4972]: I1121 12:01:08.011510 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf8005ead3787b0a0b1c441aeb8a11b67b2a0b257e03c34aee617835f3c33467" Nov 21 12:01:08 crc kubenswrapper[4972]: I1121 12:01:08.011550 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29395441-9lbdd" Nov 21 12:01:26 crc kubenswrapper[4972]: I1121 12:01:26.178956 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 12:01:26 crc kubenswrapper[4972]: I1121 12:01:26.179764 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 12:01:56 crc kubenswrapper[4972]: I1121 12:01:56.178555 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 12:01:56 crc kubenswrapper[4972]: I1121 12:01:56.179092 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 12:02:26 crc kubenswrapper[4972]: I1121 12:02:26.178969 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 12:02:26 crc kubenswrapper[4972]: I1121 12:02:26.179692 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 12:02:26 crc kubenswrapper[4972]: I1121 12:02:26.179756 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 12:02:26 crc kubenswrapper[4972]: I1121 12:02:26.180992 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 12:02:26 crc kubenswrapper[4972]: I1121 12:02:26.181101 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" gracePeriod=600 Nov 21 12:02:26 crc kubenswrapper[4972]: E1121 12:02:26.349123 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:02:26 crc kubenswrapper[4972]: I1121 12:02:26.960171 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" exitCode=0 Nov 21 12:02:26 crc kubenswrapper[4972]: I1121 12:02:26.960433 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b"} Nov 21 12:02:26 crc kubenswrapper[4972]: I1121 12:02:26.960784 4972 scope.go:117] "RemoveContainer" containerID="0b4dc4317cd5fa834bd0d58d4284e0b65ba94dc083a5e7c02437c6bea6070ec7" Nov 21 12:02:26 crc kubenswrapper[4972]: I1121 12:02:26.962116 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:02:26 crc kubenswrapper[4972]: E1121 12:02:26.963108 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:02:39 crc kubenswrapper[4972]: I1121 12:02:39.760927 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:02:39 crc kubenswrapper[4972]: E1121 12:02:39.762566 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:02:51 crc kubenswrapper[4972]: I1121 12:02:51.760212 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:02:51 crc kubenswrapper[4972]: E1121 12:02:51.760990 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:03:02 crc kubenswrapper[4972]: I1121 12:03:02.759314 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:03:02 crc kubenswrapper[4972]: E1121 12:03:02.760099 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:03:14 crc kubenswrapper[4972]: I1121 12:03:14.760605 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:03:14 crc kubenswrapper[4972]: E1121 12:03:14.761561 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:03:19 crc kubenswrapper[4972]: I1121 12:03:19.077557 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2rrm7"] Nov 21 12:03:19 crc kubenswrapper[4972]: E1121 12:03:19.079532 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c7d372-ef45-4c62-9d8c-81438229c9f4" containerName="keystone-cron" Nov 21 12:03:19 crc kubenswrapper[4972]: I1121 12:03:19.079608 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c7d372-ef45-4c62-9d8c-81438229c9f4" containerName="keystone-cron" Nov 21 12:03:19 crc kubenswrapper[4972]: I1121 12:03:19.079922 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c7d372-ef45-4c62-9d8c-81438229c9f4" containerName="keystone-cron" Nov 21 12:03:19 crc kubenswrapper[4972]: I1121 12:03:19.081503 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2rrm7" Nov 21 12:03:19 crc kubenswrapper[4972]: I1121 12:03:19.122216 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2rrm7"] Nov 21 12:03:19 crc kubenswrapper[4972]: I1121 12:03:19.276398 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0291dc6c-c736-4399-82cf-c07723a5f8b3-utilities\") pod \"certified-operators-2rrm7\" (UID: \"0291dc6c-c736-4399-82cf-c07723a5f8b3\") " pod="openshift-marketplace/certified-operators-2rrm7" Nov 21 12:03:19 crc kubenswrapper[4972]: I1121 12:03:19.276572 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftmn6\" (UniqueName: \"kubernetes.io/projected/0291dc6c-c736-4399-82cf-c07723a5f8b3-kube-api-access-ftmn6\") pod \"certified-operators-2rrm7\" (UID: \"0291dc6c-c736-4399-82cf-c07723a5f8b3\") " pod="openshift-marketplace/certified-operators-2rrm7" Nov 21 12:03:19 crc kubenswrapper[4972]: I1121 12:03:19.276626 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0291dc6c-c736-4399-82cf-c07723a5f8b3-catalog-content\") pod \"certified-operators-2rrm7\" (UID: \"0291dc6c-c736-4399-82cf-c07723a5f8b3\") " pod="openshift-marketplace/certified-operators-2rrm7" Nov 21 12:03:19 crc kubenswrapper[4972]: I1121 12:03:19.378536 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0291dc6c-c736-4399-82cf-c07723a5f8b3-utilities\") pod \"certified-operators-2rrm7\" 
(UID: \"0291dc6c-c736-4399-82cf-c07723a5f8b3\") " pod="openshift-marketplace/certified-operators-2rrm7" Nov 21 12:03:19 crc kubenswrapper[4972]: I1121 12:03:19.378678 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftmn6\" (UniqueName: \"kubernetes.io/projected/0291dc6c-c736-4399-82cf-c07723a5f8b3-kube-api-access-ftmn6\") pod \"certified-operators-2rrm7\" (UID: \"0291dc6c-c736-4399-82cf-c07723a5f8b3\") " pod="openshift-marketplace/certified-operators-2rrm7" Nov 21 12:03:19 crc kubenswrapper[4972]: I1121 12:03:19.378716 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0291dc6c-c736-4399-82cf-c07723a5f8b3-catalog-content\") pod \"certified-operators-2rrm7\" (UID: \"0291dc6c-c736-4399-82cf-c07723a5f8b3\") " pod="openshift-marketplace/certified-operators-2rrm7" Nov 21 12:03:19 crc kubenswrapper[4972]: I1121 12:03:19.379159 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0291dc6c-c736-4399-82cf-c07723a5f8b3-utilities\") pod \"certified-operators-2rrm7\" (UID: \"0291dc6c-c736-4399-82cf-c07723a5f8b3\") " pod="openshift-marketplace/certified-operators-2rrm7" Nov 21 12:03:19 crc kubenswrapper[4972]: I1121 12:03:19.379306 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0291dc6c-c736-4399-82cf-c07723a5f8b3-catalog-content\") pod \"certified-operators-2rrm7\" (UID: \"0291dc6c-c736-4399-82cf-c07723a5f8b3\") " pod="openshift-marketplace/certified-operators-2rrm7" Nov 21 12:03:19 crc kubenswrapper[4972]: I1121 12:03:19.400541 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftmn6\" (UniqueName: \"kubernetes.io/projected/0291dc6c-c736-4399-82cf-c07723a5f8b3-kube-api-access-ftmn6\") pod \"certified-operators-2rrm7\" (UID: \"0291dc6c-c736-4399-82cf-c07723a5f8b3\") " pod="openshift-marketplace/certified-operators-2rrm7" Nov 21 12:03:19 crc kubenswrapper[4972]: I1121 12:03:19.411963 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2rrm7" Nov 21 12:03:20 crc kubenswrapper[4972]: I1121 12:03:20.051215 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2rrm7"] Nov 21 12:03:20 crc kubenswrapper[4972]: I1121 12:03:20.552445 4972 generic.go:334] "Generic (PLEG): container finished" podID="0291dc6c-c736-4399-82cf-c07723a5f8b3" containerID="3491716743a3c4c61a2d2cc85b2078836e698bbe54075edb74bc66bb794302b8" exitCode=0 Nov 21 12:03:20 crc kubenswrapper[4972]: I1121 12:03:20.552510 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2rrm7" event={"ID":"0291dc6c-c736-4399-82cf-c07723a5f8b3","Type":"ContainerDied","Data":"3491716743a3c4c61a2d2cc85b2078836e698bbe54075edb74bc66bb794302b8"} Nov 21 12:03:20 crc kubenswrapper[4972]: I1121 12:03:20.552709 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2rrm7" event={"ID":"0291dc6c-c736-4399-82cf-c07723a5f8b3","Type":"ContainerStarted","Data":"0b48e34acffc408e984d359c3254f882700f0ea74ae45e82a98c757baa0b9b57"} Nov 21 12:03:20 crc kubenswrapper[4972]: I1121 12:03:20.555766 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 12:03:21 crc kubenswrapper[4972]: I1121 12:03:21.569491 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2rrm7" event={"ID":"0291dc6c-c736-4399-82cf-c07723a5f8b3","Type":"ContainerStarted","Data":"646e7eada26d74966d72d71c90f12c188ace007a7d5e4e9fb44175cab17617a8"} Nov 21 12:03:23 crc kubenswrapper[4972]: I1121 12:03:23.598636 4972 generic.go:334] "Generic (PLEG): container finished" podID="0291dc6c-c736-4399-82cf-c07723a5f8b3" containerID="646e7eada26d74966d72d71c90f12c188ace007a7d5e4e9fb44175cab17617a8" exitCode=0 Nov 21 12:03:23 crc kubenswrapper[4972]: I1121 12:03:23.598725 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2rrm7" event={"ID":"0291dc6c-c736-4399-82cf-c07723a5f8b3","Type":"ContainerDied","Data":"646e7eada26d74966d72d71c90f12c188ace007a7d5e4e9fb44175cab17617a8"} Nov 21 12:03:24 crc kubenswrapper[4972]: I1121 12:03:24.613809 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2rrm7" event={"ID":"0291dc6c-c736-4399-82cf-c07723a5f8b3","Type":"ContainerStarted","Data":"703416054b9e8c69b6c09ab233ee526a5c7c28a093f0989ebd6a1b1ca2e4bc65"} Nov 21 12:03:24 crc kubenswrapper[4972]: I1121 12:03:24.647130 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2rrm7" podStartSLOduration=2.155471456 podStartE2EDuration="5.647112441s" podCreationTimestamp="2025-11-21 12:03:19 +0000 UTC" firstStartedPulling="2025-11-21 12:03:20.555534719 +0000 UTC m=+8545.664677217" lastFinishedPulling="2025-11-21 12:03:24.047175704 +0000 UTC m=+8549.156318202" observedRunningTime="2025-11-21 12:03:24.638448453 +0000 UTC m=+8549.747590961" watchObservedRunningTime="2025-11-21 12:03:24.647112441 +0000 UTC m=+8549.756254949" Nov 21 12:03:25 crc kubenswrapper[4972]: I1121 12:03:25.767115 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:03:25 crc kubenswrapper[4972]: E1121 12:03:25.767658 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:03:29 crc kubenswrapper[4972]: I1121 12:03:29.413103 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2rrm7" Nov 21 12:03:29 crc kubenswrapper[4972]: I1121 12:03:29.413843 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2rrm7" Nov 21 12:03:29 crc kubenswrapper[4972]: I1121 12:03:29.483770 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2rrm7" Nov 21 12:03:29 crc kubenswrapper[4972]: I1121 12:03:29.742560 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2rrm7" Nov 21 12:03:29 crc kubenswrapper[4972]: I1121 12:03:29.798741 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2rrm7"] Nov 21 12:03:31 crc kubenswrapper[4972]: I1121 12:03:31.693703 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2rrm7" podUID="0291dc6c-c736-4399-82cf-c07723a5f8b3" containerName="registry-server" containerID="cri-o://703416054b9e8c69b6c09ab233ee526a5c7c28a093f0989ebd6a1b1ca2e4bc65" gracePeriod=2 Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.218461 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2rrm7" Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.319927 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0291dc6c-c736-4399-82cf-c07723a5f8b3-utilities\") pod \"0291dc6c-c736-4399-82cf-c07723a5f8b3\" (UID: \"0291dc6c-c736-4399-82cf-c07723a5f8b3\") " Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.320380 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0291dc6c-c736-4399-82cf-c07723a5f8b3-catalog-content\") pod \"0291dc6c-c736-4399-82cf-c07723a5f8b3\" (UID: \"0291dc6c-c736-4399-82cf-c07723a5f8b3\") " Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.320484 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftmn6\" (UniqueName: \"kubernetes.io/projected/0291dc6c-c736-4399-82cf-c07723a5f8b3-kube-api-access-ftmn6\") pod \"0291dc6c-c736-4399-82cf-c07723a5f8b3\" (UID: \"0291dc6c-c736-4399-82cf-c07723a5f8b3\") " Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.321156 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0291dc6c-c736-4399-82cf-c07723a5f8b3-utilities" (OuterVolumeSpecName: "utilities") pod "0291dc6c-c736-4399-82cf-c07723a5f8b3" (UID: "0291dc6c-c736-4399-82cf-c07723a5f8b3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.321271 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0291dc6c-c736-4399-82cf-c07723a5f8b3-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.325998 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0291dc6c-c736-4399-82cf-c07723a5f8b3-kube-api-access-ftmn6" (OuterVolumeSpecName: "kube-api-access-ftmn6") pod "0291dc6c-c736-4399-82cf-c07723a5f8b3" (UID: "0291dc6c-c736-4399-82cf-c07723a5f8b3"). InnerVolumeSpecName "kube-api-access-ftmn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.403410 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0291dc6c-c736-4399-82cf-c07723a5f8b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0291dc6c-c736-4399-82cf-c07723a5f8b3" (UID: "0291dc6c-c736-4399-82cf-c07723a5f8b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.424637 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0291dc6c-c736-4399-82cf-c07723a5f8b3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.424700 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftmn6\" (UniqueName: \"kubernetes.io/projected/0291dc6c-c736-4399-82cf-c07723a5f8b3-kube-api-access-ftmn6\") on node \"crc\" DevicePath \"\"" Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.707377 4972 generic.go:334] "Generic (PLEG): container finished" podID="0291dc6c-c736-4399-82cf-c07723a5f8b3" containerID="703416054b9e8c69b6c09ab233ee526a5c7c28a093f0989ebd6a1b1ca2e4bc65" exitCode=0 Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.707451 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2rrm7" Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.707497 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2rrm7" event={"ID":"0291dc6c-c736-4399-82cf-c07723a5f8b3","Type":"ContainerDied","Data":"703416054b9e8c69b6c09ab233ee526a5c7c28a093f0989ebd6a1b1ca2e4bc65"} Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.707930 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2rrm7" event={"ID":"0291dc6c-c736-4399-82cf-c07723a5f8b3","Type":"ContainerDied","Data":"0b48e34acffc408e984d359c3254f882700f0ea74ae45e82a98c757baa0b9b57"} Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.707983 4972 scope.go:117] "RemoveContainer" containerID="703416054b9e8c69b6c09ab233ee526a5c7c28a093f0989ebd6a1b1ca2e4bc65" Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.732224 4972 scope.go:117] "RemoveContainer" containerID="646e7eada26d74966d72d71c90f12c188ace007a7d5e4e9fb44175cab17617a8" Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.755586 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2rrm7"] Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.767214 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2rrm7"] Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.772943 4972 scope.go:117] "RemoveContainer" containerID="3491716743a3c4c61a2d2cc85b2078836e698bbe54075edb74bc66bb794302b8" Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.820108 4972 scope.go:117] "RemoveContainer" containerID="703416054b9e8c69b6c09ab233ee526a5c7c28a093f0989ebd6a1b1ca2e4bc65" Nov 21 12:03:32 crc kubenswrapper[4972]: E1121 12:03:32.820583 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"703416054b9e8c69b6c09ab233ee526a5c7c28a093f0989ebd6a1b1ca2e4bc65\": container with ID starting with 703416054b9e8c69b6c09ab233ee526a5c7c28a093f0989ebd6a1b1ca2e4bc65 not found: ID does not exist" containerID="703416054b9e8c69b6c09ab233ee526a5c7c28a093f0989ebd6a1b1ca2e4bc65" Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.820642 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"703416054b9e8c69b6c09ab233ee526a5c7c28a093f0989ebd6a1b1ca2e4bc65"} err="failed to get container status \"703416054b9e8c69b6c09ab233ee526a5c7c28a093f0989ebd6a1b1ca2e4bc65\": rpc error: code = NotFound desc = could not find container \"703416054b9e8c69b6c09ab233ee526a5c7c28a093f0989ebd6a1b1ca2e4bc65\": container with ID starting with 703416054b9e8c69b6c09ab233ee526a5c7c28a093f0989ebd6a1b1ca2e4bc65 not found: ID does not exist" Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.820675 4972 scope.go:117] "RemoveContainer" containerID="646e7eada26d74966d72d71c90f12c188ace007a7d5e4e9fb44175cab17617a8" Nov 21 12:03:32 crc kubenswrapper[4972]: E1121 12:03:32.821193 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"646e7eada26d74966d72d71c90f12c188ace007a7d5e4e9fb44175cab17617a8\": container with ID starting with 646e7eada26d74966d72d71c90f12c188ace007a7d5e4e9fb44175cab17617a8 not found: ID does not exist" containerID="646e7eada26d74966d72d71c90f12c188ace007a7d5e4e9fb44175cab17617a8" Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.821227 4972 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"646e7eada26d74966d72d71c90f12c188ace007a7d5e4e9fb44175cab17617a8"} err="failed to get container status \"646e7eada26d74966d72d71c90f12c188ace007a7d5e4e9fb44175cab17617a8\": rpc error: code = NotFound desc = could not find container \"646e7eada26d74966d72d71c90f12c188ace007a7d5e4e9fb44175cab17617a8\": container with ID starting with 646e7eada26d74966d72d71c90f12c188ace007a7d5e4e9fb44175cab17617a8 not found: ID does not exist" Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.821280 4972 scope.go:117] "RemoveContainer" containerID="3491716743a3c4c61a2d2cc85b2078836e698bbe54075edb74bc66bb794302b8" Nov 21 12:03:32 crc kubenswrapper[4972]: E1121 12:03:32.821609 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3491716743a3c4c61a2d2cc85b2078836e698bbe54075edb74bc66bb794302b8\": container with ID starting with 3491716743a3c4c61a2d2cc85b2078836e698bbe54075edb74bc66bb794302b8 not found: ID does not exist" containerID="3491716743a3c4c61a2d2cc85b2078836e698bbe54075edb74bc66bb794302b8" Nov 21 12:03:32 crc kubenswrapper[4972]: I1121 12:03:32.821641 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3491716743a3c4c61a2d2cc85b2078836e698bbe54075edb74bc66bb794302b8"} err="failed to get container status \"3491716743a3c4c61a2d2cc85b2078836e698bbe54075edb74bc66bb794302b8\": rpc error: code = NotFound desc = could not find container \"3491716743a3c4c61a2d2cc85b2078836e698bbe54075edb74bc66bb794302b8\": container with ID starting with 3491716743a3c4c61a2d2cc85b2078836e698bbe54075edb74bc66bb794302b8 not found: ID does not exist" Nov 21 12:03:33 crc kubenswrapper[4972]: I1121 12:03:33.773972 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0291dc6c-c736-4399-82cf-c07723a5f8b3" path="/var/lib/kubelet/pods/0291dc6c-c736-4399-82cf-c07723a5f8b3/volumes" Nov 21 12:03:36 crc kubenswrapper[4972]: I1121 12:03:36.760962 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:03:36 crc kubenswrapper[4972]: E1121 12:03:36.761675 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:03:51 crc kubenswrapper[4972]: I1121 12:03:51.760850 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:03:51 crc kubenswrapper[4972]: E1121 12:03:51.761696 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:03:53 crc kubenswrapper[4972]: I1121 12:03:53.944131 4972 generic.go:334] "Generic (PLEG): container finished" podID="2f0c2649-60a5-4c8b-8469-bd17ee2fac3f" 
containerID="966d01453a331353b6d7133f21d27ec2276313f3e8aeaa52330f5d022cceaff6" exitCode=0 Nov 21 12:03:53 crc kubenswrapper[4972]: I1121 12:03:53.944286 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" event={"ID":"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f","Type":"ContainerDied","Data":"966d01453a331353b6d7133f21d27ec2276313f3e8aeaa52330f5d022cceaff6"} Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.482713 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.576577 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceilometer-compute-config-data-2\") pod \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.576691 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ssh-key\") pod \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.576743 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceph\") pod \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.576773 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-telemetry-combined-ca-bundle\") pod \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.576818 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceilometer-compute-config-data-0\") pod \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.576950 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceilometer-compute-config-data-1\") pod \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.577052 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-845kz\" (UniqueName: \"kubernetes.io/projected/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-kube-api-access-845kz\") pod \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.577098 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-inventory\") pod \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\" (UID: \"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f\") " Nov 21 12:03:55 crc 
kubenswrapper[4972]: I1121 12:03:55.582203 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceph" (OuterVolumeSpecName: "ceph") pod "2f0c2649-60a5-4c8b-8469-bd17ee2fac3f" (UID: "2f0c2649-60a5-4c8b-8469-bd17ee2fac3f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.583462 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-kube-api-access-845kz" (OuterVolumeSpecName: "kube-api-access-845kz") pod "2f0c2649-60a5-4c8b-8469-bd17ee2fac3f" (UID: "2f0c2649-60a5-4c8b-8469-bd17ee2fac3f"). InnerVolumeSpecName "kube-api-access-845kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.595142 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2f0c2649-60a5-4c8b-8469-bd17ee2fac3f" (UID: "2f0c2649-60a5-4c8b-8469-bd17ee2fac3f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.608913 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "2f0c2649-60a5-4c8b-8469-bd17ee2fac3f" (UID: "2f0c2649-60a5-4c8b-8469-bd17ee2fac3f"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.610040 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-inventory" (OuterVolumeSpecName: "inventory") pod "2f0c2649-60a5-4c8b-8469-bd17ee2fac3f" (UID: "2f0c2649-60a5-4c8b-8469-bd17ee2fac3f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.611413 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "2f0c2649-60a5-4c8b-8469-bd17ee2fac3f" (UID: "2f0c2649-60a5-4c8b-8469-bd17ee2fac3f"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.611610 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2f0c2649-60a5-4c8b-8469-bd17ee2fac3f" (UID: "2f0c2649-60a5-4c8b-8469-bd17ee2fac3f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.628634 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "2f0c2649-60a5-4c8b-8469-bd17ee2fac3f" (UID: "2f0c2649-60a5-4c8b-8469-bd17ee2fac3f"). 
InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.679120 4972 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.679175 4972 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.679187 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.679195 4972 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.679206 4972 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.679215 4972 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.679224 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-845kz\" (UniqueName: \"kubernetes.io/projected/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-kube-api-access-845kz\") on node \"crc\" DevicePath \"\"" Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.679233 4972 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f0c2649-60a5-4c8b-8469-bd17ee2fac3f-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.967149 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" event={"ID":"2f0c2649-60a5-4c8b-8469-bd17ee2fac3f","Type":"ContainerDied","Data":"f659a4d86fc0cfaefe4fd9c5797f555b3dbbe497d1dd34cb1d063e428004e9b2"} Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.967201 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f659a4d86fc0cfaefe4fd9c5797f555b3dbbe497d1dd34cb1d063e428004e9b2" Nov 21 12:03:55 crc kubenswrapper[4972]: I1121 12:03:55.967210 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-j6j8b" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.094912 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-g76jm"] Nov 21 12:03:56 crc kubenswrapper[4972]: E1121 12:03:56.095401 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0291dc6c-c736-4399-82cf-c07723a5f8b3" containerName="extract-content" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.095418 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0291dc6c-c736-4399-82cf-c07723a5f8b3" containerName="extract-content" Nov 21 12:03:56 crc kubenswrapper[4972]: E1121 12:03:56.095435 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0291dc6c-c736-4399-82cf-c07723a5f8b3" containerName="registry-server" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.095445 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0291dc6c-c736-4399-82cf-c07723a5f8b3" containerName="registry-server" Nov 21 12:03:56 crc kubenswrapper[4972]: E1121 12:03:56.095490 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0291dc6c-c736-4399-82cf-c07723a5f8b3" containerName="extract-utilities" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.095497 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0291dc6c-c736-4399-82cf-c07723a5f8b3" containerName="extract-utilities" Nov 21 12:03:56 crc kubenswrapper[4972]: E1121 12:03:56.095507 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f0c2649-60a5-4c8b-8469-bd17ee2fac3f" containerName="telemetry-openstack-openstack-cell1" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.095513 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f0c2649-60a5-4c8b-8469-bd17ee2fac3f" containerName="telemetry-openstack-openstack-cell1" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.095728 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="0291dc6c-c736-4399-82cf-c07723a5f8b3" containerName="registry-server" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.095754 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f0c2649-60a5-4c8b-8469-bd17ee2fac3f" containerName="telemetry-openstack-openstack-cell1" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.096666 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.099210 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g4l5l" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.099262 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.099540 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.100501 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.103523 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.108033 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-g76jm"] Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.191676 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9bvz\" (UniqueName: \"kubernetes.io/projected/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-kube-api-access-m9bvz\") pod \"neutron-sriov-openstack-openstack-cell1-g76jm\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.191771 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-g76jm\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.191795 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-g76jm\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.192070 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-g76jm\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.192285 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-g76jm\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.192492 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-g76jm\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.293878 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-g76jm\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.293923 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-g76jm\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.293989 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-g76jm\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.294028 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-g76jm\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.294087 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-g76jm\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.294156 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9bvz\" (UniqueName: \"kubernetes.io/projected/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-kube-api-access-m9bvz\") pod \"neutron-sriov-openstack-openstack-cell1-g76jm\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.298135 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-g76jm\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.298131 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-inventory\") pod 
\"neutron-sriov-openstack-openstack-cell1-g76jm\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.298515 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-g76jm\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.299008 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-g76jm\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.302565 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-g76jm\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.311540 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9bvz\" (UniqueName: \"kubernetes.io/projected/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-kube-api-access-m9bvz\") pod \"neutron-sriov-openstack-openstack-cell1-g76jm\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.426706 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:03:56 crc kubenswrapper[4972]: I1121 12:03:56.986190 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-g76jm"] Nov 21 12:03:57 crc kubenswrapper[4972]: I1121 12:03:57.987917 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" event={"ID":"c602c118-ca24-4a6a-ada2-b76d3e4f7e25","Type":"ContainerStarted","Data":"645b0c79a28770022da389ae4dd024c1eff1dada330d0a76d74c82a1cf36f76a"} Nov 21 12:03:59 crc kubenswrapper[4972]: I1121 12:03:59.003665 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" event={"ID":"c602c118-ca24-4a6a-ada2-b76d3e4f7e25","Type":"ContainerStarted","Data":"64f2056903307905f44824c859d221b93f198d84191108796aa079718b1e3136"} Nov 21 12:03:59 crc kubenswrapper[4972]: I1121 12:03:59.018302 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" podStartSLOduration=1.7697178660000001 podStartE2EDuration="3.018290018s" podCreationTimestamp="2025-11-21 12:03:56 +0000 UTC" firstStartedPulling="2025-11-21 12:03:56.992537088 +0000 UTC m=+8582.101679586" lastFinishedPulling="2025-11-21 12:03:58.24110924 +0000 UTC m=+8583.350251738" observedRunningTime="2025-11-21 12:03:59.016792969 +0000 UTC m=+8584.125935487" watchObservedRunningTime="2025-11-21 12:03:59.018290018 +0000 UTC m=+8584.127432506" Nov 21 12:04:00 crc kubenswrapper[4972]: E1121 12:04:00.098543 4972 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f0c2649_60a5_4c8b_8469_bd17ee2fac3f.slice\": RecentStats: unable to find data in memory cache]" Nov 21 12:04:03 crc kubenswrapper[4972]: I1121 12:04:03.759151 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:04:03 crc kubenswrapper[4972]: E1121 12:04:03.759687 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:04:10 crc kubenswrapper[4972]: E1121 12:04:10.372798 4972 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f0c2649_60a5_4c8b_8469_bd17ee2fac3f.slice\": RecentStats: unable to find data in memory cache]" Nov 21 12:04:18 crc kubenswrapper[4972]: I1121 12:04:18.760724 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:04:18 crc kubenswrapper[4972]: E1121 12:04:18.762146 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:04:20 crc kubenswrapper[4972]: E1121 12:04:20.697539 4972 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f0c2649_60a5_4c8b_8469_bd17ee2fac3f.slice\": RecentStats: unable to find data in memory cache]" Nov 21 12:04:30 crc kubenswrapper[4972]: E1121 12:04:30.973772 4972 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f0c2649_60a5_4c8b_8469_bd17ee2fac3f.slice\": RecentStats: unable to find data in memory cache]" Nov 21 12:04:32 crc kubenswrapper[4972]: I1121 12:04:32.759757 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:04:32 crc kubenswrapper[4972]: E1121 12:04:32.760328 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:04:41 crc kubenswrapper[4972]: E1121 12:04:41.314774 4972 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f0c2649_60a5_4c8b_8469_bd17ee2fac3f.slice\": RecentStats: unable to find data in memory cache]" Nov 21 12:04:46 crc kubenswrapper[4972]: I1121 12:04:46.760230 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:04:46 crc kubenswrapper[4972]: E1121 12:04:46.761579 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:04:51 crc kubenswrapper[4972]: E1121 12:04:51.594571 4972 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f0c2649_60a5_4c8b_8469_bd17ee2fac3f.slice\": RecentStats: unable to find data in memory cache]" Nov 21 12:04:58 crc kubenswrapper[4972]: I1121 12:04:58.759541 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:04:58 crc kubenswrapper[4972]: E1121 12:04:58.760472 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:05:09 crc kubenswrapper[4972]: I1121 12:05:09.759543 4972 
scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:05:09 crc kubenswrapper[4972]: E1121 12:05:09.760537 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:05:22 crc kubenswrapper[4972]: I1121 12:05:22.759280 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:05:22 crc kubenswrapper[4972]: E1121 12:05:22.760130 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:05:37 crc kubenswrapper[4972]: I1121 12:05:37.760911 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:05:37 crc kubenswrapper[4972]: E1121 12:05:37.762600 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:05:52 crc kubenswrapper[4972]: I1121 12:05:52.759812 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:05:52 crc kubenswrapper[4972]: E1121 12:05:52.760579 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:06:03 crc kubenswrapper[4972]: I1121 12:06:03.759530 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:06:03 crc kubenswrapper[4972]: E1121 12:06:03.760756 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:06:14 crc kubenswrapper[4972]: I1121 12:06:14.759694 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:06:14 crc kubenswrapper[4972]: E1121 12:06:14.761299 4972 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:06:22 crc kubenswrapper[4972]: I1121 12:06:22.701026 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-6998585d5-c8llb" podUID="e74efb92-2741-41d9-a2aa-01e53dc1492c" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.54:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 12:06:25 crc kubenswrapper[4972]: I1121 12:06:25.770928 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:06:25 crc kubenswrapper[4972]: E1121 12:06:25.771557 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:06:36 crc kubenswrapper[4972]: I1121 12:06:36.759963 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:06:36 crc kubenswrapper[4972]: E1121 12:06:36.761496 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:06:48 crc kubenswrapper[4972]: I1121 12:06:48.760691 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:06:48 crc kubenswrapper[4972]: E1121 12:06:48.762245 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:07:03 crc kubenswrapper[4972]: I1121 12:07:03.760272 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:07:03 crc kubenswrapper[4972]: E1121 12:07:03.761300 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:07:14 crc 
kubenswrapper[4972]: I1121 12:07:14.760693 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:07:14 crc kubenswrapper[4972]: E1121 12:07:14.761519 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:07:28 crc kubenswrapper[4972]: I1121 12:07:28.759505 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:07:29 crc kubenswrapper[4972]: I1121 12:07:29.375385 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"3ff1fe0f86f159c13965155761446e07b49337452a7fff682ad45f4aeec89e81"} Nov 21 12:08:20 crc kubenswrapper[4972]: I1121 12:08:20.915367 4972 generic.go:334] "Generic (PLEG): container finished" podID="c602c118-ca24-4a6a-ada2-b76d3e4f7e25" containerID="64f2056903307905f44824c859d221b93f198d84191108796aa079718b1e3136" exitCode=0 Nov 21 12:08:20 crc kubenswrapper[4972]: I1121 12:08:20.915437 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" event={"ID":"c602c118-ca24-4a6a-ada2-b76d3e4f7e25","Type":"ContainerDied","Data":"64f2056903307905f44824c859d221b93f198d84191108796aa079718b1e3136"} Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.414771 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.549589 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9bvz\" (UniqueName: \"kubernetes.io/projected/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-kube-api-access-m9bvz\") pod \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.549800 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-ssh-key\") pod \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.549892 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-ceph\") pod \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.550024 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-neutron-sriov-agent-neutron-config-0\") pod \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.550085 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-inventory\") pod \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.550188 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-neutron-sriov-combined-ca-bundle\") pod \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\" (UID: \"c602c118-ca24-4a6a-ada2-b76d3e4f7e25\") " Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.556626 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-kube-api-access-m9bvz" (OuterVolumeSpecName: "kube-api-access-m9bvz") pod "c602c118-ca24-4a6a-ada2-b76d3e4f7e25" (UID: "c602c118-ca24-4a6a-ada2-b76d3e4f7e25"). InnerVolumeSpecName "kube-api-access-m9bvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.557050 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "c602c118-ca24-4a6a-ada2-b76d3e4f7e25" (UID: "c602c118-ca24-4a6a-ada2-b76d3e4f7e25"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.572153 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-ceph" (OuterVolumeSpecName: "ceph") pod "c602c118-ca24-4a6a-ada2-b76d3e4f7e25" (UID: "c602c118-ca24-4a6a-ada2-b76d3e4f7e25"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.579579 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c602c118-ca24-4a6a-ada2-b76d3e4f7e25" (UID: "c602c118-ca24-4a6a-ada2-b76d3e4f7e25"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.588117 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "c602c118-ca24-4a6a-ada2-b76d3e4f7e25" (UID: "c602c118-ca24-4a6a-ada2-b76d3e4f7e25"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.597095 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-inventory" (OuterVolumeSpecName: "inventory") pod "c602c118-ca24-4a6a-ada2-b76d3e4f7e25" (UID: "c602c118-ca24-4a6a-ada2-b76d3e4f7e25"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.653339 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9bvz\" (UniqueName: \"kubernetes.io/projected/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-kube-api-access-m9bvz\") on node \"crc\" DevicePath \"\"" Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.653380 4972 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.653391 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.653402 4972 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.653411 4972 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.653420 4972 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c602c118-ca24-4a6a-ada2-b76d3e4f7e25-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.941708 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" event={"ID":"c602c118-ca24-4a6a-ada2-b76d3e4f7e25","Type":"ContainerDied","Data":"645b0c79a28770022da389ae4dd024c1eff1dada330d0a76d74c82a1cf36f76a"} Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.941752 4972 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="645b0c79a28770022da389ae4dd024c1eff1dada330d0a76d74c82a1cf36f76a" Nov 21 12:08:22 crc kubenswrapper[4972]: I1121 12:08:22.942269 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-g76jm" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.053494 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp"] Nov 21 12:08:23 crc kubenswrapper[4972]: E1121 12:08:23.054270 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c602c118-ca24-4a6a-ada2-b76d3e4f7e25" containerName="neutron-sriov-openstack-openstack-cell1" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.054290 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="c602c118-ca24-4a6a-ada2-b76d3e4f7e25" containerName="neutron-sriov-openstack-openstack-cell1" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.054531 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="c602c118-ca24-4a6a-ada2-b76d3e4f7e25" containerName="neutron-sriov-openstack-openstack-cell1" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.055272 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.057593 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.057653 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.057996 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.058034 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g4l5l" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.058012 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.165224 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-b4mxp\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.165819 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-b4mxp\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.165946 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drf4v\" (UniqueName: \"kubernetes.io/projected/99666862-2043-4347-b3a1-7b16e424137e-kube-api-access-drf4v\") pod \"neutron-dhcp-openstack-openstack-cell1-b4mxp\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " 
pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.166011 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-b4mxp\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.166042 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-b4mxp\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.166189 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-b4mxp\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.173161 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp"] Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.268233 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-b4mxp\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.268297 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-b4mxp\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.268353 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drf4v\" (UniqueName: \"kubernetes.io/projected/99666862-2043-4347-b3a1-7b16e424137e-kube-api-access-drf4v\") pod \"neutron-dhcp-openstack-openstack-cell1-b4mxp\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.268397 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-b4mxp\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.268421 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-ceph\") pod 
\"neutron-dhcp-openstack-openstack-cell1-b4mxp\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.268498 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-b4mxp\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.272850 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-b4mxp\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.273914 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-b4mxp\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.274129 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-b4mxp\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.275093 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-b4mxp\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.275090 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-b4mxp\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.288531 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drf4v\" (UniqueName: \"kubernetes.io/projected/99666862-2043-4347-b3a1-7b16e424137e-kube-api-access-drf4v\") pod \"neutron-dhcp-openstack-openstack-cell1-b4mxp\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.373919 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.947903 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 12:08:23 crc kubenswrapper[4972]: I1121 12:08:23.948431 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp"] Nov 21 12:08:24 crc kubenswrapper[4972]: I1121 12:08:24.962980 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" event={"ID":"99666862-2043-4347-b3a1-7b16e424137e","Type":"ContainerStarted","Data":"bae66049e312da5d29b2613eba480876eca55b8fa0d834cd88a0069b70de8fdd"} Nov 21 12:08:25 crc kubenswrapper[4972]: I1121 12:08:25.973214 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" event={"ID":"99666862-2043-4347-b3a1-7b16e424137e","Type":"ContainerStarted","Data":"cbca61f92c9ff8194d9c04b038cb248aae5c01476cee0f2c07ed55b63b602293"} Nov 21 12:08:25 crc kubenswrapper[4972]: I1121 12:08:25.999541 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" podStartSLOduration=1.79269063 podStartE2EDuration="2.999523477s" podCreationTimestamp="2025-11-21 12:08:23 +0000 UTC" firstStartedPulling="2025-11-21 12:08:23.947611598 +0000 UTC m=+8849.056754096" lastFinishedPulling="2025-11-21 12:08:25.154444445 +0000 UTC m=+8850.263586943" observedRunningTime="2025-11-21 12:08:25.98744992 +0000 UTC m=+8851.096592428" watchObservedRunningTime="2025-11-21 12:08:25.999523477 +0000 UTC m=+8851.108665975" Nov 21 12:09:56 crc kubenswrapper[4972]: I1121 12:09:56.178708 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 12:09:56 crc kubenswrapper[4972]: I1121 12:09:56.180093 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 12:09:58 crc kubenswrapper[4972]: I1121 12:09:58.485399 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-whq4p"] Nov 21 12:09:58 crc kubenswrapper[4972]: I1121 12:09:58.490628 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-whq4p" Nov 21 12:09:58 crc kubenswrapper[4972]: I1121 12:09:58.504297 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-whq4p"] Nov 21 12:09:58 crc kubenswrapper[4972]: I1121 12:09:58.642106 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbd8p\" (UniqueName: \"kubernetes.io/projected/483879f2-127d-436d-9af5-ca65f233a2fc-kube-api-access-xbd8p\") pod \"redhat-operators-whq4p\" (UID: \"483879f2-127d-436d-9af5-ca65f233a2fc\") " pod="openshift-marketplace/redhat-operators-whq4p" Nov 21 12:09:58 crc kubenswrapper[4972]: I1121 12:09:58.642189 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483879f2-127d-436d-9af5-ca65f233a2fc-utilities\") pod \"redhat-operators-whq4p\" (UID: \"483879f2-127d-436d-9af5-ca65f233a2fc\") " pod="openshift-marketplace/redhat-operators-whq4p" Nov 21 12:09:58 crc kubenswrapper[4972]: I1121 12:09:58.643388 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483879f2-127d-436d-9af5-ca65f233a2fc-catalog-content\") pod \"redhat-operators-whq4p\" (UID: \"483879f2-127d-436d-9af5-ca65f233a2fc\") " pod="openshift-marketplace/redhat-operators-whq4p" Nov 21 12:09:58 crc kubenswrapper[4972]: I1121 12:09:58.745262 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483879f2-127d-436d-9af5-ca65f233a2fc-catalog-content\") pod \"redhat-operators-whq4p\" (UID: \"483879f2-127d-436d-9af5-ca65f233a2fc\") " pod="openshift-marketplace/redhat-operators-whq4p" Nov 21 12:09:58 crc kubenswrapper[4972]: I1121 12:09:58.745422 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbd8p\" (UniqueName: \"kubernetes.io/projected/483879f2-127d-436d-9af5-ca65f233a2fc-kube-api-access-xbd8p\") pod \"redhat-operators-whq4p\" (UID: \"483879f2-127d-436d-9af5-ca65f233a2fc\") " pod="openshift-marketplace/redhat-operators-whq4p" Nov 21 12:09:58 crc kubenswrapper[4972]: I1121 12:09:58.745460 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483879f2-127d-436d-9af5-ca65f233a2fc-utilities\") pod \"redhat-operators-whq4p\" (UID: \"483879f2-127d-436d-9af5-ca65f233a2fc\") " pod="openshift-marketplace/redhat-operators-whq4p" Nov 21 12:09:58 crc kubenswrapper[4972]: I1121 12:09:58.745769 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483879f2-127d-436d-9af5-ca65f233a2fc-catalog-content\") pod \"redhat-operators-whq4p\" (UID: \"483879f2-127d-436d-9af5-ca65f233a2fc\") " pod="openshift-marketplace/redhat-operators-whq4p" Nov 21 12:09:58 crc kubenswrapper[4972]: I1121 12:09:58.745901 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483879f2-127d-436d-9af5-ca65f233a2fc-utilities\") pod \"redhat-operators-whq4p\" (UID: \"483879f2-127d-436d-9af5-ca65f233a2fc\") " pod="openshift-marketplace/redhat-operators-whq4p" Nov 21 12:09:58 crc kubenswrapper[4972]: I1121 12:09:58.779717 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xbd8p\" (UniqueName: \"kubernetes.io/projected/483879f2-127d-436d-9af5-ca65f233a2fc-kube-api-access-xbd8p\") pod \"redhat-operators-whq4p\" (UID: \"483879f2-127d-436d-9af5-ca65f233a2fc\") " pod="openshift-marketplace/redhat-operators-whq4p" Nov 21 12:09:58 crc kubenswrapper[4972]: I1121 12:09:58.817443 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-whq4p" Nov 21 12:09:59 crc kubenswrapper[4972]: I1121 12:09:59.337788 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-whq4p"] Nov 21 12:10:00 crc kubenswrapper[4972]: I1121 12:10:00.089076 4972 generic.go:334] "Generic (PLEG): container finished" podID="483879f2-127d-436d-9af5-ca65f233a2fc" containerID="dd0b40925696984c1b8f86e1386e915f7c51455006fe3b4f73aa69679d9e72f3" exitCode=0 Nov 21 12:10:00 crc kubenswrapper[4972]: I1121 12:10:00.089126 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whq4p" event={"ID":"483879f2-127d-436d-9af5-ca65f233a2fc","Type":"ContainerDied","Data":"dd0b40925696984c1b8f86e1386e915f7c51455006fe3b4f73aa69679d9e72f3"} Nov 21 12:10:00 crc kubenswrapper[4972]: I1121 12:10:00.089333 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whq4p" event={"ID":"483879f2-127d-436d-9af5-ca65f233a2fc","Type":"ContainerStarted","Data":"869c2fc191af9cac23716e2fe45947c983f76c6bee0bed789720ebc854017e66"} Nov 21 12:10:01 crc kubenswrapper[4972]: I1121 12:10:01.695792 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-65fdh"] Nov 21 12:10:01 crc kubenswrapper[4972]: I1121 12:10:01.699916 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-65fdh" Nov 21 12:10:01 crc kubenswrapper[4972]: I1121 12:10:01.710126 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-65fdh"] Nov 21 12:10:01 crc kubenswrapper[4972]: I1121 12:10:01.814430 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/917736f3-b7e7-42a2-bce2-aad0c4c9e53e-catalog-content\") pod \"community-operators-65fdh\" (UID: \"917736f3-b7e7-42a2-bce2-aad0c4c9e53e\") " pod="openshift-marketplace/community-operators-65fdh" Nov 21 12:10:01 crc kubenswrapper[4972]: I1121 12:10:01.814871 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/917736f3-b7e7-42a2-bce2-aad0c4c9e53e-utilities\") pod \"community-operators-65fdh\" (UID: \"917736f3-b7e7-42a2-bce2-aad0c4c9e53e\") " pod="openshift-marketplace/community-operators-65fdh" Nov 21 12:10:01 crc kubenswrapper[4972]: I1121 12:10:01.815039 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t62lr\" (UniqueName: \"kubernetes.io/projected/917736f3-b7e7-42a2-bce2-aad0c4c9e53e-kube-api-access-t62lr\") pod \"community-operators-65fdh\" (UID: \"917736f3-b7e7-42a2-bce2-aad0c4c9e53e\") " pod="openshift-marketplace/community-operators-65fdh" Nov 21 12:10:01 crc kubenswrapper[4972]: I1121 12:10:01.918169 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t62lr\" (UniqueName: \"kubernetes.io/projected/917736f3-b7e7-42a2-bce2-aad0c4c9e53e-kube-api-access-t62lr\") pod \"community-operators-65fdh\" (UID: \"917736f3-b7e7-42a2-bce2-aad0c4c9e53e\") " pod="openshift-marketplace/community-operators-65fdh" Nov 21 12:10:01 crc kubenswrapper[4972]: I1121 12:10:01.918395 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/917736f3-b7e7-42a2-bce2-aad0c4c9e53e-catalog-content\") pod \"community-operators-65fdh\" (UID: \"917736f3-b7e7-42a2-bce2-aad0c4c9e53e\") " pod="openshift-marketplace/community-operators-65fdh" Nov 21 12:10:01 crc kubenswrapper[4972]: I1121 12:10:01.918537 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/917736f3-b7e7-42a2-bce2-aad0c4c9e53e-utilities\") pod \"community-operators-65fdh\" (UID: \"917736f3-b7e7-42a2-bce2-aad0c4c9e53e\") " pod="openshift-marketplace/community-operators-65fdh" Nov 21 12:10:01 crc kubenswrapper[4972]: I1121 12:10:01.920290 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/917736f3-b7e7-42a2-bce2-aad0c4c9e53e-utilities\") pod \"community-operators-65fdh\" (UID: \"917736f3-b7e7-42a2-bce2-aad0c4c9e53e\") " pod="openshift-marketplace/community-operators-65fdh" Nov 21 12:10:01 crc kubenswrapper[4972]: I1121 12:10:01.920350 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/917736f3-b7e7-42a2-bce2-aad0c4c9e53e-catalog-content\") pod \"community-operators-65fdh\" (UID: \"917736f3-b7e7-42a2-bce2-aad0c4c9e53e\") " pod="openshift-marketplace/community-operators-65fdh" Nov 21 12:10:01 crc kubenswrapper[4972]: I1121 12:10:01.946653 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-t62lr\" (UniqueName: \"kubernetes.io/projected/917736f3-b7e7-42a2-bce2-aad0c4c9e53e-kube-api-access-t62lr\") pod \"community-operators-65fdh\" (UID: \"917736f3-b7e7-42a2-bce2-aad0c4c9e53e\") " pod="openshift-marketplace/community-operators-65fdh" Nov 21 12:10:02 crc kubenswrapper[4972]: I1121 12:10:02.030695 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-65fdh" Nov 21 12:10:02 crc kubenswrapper[4972]: I1121 12:10:02.727567 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-65fdh"] Nov 21 12:10:02 crc kubenswrapper[4972]: W1121 12:10:02.735305 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod917736f3_b7e7_42a2_bce2_aad0c4c9e53e.slice/crio-7edb82dab71a878d8a18d04c810015ea0a23328fe60590f8eaf36dcf14c65776 WatchSource:0}: Error finding container 7edb82dab71a878d8a18d04c810015ea0a23328fe60590f8eaf36dcf14c65776: Status 404 returned error can't find the container with id 7edb82dab71a878d8a18d04c810015ea0a23328fe60590f8eaf36dcf14c65776 Nov 21 12:10:03 crc kubenswrapper[4972]: I1121 12:10:03.121634 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whq4p" event={"ID":"483879f2-127d-436d-9af5-ca65f233a2fc","Type":"ContainerStarted","Data":"a4a8d0ba9f2c671f6896af3f0c2f1dc1e802aefa867725309cfbfc1a13096ab1"} Nov 21 12:10:03 crc kubenswrapper[4972]: I1121 12:10:03.126226 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65fdh" event={"ID":"917736f3-b7e7-42a2-bce2-aad0c4c9e53e","Type":"ContainerStarted","Data":"7edb82dab71a878d8a18d04c810015ea0a23328fe60590f8eaf36dcf14c65776"} Nov 21 12:10:04 crc kubenswrapper[4972]: I1121 12:10:04.139928 4972 generic.go:334] "Generic (PLEG): container finished" podID="917736f3-b7e7-42a2-bce2-aad0c4c9e53e" containerID="c14b6133d92eb84d8d2ba4e9f7ed91a17e6bb093cd4d0686ce3262a7cf930062" exitCode=0 Nov 21 12:10:04 crc kubenswrapper[4972]: I1121 12:10:04.140046 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65fdh" event={"ID":"917736f3-b7e7-42a2-bce2-aad0c4c9e53e","Type":"ContainerDied","Data":"c14b6133d92eb84d8d2ba4e9f7ed91a17e6bb093cd4d0686ce3262a7cf930062"} Nov 21 12:10:08 crc kubenswrapper[4972]: I1121 12:10:08.182414 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65fdh" event={"ID":"917736f3-b7e7-42a2-bce2-aad0c4c9e53e","Type":"ContainerStarted","Data":"a2408655482bf74b4744c4f64f43ef858eac26a44b7aa77b1e1afe62b6b6262a"} Nov 21 12:10:16 crc kubenswrapper[4972]: I1121 12:10:16.265048 4972 generic.go:334] "Generic (PLEG): container finished" podID="483879f2-127d-436d-9af5-ca65f233a2fc" containerID="a4a8d0ba9f2c671f6896af3f0c2f1dc1e802aefa867725309cfbfc1a13096ab1" exitCode=0 Nov 21 12:10:16 crc kubenswrapper[4972]: I1121 12:10:16.265097 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whq4p" event={"ID":"483879f2-127d-436d-9af5-ca65f233a2fc","Type":"ContainerDied","Data":"a4a8d0ba9f2c671f6896af3f0c2f1dc1e802aefa867725309cfbfc1a13096ab1"} Nov 21 12:10:17 crc kubenswrapper[4972]: I1121 12:10:17.278742 4972 generic.go:334] "Generic (PLEG): container finished" podID="917736f3-b7e7-42a2-bce2-aad0c4c9e53e" 
containerID="a2408655482bf74b4744c4f64f43ef858eac26a44b7aa77b1e1afe62b6b6262a" exitCode=0 Nov 21 12:10:17 crc kubenswrapper[4972]: I1121 12:10:17.278852 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65fdh" event={"ID":"917736f3-b7e7-42a2-bce2-aad0c4c9e53e","Type":"ContainerDied","Data":"a2408655482bf74b4744c4f64f43ef858eac26a44b7aa77b1e1afe62b6b6262a"} Nov 21 12:10:17 crc kubenswrapper[4972]: I1121 12:10:17.284537 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whq4p" event={"ID":"483879f2-127d-436d-9af5-ca65f233a2fc","Type":"ContainerStarted","Data":"c2be66fb99cf1bd6c44a6a7f7c45fd8777b26ccabab34a19d9e01023e69390ca"} Nov 21 12:10:17 crc kubenswrapper[4972]: I1121 12:10:17.325074 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-whq4p" podStartSLOduration=3.524155004 podStartE2EDuration="19.325055149s" podCreationTimestamp="2025-11-21 12:09:58 +0000 UTC" firstStartedPulling="2025-11-21 12:10:01.100039472 +0000 UTC m=+8946.209181970" lastFinishedPulling="2025-11-21 12:10:16.900939617 +0000 UTC m=+8962.010082115" observedRunningTime="2025-11-21 12:10:17.316545515 +0000 UTC m=+8962.425688013" watchObservedRunningTime="2025-11-21 12:10:17.325055149 +0000 UTC m=+8962.434197647" Nov 21 12:10:18 crc kubenswrapper[4972]: I1121 12:10:18.300467 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65fdh" event={"ID":"917736f3-b7e7-42a2-bce2-aad0c4c9e53e","Type":"ContainerStarted","Data":"327d0104af211bcb70bf87cb7446ae6c8332607841a5d2d627b4e7e9d23d6b76"} Nov 21 12:10:18 crc kubenswrapper[4972]: I1121 12:10:18.334935 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-65fdh" podStartSLOduration=3.720418415 podStartE2EDuration="17.334914233s" podCreationTimestamp="2025-11-21 12:10:01 +0000 UTC" firstStartedPulling="2025-11-21 12:10:04.143464212 +0000 UTC m=+8949.252606720" lastFinishedPulling="2025-11-21 12:10:17.75796002 +0000 UTC m=+8962.867102538" observedRunningTime="2025-11-21 12:10:18.325743133 +0000 UTC m=+8963.434885631" watchObservedRunningTime="2025-11-21 12:10:18.334914233 +0000 UTC m=+8963.444056731" Nov 21 12:10:18 crc kubenswrapper[4972]: I1121 12:10:18.819004 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-whq4p" Nov 21 12:10:18 crc kubenswrapper[4972]: I1121 12:10:18.819262 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-whq4p" Nov 21 12:10:20 crc kubenswrapper[4972]: I1121 12:10:20.045173 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-whq4p" podUID="483879f2-127d-436d-9af5-ca65f233a2fc" containerName="registry-server" probeResult="failure" output=< Nov 21 12:10:20 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 12:10:20 crc kubenswrapper[4972]: > Nov 21 12:10:22 crc kubenswrapper[4972]: I1121 12:10:22.031544 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-65fdh" Nov 21 12:10:22 crc kubenswrapper[4972]: I1121 12:10:22.031908 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-65fdh" Nov 21 12:10:23 crc kubenswrapper[4972]: I1121 12:10:23.085306 
4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-65fdh" podUID="917736f3-b7e7-42a2-bce2-aad0c4c9e53e" containerName="registry-server" probeResult="failure" output=< Nov 21 12:10:23 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 12:10:23 crc kubenswrapper[4972]: > Nov 21 12:10:26 crc kubenswrapper[4972]: I1121 12:10:26.179583 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 12:10:26 crc kubenswrapper[4972]: I1121 12:10:26.180088 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 12:10:29 crc kubenswrapper[4972]: I1121 12:10:29.879527 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-whq4p" podUID="483879f2-127d-436d-9af5-ca65f233a2fc" containerName="registry-server" probeResult="failure" output=< Nov 21 12:10:29 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 12:10:29 crc kubenswrapper[4972]: > Nov 21 12:10:33 crc kubenswrapper[4972]: I1121 12:10:33.081602 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-65fdh" podUID="917736f3-b7e7-42a2-bce2-aad0c4c9e53e" containerName="registry-server" probeResult="failure" output=< Nov 21 12:10:33 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 12:10:33 crc kubenswrapper[4972]: > Nov 21 12:10:39 crc kubenswrapper[4972]: I1121 12:10:39.898423 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-whq4p" podUID="483879f2-127d-436d-9af5-ca65f233a2fc" containerName="registry-server" probeResult="failure" output=< Nov 21 12:10:39 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 12:10:39 crc kubenswrapper[4972]: > Nov 21 12:10:42 crc kubenswrapper[4972]: I1121 12:10:42.084000 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-65fdh" Nov 21 12:10:42 crc kubenswrapper[4972]: I1121 12:10:42.144006 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-65fdh" Nov 21 12:10:42 crc kubenswrapper[4972]: I1121 12:10:42.333306 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-65fdh"] Nov 21 12:10:43 crc kubenswrapper[4972]: I1121 12:10:43.597928 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-65fdh" podUID="917736f3-b7e7-42a2-bce2-aad0c4c9e53e" containerName="registry-server" containerID="cri-o://327d0104af211bcb70bf87cb7446ae6c8332607841a5d2d627b4e7e9d23d6b76" gracePeriod=2 Nov 21 12:10:44 crc kubenswrapper[4972]: I1121 12:10:44.613946 4972 generic.go:334] "Generic (PLEG): container finished" podID="917736f3-b7e7-42a2-bce2-aad0c4c9e53e" containerID="327d0104af211bcb70bf87cb7446ae6c8332607841a5d2d627b4e7e9d23d6b76" exitCode=0 Nov 21 12:10:44 
crc kubenswrapper[4972]: I1121 12:10:44.614131 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65fdh" event={"ID":"917736f3-b7e7-42a2-bce2-aad0c4c9e53e","Type":"ContainerDied","Data":"327d0104af211bcb70bf87cb7446ae6c8332607841a5d2d627b4e7e9d23d6b76"} Nov 21 12:10:45 crc kubenswrapper[4972]: I1121 12:10:45.105842 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-65fdh" Nov 21 12:10:45 crc kubenswrapper[4972]: I1121 12:10:45.280872 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/917736f3-b7e7-42a2-bce2-aad0c4c9e53e-catalog-content\") pod \"917736f3-b7e7-42a2-bce2-aad0c4c9e53e\" (UID: \"917736f3-b7e7-42a2-bce2-aad0c4c9e53e\") " Nov 21 12:10:45 crc kubenswrapper[4972]: I1121 12:10:45.281045 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t62lr\" (UniqueName: \"kubernetes.io/projected/917736f3-b7e7-42a2-bce2-aad0c4c9e53e-kube-api-access-t62lr\") pod \"917736f3-b7e7-42a2-bce2-aad0c4c9e53e\" (UID: \"917736f3-b7e7-42a2-bce2-aad0c4c9e53e\") " Nov 21 12:10:45 crc kubenswrapper[4972]: I1121 12:10:45.281215 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/917736f3-b7e7-42a2-bce2-aad0c4c9e53e-utilities\") pod \"917736f3-b7e7-42a2-bce2-aad0c4c9e53e\" (UID: \"917736f3-b7e7-42a2-bce2-aad0c4c9e53e\") " Nov 21 12:10:45 crc kubenswrapper[4972]: I1121 12:10:45.281914 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/917736f3-b7e7-42a2-bce2-aad0c4c9e53e-utilities" (OuterVolumeSpecName: "utilities") pod "917736f3-b7e7-42a2-bce2-aad0c4c9e53e" (UID: "917736f3-b7e7-42a2-bce2-aad0c4c9e53e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:10:45 crc kubenswrapper[4972]: I1121 12:10:45.291094 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/917736f3-b7e7-42a2-bce2-aad0c4c9e53e-kube-api-access-t62lr" (OuterVolumeSpecName: "kube-api-access-t62lr") pod "917736f3-b7e7-42a2-bce2-aad0c4c9e53e" (UID: "917736f3-b7e7-42a2-bce2-aad0c4c9e53e"). InnerVolumeSpecName "kube-api-access-t62lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:10:45 crc kubenswrapper[4972]: I1121 12:10:45.357601 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/917736f3-b7e7-42a2-bce2-aad0c4c9e53e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "917736f3-b7e7-42a2-bce2-aad0c4c9e53e" (UID: "917736f3-b7e7-42a2-bce2-aad0c4c9e53e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:10:45 crc kubenswrapper[4972]: I1121 12:10:45.383790 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/917736f3-b7e7-42a2-bce2-aad0c4c9e53e-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 12:10:45 crc kubenswrapper[4972]: I1121 12:10:45.383820 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/917736f3-b7e7-42a2-bce2-aad0c4c9e53e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 12:10:45 crc kubenswrapper[4972]: I1121 12:10:45.383849 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t62lr\" (UniqueName: \"kubernetes.io/projected/917736f3-b7e7-42a2-bce2-aad0c4c9e53e-kube-api-access-t62lr\") on node \"crc\" DevicePath \"\"" Nov 21 12:10:45 crc kubenswrapper[4972]: I1121 12:10:45.638448 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65fdh" event={"ID":"917736f3-b7e7-42a2-bce2-aad0c4c9e53e","Type":"ContainerDied","Data":"7edb82dab71a878d8a18d04c810015ea0a23328fe60590f8eaf36dcf14c65776"} Nov 21 12:10:45 crc kubenswrapper[4972]: I1121 12:10:45.638496 4972 scope.go:117] "RemoveContainer" containerID="327d0104af211bcb70bf87cb7446ae6c8332607841a5d2d627b4e7e9d23d6b76" Nov 21 12:10:45 crc kubenswrapper[4972]: I1121 12:10:45.638615 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-65fdh" Nov 21 12:10:45 crc kubenswrapper[4972]: I1121 12:10:45.685971 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-65fdh"] Nov 21 12:10:45 crc kubenswrapper[4972]: I1121 12:10:45.695071 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-65fdh"] Nov 21 12:10:45 crc kubenswrapper[4972]: I1121 12:10:45.703555 4972 scope.go:117] "RemoveContainer" containerID="a2408655482bf74b4744c4f64f43ef858eac26a44b7aa77b1e1afe62b6b6262a" Nov 21 12:10:45 crc kubenswrapper[4972]: I1121 12:10:45.780340 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="917736f3-b7e7-42a2-bce2-aad0c4c9e53e" path="/var/lib/kubelet/pods/917736f3-b7e7-42a2-bce2-aad0c4c9e53e/volumes" Nov 21 12:10:46 crc kubenswrapper[4972]: I1121 12:10:46.073943 4972 scope.go:117] "RemoveContainer" containerID="c14b6133d92eb84d8d2ba4e9f7ed91a17e6bb093cd4d0686ce3262a7cf930062" Nov 21 12:10:48 crc kubenswrapper[4972]: I1121 12:10:48.869707 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-whq4p" Nov 21 12:10:48 crc kubenswrapper[4972]: I1121 12:10:48.926556 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-whq4p" Nov 21 12:10:49 crc kubenswrapper[4972]: I1121 12:10:49.110585 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-whq4p"] Nov 21 12:10:50 crc kubenswrapper[4972]: I1121 12:10:50.689198 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-whq4p" podUID="483879f2-127d-436d-9af5-ca65f233a2fc" containerName="registry-server" containerID="cri-o://c2be66fb99cf1bd6c44a6a7f7c45fd8777b26ccabab34a19d9e01023e69390ca" gracePeriod=2 Nov 21 12:10:51 crc kubenswrapper[4972]: I1121 12:10:51.699669 4972 generic.go:334] "Generic (PLEG): container finished" 
podID="483879f2-127d-436d-9af5-ca65f233a2fc" containerID="c2be66fb99cf1bd6c44a6a7f7c45fd8777b26ccabab34a19d9e01023e69390ca" exitCode=0 Nov 21 12:10:51 crc kubenswrapper[4972]: I1121 12:10:51.700009 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whq4p" event={"ID":"483879f2-127d-436d-9af5-ca65f233a2fc","Type":"ContainerDied","Data":"c2be66fb99cf1bd6c44a6a7f7c45fd8777b26ccabab34a19d9e01023e69390ca"} Nov 21 12:10:51 crc kubenswrapper[4972]: I1121 12:10:51.947856 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-whq4p" Nov 21 12:10:52 crc kubenswrapper[4972]: I1121 12:10:52.054742 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483879f2-127d-436d-9af5-ca65f233a2fc-utilities\") pod \"483879f2-127d-436d-9af5-ca65f233a2fc\" (UID: \"483879f2-127d-436d-9af5-ca65f233a2fc\") " Nov 21 12:10:52 crc kubenswrapper[4972]: I1121 12:10:52.055588 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbd8p\" (UniqueName: \"kubernetes.io/projected/483879f2-127d-436d-9af5-ca65f233a2fc-kube-api-access-xbd8p\") pod \"483879f2-127d-436d-9af5-ca65f233a2fc\" (UID: \"483879f2-127d-436d-9af5-ca65f233a2fc\") " Nov 21 12:10:52 crc kubenswrapper[4972]: I1121 12:10:52.055944 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483879f2-127d-436d-9af5-ca65f233a2fc-catalog-content\") pod \"483879f2-127d-436d-9af5-ca65f233a2fc\" (UID: \"483879f2-127d-436d-9af5-ca65f233a2fc\") " Nov 21 12:10:52 crc kubenswrapper[4972]: I1121 12:10:52.057267 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/483879f2-127d-436d-9af5-ca65f233a2fc-utilities" (OuterVolumeSpecName: "utilities") pod "483879f2-127d-436d-9af5-ca65f233a2fc" (UID: "483879f2-127d-436d-9af5-ca65f233a2fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:10:52 crc kubenswrapper[4972]: I1121 12:10:52.069266 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/483879f2-127d-436d-9af5-ca65f233a2fc-kube-api-access-xbd8p" (OuterVolumeSpecName: "kube-api-access-xbd8p") pod "483879f2-127d-436d-9af5-ca65f233a2fc" (UID: "483879f2-127d-436d-9af5-ca65f233a2fc"). InnerVolumeSpecName "kube-api-access-xbd8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:10:52 crc kubenswrapper[4972]: I1121 12:10:52.159027 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbd8p\" (UniqueName: \"kubernetes.io/projected/483879f2-127d-436d-9af5-ca65f233a2fc-kube-api-access-xbd8p\") on node \"crc\" DevicePath \"\"" Nov 21 12:10:52 crc kubenswrapper[4972]: I1121 12:10:52.159082 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483879f2-127d-436d-9af5-ca65f233a2fc-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 12:10:52 crc kubenswrapper[4972]: I1121 12:10:52.170973 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/483879f2-127d-436d-9af5-ca65f233a2fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "483879f2-127d-436d-9af5-ca65f233a2fc" (UID: "483879f2-127d-436d-9af5-ca65f233a2fc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:10:52 crc kubenswrapper[4972]: I1121 12:10:52.261489 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483879f2-127d-436d-9af5-ca65f233a2fc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 12:10:52 crc kubenswrapper[4972]: I1121 12:10:52.711064 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whq4p" event={"ID":"483879f2-127d-436d-9af5-ca65f233a2fc","Type":"ContainerDied","Data":"869c2fc191af9cac23716e2fe45947c983f76c6bee0bed789720ebc854017e66"} Nov 21 12:10:52 crc kubenswrapper[4972]: I1121 12:10:52.711116 4972 scope.go:117] "RemoveContainer" containerID="c2be66fb99cf1bd6c44a6a7f7c45fd8777b26ccabab34a19d9e01023e69390ca" Nov 21 12:10:52 crc kubenswrapper[4972]: I1121 12:10:52.711166 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-whq4p" Nov 21 12:10:52 crc kubenswrapper[4972]: I1121 12:10:52.745065 4972 scope.go:117] "RemoveContainer" containerID="a4a8d0ba9f2c671f6896af3f0c2f1dc1e802aefa867725309cfbfc1a13096ab1" Nov 21 12:10:52 crc kubenswrapper[4972]: I1121 12:10:52.750918 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-whq4p"] Nov 21 12:10:52 crc kubenswrapper[4972]: I1121 12:10:52.759730 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-whq4p"] Nov 21 12:10:52 crc kubenswrapper[4972]: I1121 12:10:52.783607 4972 scope.go:117] "RemoveContainer" containerID="dd0b40925696984c1b8f86e1386e915f7c51455006fe3b4f73aa69679d9e72f3" Nov 21 12:10:53 crc kubenswrapper[4972]: I1121 12:10:53.770020 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="483879f2-127d-436d-9af5-ca65f233a2fc" path="/var/lib/kubelet/pods/483879f2-127d-436d-9af5-ca65f233a2fc/volumes" Nov 21 12:10:55 crc kubenswrapper[4972]: I1121 12:10:55.585784 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9nw9t"] Nov 21 12:10:55 crc kubenswrapper[4972]: E1121 12:10:55.586698 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="917736f3-b7e7-42a2-bce2-aad0c4c9e53e" containerName="extract-content" Nov 21 12:10:55 crc kubenswrapper[4972]: I1121 12:10:55.586729 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="917736f3-b7e7-42a2-bce2-aad0c4c9e53e" containerName="extract-content" Nov 21 12:10:55 crc kubenswrapper[4972]: E1121 12:10:55.586768 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="917736f3-b7e7-42a2-bce2-aad0c4c9e53e" containerName="registry-server" Nov 21 12:10:55 crc kubenswrapper[4972]: I1121 12:10:55.586781 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="917736f3-b7e7-42a2-bce2-aad0c4c9e53e" containerName="registry-server" Nov 21 12:10:55 crc kubenswrapper[4972]: E1121 12:10:55.586824 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483879f2-127d-436d-9af5-ca65f233a2fc" containerName="extract-content" Nov 21 12:10:55 crc kubenswrapper[4972]: I1121 12:10:55.586874 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="483879f2-127d-436d-9af5-ca65f233a2fc" containerName="extract-content" Nov 21 12:10:55 crc kubenswrapper[4972]: E1121 12:10:55.586944 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483879f2-127d-436d-9af5-ca65f233a2fc" containerName="extract-utilities" Nov 21 12:10:55 crc 
kubenswrapper[4972]: I1121 12:10:55.586959 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="483879f2-127d-436d-9af5-ca65f233a2fc" containerName="extract-utilities" Nov 21 12:10:55 crc kubenswrapper[4972]: E1121 12:10:55.586979 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="917736f3-b7e7-42a2-bce2-aad0c4c9e53e" containerName="extract-utilities" Nov 21 12:10:55 crc kubenswrapper[4972]: I1121 12:10:55.586994 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="917736f3-b7e7-42a2-bce2-aad0c4c9e53e" containerName="extract-utilities" Nov 21 12:10:55 crc kubenswrapper[4972]: E1121 12:10:55.587012 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483879f2-127d-436d-9af5-ca65f233a2fc" containerName="registry-server" Nov 21 12:10:55 crc kubenswrapper[4972]: I1121 12:10:55.587025 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="483879f2-127d-436d-9af5-ca65f233a2fc" containerName="registry-server" Nov 21 12:10:55 crc kubenswrapper[4972]: I1121 12:10:55.587425 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="483879f2-127d-436d-9af5-ca65f233a2fc" containerName="registry-server" Nov 21 12:10:55 crc kubenswrapper[4972]: I1121 12:10:55.587476 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="917736f3-b7e7-42a2-bce2-aad0c4c9e53e" containerName="registry-server" Nov 21 12:10:55 crc kubenswrapper[4972]: I1121 12:10:55.590557 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9nw9t" Nov 21 12:10:55 crc kubenswrapper[4972]: I1121 12:10:55.605573 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9nw9t"] Nov 21 12:10:55 crc kubenswrapper[4972]: I1121 12:10:55.738050 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f469c17c-4aa5-4a0e-826b-a033906969cb-catalog-content\") pod \"redhat-marketplace-9nw9t\" (UID: \"f469c17c-4aa5-4a0e-826b-a033906969cb\") " pod="openshift-marketplace/redhat-marketplace-9nw9t" Nov 21 12:10:55 crc kubenswrapper[4972]: I1121 12:10:55.738193 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f469c17c-4aa5-4a0e-826b-a033906969cb-utilities\") pod \"redhat-marketplace-9nw9t\" (UID: \"f469c17c-4aa5-4a0e-826b-a033906969cb\") " pod="openshift-marketplace/redhat-marketplace-9nw9t" Nov 21 12:10:55 crc kubenswrapper[4972]: I1121 12:10:55.738216 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbpf2\" (UniqueName: \"kubernetes.io/projected/f469c17c-4aa5-4a0e-826b-a033906969cb-kube-api-access-tbpf2\") pod \"redhat-marketplace-9nw9t\" (UID: \"f469c17c-4aa5-4a0e-826b-a033906969cb\") " pod="openshift-marketplace/redhat-marketplace-9nw9t" Nov 21 12:10:55 crc kubenswrapper[4972]: I1121 12:10:55.839851 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f469c17c-4aa5-4a0e-826b-a033906969cb-utilities\") pod \"redhat-marketplace-9nw9t\" (UID: \"f469c17c-4aa5-4a0e-826b-a033906969cb\") " pod="openshift-marketplace/redhat-marketplace-9nw9t" Nov 21 12:10:55 crc kubenswrapper[4972]: I1121 12:10:55.839894 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbpf2\" (UniqueName: 
\"kubernetes.io/projected/f469c17c-4aa5-4a0e-826b-a033906969cb-kube-api-access-tbpf2\") pod \"redhat-marketplace-9nw9t\" (UID: \"f469c17c-4aa5-4a0e-826b-a033906969cb\") " pod="openshift-marketplace/redhat-marketplace-9nw9t" Nov 21 12:10:55 crc kubenswrapper[4972]: I1121 12:10:55.840000 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f469c17c-4aa5-4a0e-826b-a033906969cb-catalog-content\") pod \"redhat-marketplace-9nw9t\" (UID: \"f469c17c-4aa5-4a0e-826b-a033906969cb\") " pod="openshift-marketplace/redhat-marketplace-9nw9t" Nov 21 12:10:55 crc kubenswrapper[4972]: I1121 12:10:55.840321 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f469c17c-4aa5-4a0e-826b-a033906969cb-utilities\") pod \"redhat-marketplace-9nw9t\" (UID: \"f469c17c-4aa5-4a0e-826b-a033906969cb\") " pod="openshift-marketplace/redhat-marketplace-9nw9t" Nov 21 12:10:55 crc kubenswrapper[4972]: I1121 12:10:55.842295 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f469c17c-4aa5-4a0e-826b-a033906969cb-catalog-content\") pod \"redhat-marketplace-9nw9t\" (UID: \"f469c17c-4aa5-4a0e-826b-a033906969cb\") " pod="openshift-marketplace/redhat-marketplace-9nw9t" Nov 21 12:10:55 crc kubenswrapper[4972]: I1121 12:10:55.889894 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbpf2\" (UniqueName: \"kubernetes.io/projected/f469c17c-4aa5-4a0e-826b-a033906969cb-kube-api-access-tbpf2\") pod \"redhat-marketplace-9nw9t\" (UID: \"f469c17c-4aa5-4a0e-826b-a033906969cb\") " pod="openshift-marketplace/redhat-marketplace-9nw9t" Nov 21 12:10:55 crc kubenswrapper[4972]: I1121 12:10:55.934451 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9nw9t" Nov 21 12:10:56 crc kubenswrapper[4972]: I1121 12:10:56.179114 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 12:10:56 crc kubenswrapper[4972]: I1121 12:10:56.179486 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 12:10:56 crc kubenswrapper[4972]: I1121 12:10:56.179537 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 12:10:56 crc kubenswrapper[4972]: I1121 12:10:56.180441 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ff1fe0f86f159c13965155761446e07b49337452a7fff682ad45f4aeec89e81"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 12:10:56 crc kubenswrapper[4972]: I1121 12:10:56.180504 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://3ff1fe0f86f159c13965155761446e07b49337452a7fff682ad45f4aeec89e81" gracePeriod=600 Nov 21 12:10:56 crc kubenswrapper[4972]: I1121 12:10:56.445998 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9nw9t"] Nov 21 12:10:56 crc kubenswrapper[4972]: W1121 12:10:56.458686 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf469c17c_4aa5_4a0e_826b_a033906969cb.slice/crio-4207fc5fb7d4ddf5f93c3c3c932c3fae375b0c2d2d0dccec46418b5a1735840b WatchSource:0}: Error finding container 4207fc5fb7d4ddf5f93c3c3c932c3fae375b0c2d2d0dccec46418b5a1735840b: Status 404 returned error can't find the container with id 4207fc5fb7d4ddf5f93c3c3c932c3fae375b0c2d2d0dccec46418b5a1735840b Nov 21 12:10:56 crc kubenswrapper[4972]: I1121 12:10:56.763751 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="3ff1fe0f86f159c13965155761446e07b49337452a7fff682ad45f4aeec89e81" exitCode=0 Nov 21 12:10:56 crc kubenswrapper[4972]: I1121 12:10:56.763878 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"3ff1fe0f86f159c13965155761446e07b49337452a7fff682ad45f4aeec89e81"} Nov 21 12:10:56 crc kubenswrapper[4972]: I1121 12:10:56.764241 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66"} Nov 21 12:10:56 crc kubenswrapper[4972]: I1121 
12:10:56.764270 4972 scope.go:117] "RemoveContainer" containerID="e4b6e589a270def4d29172b699d86f89034a6d3d42f1cb39421c0298e97c802b" Nov 21 12:10:56 crc kubenswrapper[4972]: I1121 12:10:56.768261 4972 generic.go:334] "Generic (PLEG): container finished" podID="f469c17c-4aa5-4a0e-826b-a033906969cb" containerID="fc946f4c005543398f4a8cfb5e91f20dd823844fe35f005c0f1944e595f571c2" exitCode=0 Nov 21 12:10:56 crc kubenswrapper[4972]: I1121 12:10:56.768302 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9nw9t" event={"ID":"f469c17c-4aa5-4a0e-826b-a033906969cb","Type":"ContainerDied","Data":"fc946f4c005543398f4a8cfb5e91f20dd823844fe35f005c0f1944e595f571c2"} Nov 21 12:10:56 crc kubenswrapper[4972]: I1121 12:10:56.768327 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9nw9t" event={"ID":"f469c17c-4aa5-4a0e-826b-a033906969cb","Type":"ContainerStarted","Data":"4207fc5fb7d4ddf5f93c3c3c932c3fae375b0c2d2d0dccec46418b5a1735840b"} Nov 21 12:10:58 crc kubenswrapper[4972]: I1121 12:10:58.793040 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9nw9t" event={"ID":"f469c17c-4aa5-4a0e-826b-a033906969cb","Type":"ContainerStarted","Data":"63a4f14dd9d37bbb3ac90656d88393243b87d5b2e62b171ad3a6d4e5edba2346"} Nov 21 12:11:00 crc kubenswrapper[4972]: I1121 12:11:00.829848 4972 generic.go:334] "Generic (PLEG): container finished" podID="f469c17c-4aa5-4a0e-826b-a033906969cb" containerID="63a4f14dd9d37bbb3ac90656d88393243b87d5b2e62b171ad3a6d4e5edba2346" exitCode=0 Nov 21 12:11:00 crc kubenswrapper[4972]: I1121 12:11:00.829931 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9nw9t" event={"ID":"f469c17c-4aa5-4a0e-826b-a033906969cb","Type":"ContainerDied","Data":"63a4f14dd9d37bbb3ac90656d88393243b87d5b2e62b171ad3a6d4e5edba2346"} Nov 21 12:11:04 crc kubenswrapper[4972]: I1121 12:11:04.871429 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9nw9t" event={"ID":"f469c17c-4aa5-4a0e-826b-a033906969cb","Type":"ContainerStarted","Data":"38c7f7cbae9a468f9de0e836cfec8d467dec81198810a682bfc8a2735e8fe050"} Nov 21 12:11:04 crc kubenswrapper[4972]: I1121 12:11:04.894300 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9nw9t" podStartSLOduration=3.565937475 podStartE2EDuration="9.894276325s" podCreationTimestamp="2025-11-21 12:10:55 +0000 UTC" firstStartedPulling="2025-11-21 12:10:56.769547156 +0000 UTC m=+9001.878689654" lastFinishedPulling="2025-11-21 12:11:03.097886006 +0000 UTC m=+9008.207028504" observedRunningTime="2025-11-21 12:11:04.887943939 +0000 UTC m=+9009.997086457" watchObservedRunningTime="2025-11-21 12:11:04.894276325 +0000 UTC m=+9010.003418823" Nov 21 12:11:05 crc kubenswrapper[4972]: I1121 12:11:05.936141 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9nw9t" Nov 21 12:11:05 crc kubenswrapper[4972]: I1121 12:11:05.936537 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9nw9t" Nov 21 12:11:06 crc kubenswrapper[4972]: I1121 12:11:06.003862 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9nw9t" Nov 21 12:11:15 crc kubenswrapper[4972]: I1121 12:11:15.991522 4972 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9nw9t" Nov 21 12:11:16 crc kubenswrapper[4972]: I1121 12:11:16.047613 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9nw9t"] Nov 21 12:11:16 crc kubenswrapper[4972]: I1121 12:11:16.996666 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9nw9t" podUID="f469c17c-4aa5-4a0e-826b-a033906969cb" containerName="registry-server" containerID="cri-o://38c7f7cbae9a468f9de0e836cfec8d467dec81198810a682bfc8a2735e8fe050" gracePeriod=2 Nov 21 12:11:18 crc kubenswrapper[4972]: I1121 12:11:18.007725 4972 generic.go:334] "Generic (PLEG): container finished" podID="f469c17c-4aa5-4a0e-826b-a033906969cb" containerID="38c7f7cbae9a468f9de0e836cfec8d467dec81198810a682bfc8a2735e8fe050" exitCode=0 Nov 21 12:11:18 crc kubenswrapper[4972]: I1121 12:11:18.007812 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9nw9t" event={"ID":"f469c17c-4aa5-4a0e-826b-a033906969cb","Type":"ContainerDied","Data":"38c7f7cbae9a468f9de0e836cfec8d467dec81198810a682bfc8a2735e8fe050"} Nov 21 12:11:18 crc kubenswrapper[4972]: I1121 12:11:18.008380 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9nw9t" event={"ID":"f469c17c-4aa5-4a0e-826b-a033906969cb","Type":"ContainerDied","Data":"4207fc5fb7d4ddf5f93c3c3c932c3fae375b0c2d2d0dccec46418b5a1735840b"} Nov 21 12:11:18 crc kubenswrapper[4972]: I1121 12:11:18.008399 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4207fc5fb7d4ddf5f93c3c3c932c3fae375b0c2d2d0dccec46418b5a1735840b" Nov 21 12:11:18 crc kubenswrapper[4972]: I1121 12:11:18.035961 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9nw9t" Nov 21 12:11:18 crc kubenswrapper[4972]: I1121 12:11:18.186008 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f469c17c-4aa5-4a0e-826b-a033906969cb-utilities\") pod \"f469c17c-4aa5-4a0e-826b-a033906969cb\" (UID: \"f469c17c-4aa5-4a0e-826b-a033906969cb\") " Nov 21 12:11:18 crc kubenswrapper[4972]: I1121 12:11:18.186165 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f469c17c-4aa5-4a0e-826b-a033906969cb-catalog-content\") pod \"f469c17c-4aa5-4a0e-826b-a033906969cb\" (UID: \"f469c17c-4aa5-4a0e-826b-a033906969cb\") " Nov 21 12:11:18 crc kubenswrapper[4972]: I1121 12:11:18.186254 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbpf2\" (UniqueName: \"kubernetes.io/projected/f469c17c-4aa5-4a0e-826b-a033906969cb-kube-api-access-tbpf2\") pod \"f469c17c-4aa5-4a0e-826b-a033906969cb\" (UID: \"f469c17c-4aa5-4a0e-826b-a033906969cb\") " Nov 21 12:11:18 crc kubenswrapper[4972]: I1121 12:11:18.187155 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f469c17c-4aa5-4a0e-826b-a033906969cb-utilities" (OuterVolumeSpecName: "utilities") pod "f469c17c-4aa5-4a0e-826b-a033906969cb" (UID: "f469c17c-4aa5-4a0e-826b-a033906969cb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:11:18 crc kubenswrapper[4972]: I1121 12:11:18.205766 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f469c17c-4aa5-4a0e-826b-a033906969cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f469c17c-4aa5-4a0e-826b-a033906969cb" (UID: "f469c17c-4aa5-4a0e-826b-a033906969cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:11:18 crc kubenswrapper[4972]: I1121 12:11:18.289230 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f469c17c-4aa5-4a0e-826b-a033906969cb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 12:11:18 crc kubenswrapper[4972]: I1121 12:11:18.289336 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f469c17c-4aa5-4a0e-826b-a033906969cb-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 12:11:18 crc kubenswrapper[4972]: I1121 12:11:18.745637 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f469c17c-4aa5-4a0e-826b-a033906969cb-kube-api-access-tbpf2" (OuterVolumeSpecName: "kube-api-access-tbpf2") pod "f469c17c-4aa5-4a0e-826b-a033906969cb" (UID: "f469c17c-4aa5-4a0e-826b-a033906969cb"). InnerVolumeSpecName "kube-api-access-tbpf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:11:18 crc kubenswrapper[4972]: I1121 12:11:18.800903 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbpf2\" (UniqueName: \"kubernetes.io/projected/f469c17c-4aa5-4a0e-826b-a033906969cb-kube-api-access-tbpf2\") on node \"crc\" DevicePath \"\"" Nov 21 12:11:19 crc kubenswrapper[4972]: I1121 12:11:19.020343 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9nw9t" Nov 21 12:11:19 crc kubenswrapper[4972]: I1121 12:11:19.064749 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9nw9t"] Nov 21 12:11:19 crc kubenswrapper[4972]: I1121 12:11:19.073461 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9nw9t"] Nov 21 12:11:19 crc kubenswrapper[4972]: I1121 12:11:19.774774 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f469c17c-4aa5-4a0e-826b-a033906969cb" path="/var/lib/kubelet/pods/f469c17c-4aa5-4a0e-826b-a033906969cb/volumes" Nov 21 12:12:56 crc kubenswrapper[4972]: I1121 12:12:56.179064 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 12:12:56 crc kubenswrapper[4972]: I1121 12:12:56.179662 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 12:13:26 crc kubenswrapper[4972]: I1121 12:13:26.179075 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 12:13:26 crc kubenswrapper[4972]: I1121 12:13:26.179743 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 12:13:49 crc kubenswrapper[4972]: I1121 12:13:49.602391 4972 generic.go:334] "Generic (PLEG): container finished" podID="99666862-2043-4347-b3a1-7b16e424137e" containerID="cbca61f92c9ff8194d9c04b038cb248aae5c01476cee0f2c07ed55b63b602293" exitCode=0 Nov 21 12:13:49 crc kubenswrapper[4972]: I1121 12:13:49.602743 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" event={"ID":"99666862-2043-4347-b3a1-7b16e424137e","Type":"ContainerDied","Data":"cbca61f92c9ff8194d9c04b038cb248aae5c01476cee0f2c07ed55b63b602293"} Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.153100 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.300268 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drf4v\" (UniqueName: \"kubernetes.io/projected/99666862-2043-4347-b3a1-7b16e424137e-kube-api-access-drf4v\") pod \"99666862-2043-4347-b3a1-7b16e424137e\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.300390 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-inventory\") pod \"99666862-2043-4347-b3a1-7b16e424137e\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.300429 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-neutron-dhcp-combined-ca-bundle\") pod \"99666862-2043-4347-b3a1-7b16e424137e\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.300542 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-ssh-key\") pod \"99666862-2043-4347-b3a1-7b16e424137e\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.300591 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-neutron-dhcp-agent-neutron-config-0\") pod \"99666862-2043-4347-b3a1-7b16e424137e\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.300693 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-ceph\") pod \"99666862-2043-4347-b3a1-7b16e424137e\" (UID: \"99666862-2043-4347-b3a1-7b16e424137e\") " Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.306357 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-ceph" (OuterVolumeSpecName: "ceph") pod "99666862-2043-4347-b3a1-7b16e424137e" (UID: "99666862-2043-4347-b3a1-7b16e424137e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.315942 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "99666862-2043-4347-b3a1-7b16e424137e" (UID: "99666862-2043-4347-b3a1-7b16e424137e"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.316548 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99666862-2043-4347-b3a1-7b16e424137e-kube-api-access-drf4v" (OuterVolumeSpecName: "kube-api-access-drf4v") pod "99666862-2043-4347-b3a1-7b16e424137e" (UID: "99666862-2043-4347-b3a1-7b16e424137e"). InnerVolumeSpecName "kube-api-access-drf4v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.333904 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "99666862-2043-4347-b3a1-7b16e424137e" (UID: "99666862-2043-4347-b3a1-7b16e424137e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.335185 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "99666862-2043-4347-b3a1-7b16e424137e" (UID: "99666862-2043-4347-b3a1-7b16e424137e"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.335317 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-inventory" (OuterVolumeSpecName: "inventory") pod "99666862-2043-4347-b3a1-7b16e424137e" (UID: "99666862-2043-4347-b3a1-7b16e424137e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.404470 4972 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.404666 4972 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.404687 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.404701 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drf4v\" (UniqueName: \"kubernetes.io/projected/99666862-2043-4347-b3a1-7b16e424137e-kube-api-access-drf4v\") on node \"crc\" DevicePath \"\"" Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.404714 4972 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.404725 4972 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99666862-2043-4347-b3a1-7b16e424137e-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.636131 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" event={"ID":"99666862-2043-4347-b3a1-7b16e424137e","Type":"ContainerDied","Data":"bae66049e312da5d29b2613eba480876eca55b8fa0d834cd88a0069b70de8fdd"} Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.636413 4972 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="bae66049e312da5d29b2613eba480876eca55b8fa0d834cd88a0069b70de8fdd" Nov 21 12:13:51 crc kubenswrapper[4972]: I1121 12:13:51.636492 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-b4mxp" Nov 21 12:13:55 crc kubenswrapper[4972]: I1121 12:13:55.335173 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4wl7v"] Nov 21 12:13:55 crc kubenswrapper[4972]: E1121 12:13:55.336535 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f469c17c-4aa5-4a0e-826b-a033906969cb" containerName="extract-utilities" Nov 21 12:13:55 crc kubenswrapper[4972]: I1121 12:13:55.336559 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f469c17c-4aa5-4a0e-826b-a033906969cb" containerName="extract-utilities" Nov 21 12:13:55 crc kubenswrapper[4972]: E1121 12:13:55.336615 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f469c17c-4aa5-4a0e-826b-a033906969cb" containerName="extract-content" Nov 21 12:13:55 crc kubenswrapper[4972]: I1121 12:13:55.336627 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f469c17c-4aa5-4a0e-826b-a033906969cb" containerName="extract-content" Nov 21 12:13:55 crc kubenswrapper[4972]: E1121 12:13:55.336662 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99666862-2043-4347-b3a1-7b16e424137e" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 21 12:13:55 crc kubenswrapper[4972]: I1121 12:13:55.336671 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="99666862-2043-4347-b3a1-7b16e424137e" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 21 12:13:55 crc kubenswrapper[4972]: E1121 12:13:55.336690 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f469c17c-4aa5-4a0e-826b-a033906969cb" containerName="registry-server" Nov 21 12:13:55 crc kubenswrapper[4972]: I1121 12:13:55.336698 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f469c17c-4aa5-4a0e-826b-a033906969cb" containerName="registry-server" Nov 21 12:13:55 crc kubenswrapper[4972]: I1121 12:13:55.336990 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="99666862-2043-4347-b3a1-7b16e424137e" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 21 12:13:55 crc kubenswrapper[4972]: I1121 12:13:55.337018 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f469c17c-4aa5-4a0e-826b-a033906969cb" containerName="registry-server" Nov 21 12:13:55 crc kubenswrapper[4972]: I1121 12:13:55.339231 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4wl7v" Nov 21 12:13:55 crc kubenswrapper[4972]: I1121 12:13:55.355766 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4wl7v"] Nov 21 12:13:55 crc kubenswrapper[4972]: I1121 12:13:55.384112 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc05f73-4c78-4650-8d71-9e49c2a7b624-catalog-content\") pod \"certified-operators-4wl7v\" (UID: \"afc05f73-4c78-4650-8d71-9e49c2a7b624\") " pod="openshift-marketplace/certified-operators-4wl7v" Nov 21 12:13:55 crc kubenswrapper[4972]: I1121 12:13:55.384204 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc05f73-4c78-4650-8d71-9e49c2a7b624-utilities\") pod \"certified-operators-4wl7v\" (UID: \"afc05f73-4c78-4650-8d71-9e49c2a7b624\") " pod="openshift-marketplace/certified-operators-4wl7v" Nov 21 12:13:55 crc kubenswrapper[4972]: I1121 12:13:55.384287 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r89k4\" (UniqueName: \"kubernetes.io/projected/afc05f73-4c78-4650-8d71-9e49c2a7b624-kube-api-access-r89k4\") pod \"certified-operators-4wl7v\" (UID: \"afc05f73-4c78-4650-8d71-9e49c2a7b624\") " pod="openshift-marketplace/certified-operators-4wl7v" Nov 21 12:13:55 crc kubenswrapper[4972]: I1121 12:13:55.486686 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc05f73-4c78-4650-8d71-9e49c2a7b624-catalog-content\") pod \"certified-operators-4wl7v\" (UID: \"afc05f73-4c78-4650-8d71-9e49c2a7b624\") " pod="openshift-marketplace/certified-operators-4wl7v" Nov 21 12:13:55 crc kubenswrapper[4972]: I1121 12:13:55.486820 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc05f73-4c78-4650-8d71-9e49c2a7b624-utilities\") pod \"certified-operators-4wl7v\" (UID: \"afc05f73-4c78-4650-8d71-9e49c2a7b624\") " pod="openshift-marketplace/certified-operators-4wl7v" Nov 21 12:13:55 crc kubenswrapper[4972]: I1121 12:13:55.487189 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc05f73-4c78-4650-8d71-9e49c2a7b624-catalog-content\") pod \"certified-operators-4wl7v\" (UID: \"afc05f73-4c78-4650-8d71-9e49c2a7b624\") " pod="openshift-marketplace/certified-operators-4wl7v" Nov 21 12:13:55 crc kubenswrapper[4972]: I1121 12:13:55.487298 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc05f73-4c78-4650-8d71-9e49c2a7b624-utilities\") pod \"certified-operators-4wl7v\" (UID: \"afc05f73-4c78-4650-8d71-9e49c2a7b624\") " pod="openshift-marketplace/certified-operators-4wl7v" Nov 21 12:13:55 crc kubenswrapper[4972]: I1121 12:13:55.487471 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r89k4\" (UniqueName: \"kubernetes.io/projected/afc05f73-4c78-4650-8d71-9e49c2a7b624-kube-api-access-r89k4\") pod \"certified-operators-4wl7v\" (UID: \"afc05f73-4c78-4650-8d71-9e49c2a7b624\") " pod="openshift-marketplace/certified-operators-4wl7v" Nov 21 12:13:55 crc kubenswrapper[4972]: I1121 12:13:55.508120 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-r89k4\" (UniqueName: \"kubernetes.io/projected/afc05f73-4c78-4650-8d71-9e49c2a7b624-kube-api-access-r89k4\") pod \"certified-operators-4wl7v\" (UID: \"afc05f73-4c78-4650-8d71-9e49c2a7b624\") " pod="openshift-marketplace/certified-operators-4wl7v" Nov 21 12:13:55 crc kubenswrapper[4972]: I1121 12:13:55.660586 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wl7v" Nov 21 12:13:56 crc kubenswrapper[4972]: I1121 12:13:56.179632 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 12:13:56 crc kubenswrapper[4972]: I1121 12:13:56.180353 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 12:13:56 crc kubenswrapper[4972]: I1121 12:13:56.180448 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 12:13:56 crc kubenswrapper[4972]: I1121 12:13:56.181524 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 12:13:56 crc kubenswrapper[4972]: I1121 12:13:56.181585 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" gracePeriod=600 Nov 21 12:13:56 crc kubenswrapper[4972]: I1121 12:13:56.220685 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4wl7v"] Nov 21 12:13:56 crc kubenswrapper[4972]: E1121 12:13:56.304685 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:13:56 crc kubenswrapper[4972]: I1121 12:13:56.685887 4972 generic.go:334] "Generic (PLEG): container finished" podID="afc05f73-4c78-4650-8d71-9e49c2a7b624" containerID="f8f1a9a0683c9e9f151446d99d325e7570c9ba457d966936fbca68e7c8bc29ff" exitCode=0 Nov 21 12:13:56 crc kubenswrapper[4972]: I1121 12:13:56.685957 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wl7v" event={"ID":"afc05f73-4c78-4650-8d71-9e49c2a7b624","Type":"ContainerDied","Data":"f8f1a9a0683c9e9f151446d99d325e7570c9ba457d966936fbca68e7c8bc29ff"} Nov 21 12:13:56 crc 
kubenswrapper[4972]: I1121 12:13:56.686296 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wl7v" event={"ID":"afc05f73-4c78-4650-8d71-9e49c2a7b624","Type":"ContainerStarted","Data":"8c52915af3212cd42350802ca43fce1b0844cd7b02de356c5627d7bd2e638421"} Nov 21 12:13:56 crc kubenswrapper[4972]: I1121 12:13:56.689445 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 12:13:56 crc kubenswrapper[4972]: I1121 12:13:56.690800 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" exitCode=0 Nov 21 12:13:56 crc kubenswrapper[4972]: I1121 12:13:56.690852 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66"} Nov 21 12:13:56 crc kubenswrapper[4972]: I1121 12:13:56.690892 4972 scope.go:117] "RemoveContainer" containerID="3ff1fe0f86f159c13965155761446e07b49337452a7fff682ad45f4aeec89e81" Nov 21 12:13:56 crc kubenswrapper[4972]: I1121 12:13:56.691686 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:13:56 crc kubenswrapper[4972]: E1121 12:13:56.692028 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:13:58 crc kubenswrapper[4972]: I1121 12:13:58.728460 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wl7v" event={"ID":"afc05f73-4c78-4650-8d71-9e49c2a7b624","Type":"ContainerStarted","Data":"06980068e08e979ef05eab1e8b47ce6bf011f42983a888acc6cacc82f2c35ad5"} Nov 21 12:14:00 crc kubenswrapper[4972]: I1121 12:14:00.750314 4972 generic.go:334] "Generic (PLEG): container finished" podID="afc05f73-4c78-4650-8d71-9e49c2a7b624" containerID="06980068e08e979ef05eab1e8b47ce6bf011f42983a888acc6cacc82f2c35ad5" exitCode=0 Nov 21 12:14:00 crc kubenswrapper[4972]: I1121 12:14:00.750389 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wl7v" event={"ID":"afc05f73-4c78-4650-8d71-9e49c2a7b624","Type":"ContainerDied","Data":"06980068e08e979ef05eab1e8b47ce6bf011f42983a888acc6cacc82f2c35ad5"} Nov 21 12:14:02 crc kubenswrapper[4972]: I1121 12:14:02.775749 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wl7v" event={"ID":"afc05f73-4c78-4650-8d71-9e49c2a7b624","Type":"ContainerStarted","Data":"e6b997ead5364f907f3712f1fb460193638cbc05b6e22c085256639277f0ee4b"} Nov 21 12:14:02 crc kubenswrapper[4972]: I1121 12:14:02.806547 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4wl7v" podStartSLOduration=2.240111693 podStartE2EDuration="7.806525665s" podCreationTimestamp="2025-11-21 12:13:55 +0000 UTC" firstStartedPulling="2025-11-21 12:13:56.689244604 +0000 UTC m=+9181.798387102" 
lastFinishedPulling="2025-11-21 12:14:02.255658576 +0000 UTC m=+9187.364801074" observedRunningTime="2025-11-21 12:14:02.796988364 +0000 UTC m=+9187.906130892" watchObservedRunningTime="2025-11-21 12:14:02.806525665 +0000 UTC m=+9187.915668163" Nov 21 12:14:05 crc kubenswrapper[4972]: I1121 12:14:05.661039 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4wl7v" Nov 21 12:14:05 crc kubenswrapper[4972]: I1121 12:14:05.661406 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4wl7v" Nov 21 12:14:06 crc kubenswrapper[4972]: I1121 12:14:06.189102 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4wl7v" Nov 21 12:14:08 crc kubenswrapper[4972]: I1121 12:14:08.060528 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 12:14:08 crc kubenswrapper[4972]: I1121 12:14:08.061350 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="9b0e5f3d-e84e-4866-81ca-119283c296d7" containerName="nova-cell0-conductor-conductor" containerID="cri-o://2bb09ea1fb99562ae2245d7d80a0a482ded0dc19db5a08e39c55e1f6bb81df59" gracePeriod=30 Nov 21 12:14:08 crc kubenswrapper[4972]: I1121 12:14:08.115916 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 12:14:08 crc kubenswrapper[4972]: I1121 12:14:08.116205 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="3da7f1a1-6ce5-468a-a84f-e12242d5539e" containerName="nova-cell1-conductor-conductor" containerID="cri-o://736a65abba960b9bad9182628928af172f1d216c482a0987666c682cd8a7bc1d" gracePeriod=30 Nov 21 12:14:08 crc kubenswrapper[4972]: E1121 12:14:08.665251 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2bb09ea1fb99562ae2245d7d80a0a482ded0dc19db5a08e39c55e1f6bb81df59" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 21 12:14:08 crc kubenswrapper[4972]: E1121 12:14:08.676369 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2bb09ea1fb99562ae2245d7d80a0a482ded0dc19db5a08e39c55e1f6bb81df59" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 21 12:14:08 crc kubenswrapper[4972]: E1121 12:14:08.693340 4972 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2bb09ea1fb99562ae2245d7d80a0a482ded0dc19db5a08e39c55e1f6bb81df59" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 21 12:14:08 crc kubenswrapper[4972]: E1121 12:14:08.693419 4972 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="9b0e5f3d-e84e-4866-81ca-119283c296d7" containerName="nova-cell0-conductor-conductor" Nov 21 12:14:08 crc kubenswrapper[4972]: I1121 12:14:08.761311 4972 scope.go:117] "RemoveContainer" 
containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:14:08 crc kubenswrapper[4972]: E1121 12:14:08.761750 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:14:09 crc kubenswrapper[4972]: I1121 12:14:09.284383 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 12:14:09 crc kubenswrapper[4972]: I1121 12:14:09.284909 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8319eeb4-9a52-4573-b264-22f703b195a8" containerName="nova-scheduler-scheduler" containerID="cri-o://f845e482af8c82e253ff34bcbcb151030a614818aabf6d1a4a10d471ceb3645f" gracePeriod=30 Nov 21 12:14:09 crc kubenswrapper[4972]: I1121 12:14:09.298972 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 12:14:09 crc kubenswrapper[4972]: I1121 12:14:09.299270 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="76a1d86d-887f-461d-9415-908540ed2f33" containerName="nova-api-log" containerID="cri-o://8fcaf6fdcf7a06e2eb824a2e922877222b49ba25c68fcf77c0bc6d1620d078f0" gracePeriod=30 Nov 21 12:14:09 crc kubenswrapper[4972]: I1121 12:14:09.299627 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="76a1d86d-887f-461d-9415-908540ed2f33" containerName="nova-api-api" containerID="cri-o://5026fd1e01357d7e018deab68e34ea1c38d3ba63c5c7241042861939f452af5e" gracePeriod=30 Nov 21 12:14:09 crc kubenswrapper[4972]: I1121 12:14:09.311234 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 12:14:09 crc kubenswrapper[4972]: I1121 12:14:09.311491 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8164b141-9e42-4a0c-b161-ec80323b043d" containerName="nova-metadata-log" containerID="cri-o://945bbb7cb188bde4d0c73027afd5b358190f93fc0121f0d6a899a8fa8f08064a" gracePeriod=30 Nov 21 12:14:09 crc kubenswrapper[4972]: I1121 12:14:09.311644 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8164b141-9e42-4a0c-b161-ec80323b043d" containerName="nova-metadata-metadata" containerID="cri-o://aeb921465259a6901b1e4e8704fd6d97e5ba8bae2eaf7347cdd19708e3e7fa59" gracePeriod=30 Nov 21 12:14:09 crc kubenswrapper[4972]: I1121 12:14:09.847090 4972 generic.go:334] "Generic (PLEG): container finished" podID="3da7f1a1-6ce5-468a-a84f-e12242d5539e" containerID="736a65abba960b9bad9182628928af172f1d216c482a0987666c682cd8a7bc1d" exitCode=0 Nov 21 12:14:09 crc kubenswrapper[4972]: I1121 12:14:09.847166 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3da7f1a1-6ce5-468a-a84f-e12242d5539e","Type":"ContainerDied","Data":"736a65abba960b9bad9182628928af172f1d216c482a0987666c682cd8a7bc1d"} Nov 21 12:14:09 crc kubenswrapper[4972]: I1121 12:14:09.849776 4972 generic.go:334] "Generic (PLEG): container finished" podID="76a1d86d-887f-461d-9415-908540ed2f33" 
containerID="8fcaf6fdcf7a06e2eb824a2e922877222b49ba25c68fcf77c0bc6d1620d078f0" exitCode=143 Nov 21 12:14:09 crc kubenswrapper[4972]: I1121 12:14:09.849905 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76a1d86d-887f-461d-9415-908540ed2f33","Type":"ContainerDied","Data":"8fcaf6fdcf7a06e2eb824a2e922877222b49ba25c68fcf77c0bc6d1620d078f0"} Nov 21 12:14:09 crc kubenswrapper[4972]: I1121 12:14:09.853111 4972 generic.go:334] "Generic (PLEG): container finished" podID="8164b141-9e42-4a0c-b161-ec80323b043d" containerID="945bbb7cb188bde4d0c73027afd5b358190f93fc0121f0d6a899a8fa8f08064a" exitCode=143 Nov 21 12:14:09 crc kubenswrapper[4972]: I1121 12:14:09.853159 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8164b141-9e42-4a0c-b161-ec80323b043d","Type":"ContainerDied","Data":"945bbb7cb188bde4d0c73027afd5b358190f93fc0121f0d6a899a8fa8f08064a"} Nov 21 12:14:10 crc kubenswrapper[4972]: I1121 12:14:10.620909 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 21 12:14:10 crc kubenswrapper[4972]: I1121 12:14:10.736729 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da7f1a1-6ce5-468a-a84f-e12242d5539e-combined-ca-bundle\") pod \"3da7f1a1-6ce5-468a-a84f-e12242d5539e\" (UID: \"3da7f1a1-6ce5-468a-a84f-e12242d5539e\") " Nov 21 12:14:10 crc kubenswrapper[4972]: I1121 12:14:10.736975 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkfl5\" (UniqueName: \"kubernetes.io/projected/3da7f1a1-6ce5-468a-a84f-e12242d5539e-kube-api-access-rkfl5\") pod \"3da7f1a1-6ce5-468a-a84f-e12242d5539e\" (UID: \"3da7f1a1-6ce5-468a-a84f-e12242d5539e\") " Nov 21 12:14:10 crc kubenswrapper[4972]: I1121 12:14:10.737090 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da7f1a1-6ce5-468a-a84f-e12242d5539e-config-data\") pod \"3da7f1a1-6ce5-468a-a84f-e12242d5539e\" (UID: \"3da7f1a1-6ce5-468a-a84f-e12242d5539e\") " Nov 21 12:14:10 crc kubenswrapper[4972]: I1121 12:14:10.745148 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da7f1a1-6ce5-468a-a84f-e12242d5539e-kube-api-access-rkfl5" (OuterVolumeSpecName: "kube-api-access-rkfl5") pod "3da7f1a1-6ce5-468a-a84f-e12242d5539e" (UID: "3da7f1a1-6ce5-468a-a84f-e12242d5539e"). InnerVolumeSpecName "kube-api-access-rkfl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:14:10 crc kubenswrapper[4972]: I1121 12:14:10.775210 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da7f1a1-6ce5-468a-a84f-e12242d5539e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3da7f1a1-6ce5-468a-a84f-e12242d5539e" (UID: "3da7f1a1-6ce5-468a-a84f-e12242d5539e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:14:10 crc kubenswrapper[4972]: I1121 12:14:10.787784 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da7f1a1-6ce5-468a-a84f-e12242d5539e-config-data" (OuterVolumeSpecName: "config-data") pod "3da7f1a1-6ce5-468a-a84f-e12242d5539e" (UID: "3da7f1a1-6ce5-468a-a84f-e12242d5539e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:14:10 crc kubenswrapper[4972]: I1121 12:14:10.841894 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkfl5\" (UniqueName: \"kubernetes.io/projected/3da7f1a1-6ce5-468a-a84f-e12242d5539e-kube-api-access-rkfl5\") on node \"crc\" DevicePath \"\"" Nov 21 12:14:10 crc kubenswrapper[4972]: I1121 12:14:10.841983 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da7f1a1-6ce5-468a-a84f-e12242d5539e-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 12:14:10 crc kubenswrapper[4972]: I1121 12:14:10.841997 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da7f1a1-6ce5-468a-a84f-e12242d5539e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 12:14:10 crc kubenswrapper[4972]: I1121 12:14:10.864293 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3da7f1a1-6ce5-468a-a84f-e12242d5539e","Type":"ContainerDied","Data":"77791e6207ab88e2f41d6e1c9af7686f5da0eaee2b828fdcd70e21eaac083487"} Nov 21 12:14:10 crc kubenswrapper[4972]: I1121 12:14:10.864351 4972 scope.go:117] "RemoveContainer" containerID="736a65abba960b9bad9182628928af172f1d216c482a0987666c682cd8a7bc1d" Nov 21 12:14:10 crc kubenswrapper[4972]: I1121 12:14:10.864372 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 21 12:14:10 crc kubenswrapper[4972]: I1121 12:14:10.907897 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 12:14:10 crc kubenswrapper[4972]: I1121 12:14:10.929452 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 12:14:10 crc kubenswrapper[4972]: I1121 12:14:10.939045 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 12:14:10 crc kubenswrapper[4972]: E1121 12:14:10.940128 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da7f1a1-6ce5-468a-a84f-e12242d5539e" containerName="nova-cell1-conductor-conductor" Nov 21 12:14:10 crc kubenswrapper[4972]: I1121 12:14:10.940152 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da7f1a1-6ce5-468a-a84f-e12242d5539e" containerName="nova-cell1-conductor-conductor" Nov 21 12:14:10 crc kubenswrapper[4972]: I1121 12:14:10.940597 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da7f1a1-6ce5-468a-a84f-e12242d5539e" containerName="nova-cell1-conductor-conductor" Nov 21 12:14:10 crc kubenswrapper[4972]: I1121 12:14:10.942647 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 21 12:14:10 crc kubenswrapper[4972]: I1121 12:14:10.948518 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 12:14:10 crc kubenswrapper[4972]: I1121 12:14:10.988020 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 21 12:14:11 crc kubenswrapper[4972]: I1121 12:14:11.052297 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhmr8\" (UniqueName: \"kubernetes.io/projected/d4843821-347e-44d1-8c71-9f637fa97d72-kube-api-access-nhmr8\") pod \"nova-cell1-conductor-0\" (UID: \"d4843821-347e-44d1-8c71-9f637fa97d72\") " pod="openstack/nova-cell1-conductor-0" Nov 21 12:14:11 crc kubenswrapper[4972]: I1121 12:14:11.052367 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4843821-347e-44d1-8c71-9f637fa97d72-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d4843821-347e-44d1-8c71-9f637fa97d72\") " pod="openstack/nova-cell1-conductor-0" Nov 21 12:14:11 crc kubenswrapper[4972]: I1121 12:14:11.052389 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4843821-347e-44d1-8c71-9f637fa97d72-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d4843821-347e-44d1-8c71-9f637fa97d72\") " pod="openstack/nova-cell1-conductor-0" Nov 21 12:14:11 crc kubenswrapper[4972]: I1121 12:14:11.154355 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhmr8\" (UniqueName: \"kubernetes.io/projected/d4843821-347e-44d1-8c71-9f637fa97d72-kube-api-access-nhmr8\") pod \"nova-cell1-conductor-0\" (UID: \"d4843821-347e-44d1-8c71-9f637fa97d72\") " pod="openstack/nova-cell1-conductor-0" Nov 21 12:14:11 crc kubenswrapper[4972]: I1121 12:14:11.154420 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4843821-347e-44d1-8c71-9f637fa97d72-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d4843821-347e-44d1-8c71-9f637fa97d72\") " pod="openstack/nova-cell1-conductor-0" Nov 21 12:14:11 crc kubenswrapper[4972]: I1121 12:14:11.154439 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4843821-347e-44d1-8c71-9f637fa97d72-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d4843821-347e-44d1-8c71-9f637fa97d72\") " pod="openstack/nova-cell1-conductor-0" Nov 21 12:14:11 crc kubenswrapper[4972]: I1121 12:14:11.161808 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4843821-347e-44d1-8c71-9f637fa97d72-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d4843821-347e-44d1-8c71-9f637fa97d72\") " pod="openstack/nova-cell1-conductor-0" Nov 21 12:14:11 crc kubenswrapper[4972]: I1121 12:14:11.161948 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4843821-347e-44d1-8c71-9f637fa97d72-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d4843821-347e-44d1-8c71-9f637fa97d72\") " pod="openstack/nova-cell1-conductor-0" Nov 21 12:14:11 crc kubenswrapper[4972]: I1121 12:14:11.171016 4972 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhmr8\" (UniqueName: \"kubernetes.io/projected/d4843821-347e-44d1-8c71-9f637fa97d72-kube-api-access-nhmr8\") pod \"nova-cell1-conductor-0\" (UID: \"d4843821-347e-44d1-8c71-9f637fa97d72\") " pod="openstack/nova-cell1-conductor-0" Nov 21 12:14:11 crc kubenswrapper[4972]: I1121 12:14:11.304850 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 21 12:14:11 crc kubenswrapper[4972]: I1121 12:14:11.773626 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da7f1a1-6ce5-468a-a84f-e12242d5539e" path="/var/lib/kubelet/pods/3da7f1a1-6ce5-468a-a84f-e12242d5539e/volumes" Nov 21 12:14:11 crc kubenswrapper[4972]: W1121 12:14:11.812374 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4843821_347e_44d1_8c71_9f637fa97d72.slice/crio-79fc9e59f5e0ea8dcdf41f09aa712e8779fd1193995308bc63a46f6cabf07dda WatchSource:0}: Error finding container 79fc9e59f5e0ea8dcdf41f09aa712e8779fd1193995308bc63a46f6cabf07dda: Status 404 returned error can't find the container with id 79fc9e59f5e0ea8dcdf41f09aa712e8779fd1193995308bc63a46f6cabf07dda Nov 21 12:14:11 crc kubenswrapper[4972]: I1121 12:14:11.821888 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 12:14:11 crc kubenswrapper[4972]: I1121 12:14:11.880436 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d4843821-347e-44d1-8c71-9f637fa97d72","Type":"ContainerStarted","Data":"79fc9e59f5e0ea8dcdf41f09aa712e8779fd1193995308bc63a46f6cabf07dda"} Nov 21 12:14:12 crc kubenswrapper[4972]: I1121 12:14:12.899921 4972 generic.go:334] "Generic (PLEG): container finished" podID="9b0e5f3d-e84e-4866-81ca-119283c296d7" containerID="2bb09ea1fb99562ae2245d7d80a0a482ded0dc19db5a08e39c55e1f6bb81df59" exitCode=0 Nov 21 12:14:12 crc kubenswrapper[4972]: I1121 12:14:12.900024 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9b0e5f3d-e84e-4866-81ca-119283c296d7","Type":"ContainerDied","Data":"2bb09ea1fb99562ae2245d7d80a0a482ded0dc19db5a08e39c55e1f6bb81df59"} Nov 21 12:14:12 crc kubenswrapper[4972]: I1121 12:14:12.900211 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9b0e5f3d-e84e-4866-81ca-119283c296d7","Type":"ContainerDied","Data":"ed315efae2564545b449cc67dc698f57e395a14b5dc5c387c93e6b16c451e7ee"} Nov 21 12:14:12 crc kubenswrapper[4972]: I1121 12:14:12.900226 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed315efae2564545b449cc67dc698f57e395a14b5dc5c387c93e6b16c451e7ee" Nov 21 12:14:12 crc kubenswrapper[4972]: I1121 12:14:12.902780 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d4843821-347e-44d1-8c71-9f637fa97d72","Type":"ContainerStarted","Data":"63e05f3101fde9123d24ce4c75688ff32163505cfeb82ded545cbc71721ce229"} Nov 21 12:14:12 crc kubenswrapper[4972]: I1121 12:14:12.908646 4972 generic.go:334] "Generic (PLEG): container finished" podID="8319eeb4-9a52-4573-b264-22f703b195a8" containerID="f845e482af8c82e253ff34bcbcb151030a614818aabf6d1a4a10d471ceb3645f" exitCode=0 Nov 21 12:14:12 crc kubenswrapper[4972]: I1121 12:14:12.908676 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"8319eeb4-9a52-4573-b264-22f703b195a8","Type":"ContainerDied","Data":"f845e482af8c82e253ff34bcbcb151030a614818aabf6d1a4a10d471ceb3645f"} Nov 21 12:14:12 crc kubenswrapper[4972]: I1121 12:14:12.912481 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 12:14:12 crc kubenswrapper[4972]: I1121 12:14:12.925957 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.925806556 podStartE2EDuration="2.925806556s" podCreationTimestamp="2025-11-21 12:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 12:14:12.920120307 +0000 UTC m=+9198.029262805" watchObservedRunningTime="2025-11-21 12:14:12.925806556 +0000 UTC m=+9198.034949064" Nov 21 12:14:12 crc kubenswrapper[4972]: I1121 12:14:12.997729 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b0e5f3d-e84e-4866-81ca-119283c296d7-config-data\") pod \"9b0e5f3d-e84e-4866-81ca-119283c296d7\" (UID: \"9b0e5f3d-e84e-4866-81ca-119283c296d7\") " Nov 21 12:14:12 crc kubenswrapper[4972]: I1121 12:14:12.997815 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0e5f3d-e84e-4866-81ca-119283c296d7-combined-ca-bundle\") pod \"9b0e5f3d-e84e-4866-81ca-119283c296d7\" (UID: \"9b0e5f3d-e84e-4866-81ca-119283c296d7\") " Nov 21 12:14:12 crc kubenswrapper[4972]: I1121 12:14:12.998059 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvk9d\" (UniqueName: \"kubernetes.io/projected/9b0e5f3d-e84e-4866-81ca-119283c296d7-kube-api-access-gvk9d\") pod \"9b0e5f3d-e84e-4866-81ca-119283c296d7\" (UID: \"9b0e5f3d-e84e-4866-81ca-119283c296d7\") " Nov 21 12:14:13 crc kubenswrapper[4972]: I1121 12:14:13.005013 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b0e5f3d-e84e-4866-81ca-119283c296d7-kube-api-access-gvk9d" (OuterVolumeSpecName: "kube-api-access-gvk9d") pod "9b0e5f3d-e84e-4866-81ca-119283c296d7" (UID: "9b0e5f3d-e84e-4866-81ca-119283c296d7"). InnerVolumeSpecName "kube-api-access-gvk9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:14:13 crc kubenswrapper[4972]: I1121 12:14:13.031937 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b0e5f3d-e84e-4866-81ca-119283c296d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b0e5f3d-e84e-4866-81ca-119283c296d7" (UID: "9b0e5f3d-e84e-4866-81ca-119283c296d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:14:13 crc kubenswrapper[4972]: I1121 12:14:13.033769 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b0e5f3d-e84e-4866-81ca-119283c296d7-config-data" (OuterVolumeSpecName: "config-data") pod "9b0e5f3d-e84e-4866-81ca-119283c296d7" (UID: "9b0e5f3d-e84e-4866-81ca-119283c296d7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:14:13 crc kubenswrapper[4972]: I1121 12:14:13.101583 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvk9d\" (UniqueName: \"kubernetes.io/projected/9b0e5f3d-e84e-4866-81ca-119283c296d7-kube-api-access-gvk9d\") on node \"crc\" DevicePath \"\"" Nov 21 12:14:13 crc kubenswrapper[4972]: I1121 12:14:13.101646 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b0e5f3d-e84e-4866-81ca-119283c296d7-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 12:14:13 crc kubenswrapper[4972]: I1121 12:14:13.101660 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0e5f3d-e84e-4866-81ca-119283c296d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 12:14:13 crc kubenswrapper[4972]: I1121 12:14:13.156038 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 12:14:13 crc kubenswrapper[4972]: I1121 12:14:13.304883 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8319eeb4-9a52-4573-b264-22f703b195a8-combined-ca-bundle\") pod \"8319eeb4-9a52-4573-b264-22f703b195a8\" (UID: \"8319eeb4-9a52-4573-b264-22f703b195a8\") " Nov 21 12:14:13 crc kubenswrapper[4972]: I1121 12:14:13.304987 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8319eeb4-9a52-4573-b264-22f703b195a8-config-data\") pod \"8319eeb4-9a52-4573-b264-22f703b195a8\" (UID: \"8319eeb4-9a52-4573-b264-22f703b195a8\") " Nov 21 12:14:13 crc kubenswrapper[4972]: I1121 12:14:13.305053 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzn26\" (UniqueName: \"kubernetes.io/projected/8319eeb4-9a52-4573-b264-22f703b195a8-kube-api-access-qzn26\") pod \"8319eeb4-9a52-4573-b264-22f703b195a8\" (UID: \"8319eeb4-9a52-4573-b264-22f703b195a8\") " Nov 21 12:14:13 crc kubenswrapper[4972]: I1121 12:14:13.308097 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8319eeb4-9a52-4573-b264-22f703b195a8-kube-api-access-qzn26" (OuterVolumeSpecName: "kube-api-access-qzn26") pod "8319eeb4-9a52-4573-b264-22f703b195a8" (UID: "8319eeb4-9a52-4573-b264-22f703b195a8"). InnerVolumeSpecName "kube-api-access-qzn26". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:14:13 crc kubenswrapper[4972]: I1121 12:14:13.333407 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8319eeb4-9a52-4573-b264-22f703b195a8-config-data" (OuterVolumeSpecName: "config-data") pod "8319eeb4-9a52-4573-b264-22f703b195a8" (UID: "8319eeb4-9a52-4573-b264-22f703b195a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:14:13 crc kubenswrapper[4972]: I1121 12:14:13.336331 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8319eeb4-9a52-4573-b264-22f703b195a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8319eeb4-9a52-4573-b264-22f703b195a8" (UID: "8319eeb4-9a52-4573-b264-22f703b195a8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:14:13 crc kubenswrapper[4972]: I1121 12:14:13.407466 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzn26\" (UniqueName: \"kubernetes.io/projected/8319eeb4-9a52-4573-b264-22f703b195a8-kube-api-access-qzn26\") on node \"crc\" DevicePath \"\"" Nov 21 12:14:13 crc kubenswrapper[4972]: I1121 12:14:13.407501 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8319eeb4-9a52-4573-b264-22f703b195a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 12:14:13 crc kubenswrapper[4972]: I1121 12:14:13.407510 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8319eeb4-9a52-4573-b264-22f703b195a8-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 12:14:13 crc kubenswrapper[4972]: I1121 12:14:13.919512 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 12:14:13 crc kubenswrapper[4972]: I1121 12:14:13.919562 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.023007 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8164b141-9e42-4a0c-b161-ec80323b043d" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.79:8775/\": read tcp 10.217.0.2:43260->10.217.1.79:8775: read: connection reset by peer" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.023064 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8164b141-9e42-4a0c-b161-ec80323b043d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.79:8775/\": read tcp 10.217.0.2:43262->10.217.1.79:8775: read: connection reset by peer" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.082279 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.082316 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.082335 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8319eeb4-9a52-4573-b264-22f703b195a8","Type":"ContainerDied","Data":"3892f96fa94ea1a73816450078e40944ee7bfde1bdc5b9561682b51c4e2b92d4"} Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.082373 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.082392 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.082403 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.082415 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.082489 4972 scope.go:117] "RemoveContainer" containerID="f845e482af8c82e253ff34bcbcb151030a614818aabf6d1a4a10d471ceb3645f" Nov 21 12:14:14 crc kubenswrapper[4972]: E1121 12:14:14.082915 4972 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9b0e5f3d-e84e-4866-81ca-119283c296d7" containerName="nova-cell0-conductor-conductor" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.082944 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0e5f3d-e84e-4866-81ca-119283c296d7" containerName="nova-cell0-conductor-conductor" Nov 21 12:14:14 crc kubenswrapper[4972]: E1121 12:14:14.082968 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8319eeb4-9a52-4573-b264-22f703b195a8" containerName="nova-scheduler-scheduler" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.082975 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8319eeb4-9a52-4573-b264-22f703b195a8" containerName="nova-scheduler-scheduler" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.083267 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0e5f3d-e84e-4866-81ca-119283c296d7" containerName="nova-cell0-conductor-conductor" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.083301 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="8319eeb4-9a52-4573-b264-22f703b195a8" containerName="nova-scheduler-scheduler" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.084140 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.084166 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.084274 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.085308 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.085361 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.086859 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.088798 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.229219 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/912d0781-7cd0-435a-af6c-3e64c68f94bc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"912d0781-7cd0-435a-af6c-3e64c68f94bc\") " pod="openstack/nova-cell0-conductor-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.229344 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/912d0781-7cd0-435a-af6c-3e64c68f94bc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"912d0781-7cd0-435a-af6c-3e64c68f94bc\") " pod="openstack/nova-cell0-conductor-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.229484 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g25f8\" (UniqueName: \"kubernetes.io/projected/912d0781-7cd0-435a-af6c-3e64c68f94bc-kube-api-access-g25f8\") pod \"nova-cell0-conductor-0\" (UID: \"912d0781-7cd0-435a-af6c-3e64c68f94bc\") " pod="openstack/nova-cell0-conductor-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.229536 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7e4d32-26f0-447a-97eb-17942860126a-config-data\") pod \"nova-scheduler-0\" (UID: \"9f7e4d32-26f0-447a-97eb-17942860126a\") " pod="openstack/nova-scheduler-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.229576 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7e4d32-26f0-447a-97eb-17942860126a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9f7e4d32-26f0-447a-97eb-17942860126a\") " pod="openstack/nova-scheduler-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.229645 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s52rl\" (UniqueName: \"kubernetes.io/projected/9f7e4d32-26f0-447a-97eb-17942860126a-kube-api-access-s52rl\") pod \"nova-scheduler-0\" (UID: \"9f7e4d32-26f0-447a-97eb-17942860126a\") " pod="openstack/nova-scheduler-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.331884 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g25f8\" (UniqueName: \"kubernetes.io/projected/912d0781-7cd0-435a-af6c-3e64c68f94bc-kube-api-access-g25f8\") pod \"nova-cell0-conductor-0\" (UID: \"912d0781-7cd0-435a-af6c-3e64c68f94bc\") " pod="openstack/nova-cell0-conductor-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.331965 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7e4d32-26f0-447a-97eb-17942860126a-config-data\") pod \"nova-scheduler-0\" (UID: \"9f7e4d32-26f0-447a-97eb-17942860126a\") " pod="openstack/nova-scheduler-0" Nov 21 12:14:14 crc kubenswrapper[4972]: 
I1121 12:14:14.331988 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7e4d32-26f0-447a-97eb-17942860126a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9f7e4d32-26f0-447a-97eb-17942860126a\") " pod="openstack/nova-scheduler-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.332004 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s52rl\" (UniqueName: \"kubernetes.io/projected/9f7e4d32-26f0-447a-97eb-17942860126a-kube-api-access-s52rl\") pod \"nova-scheduler-0\" (UID: \"9f7e4d32-26f0-447a-97eb-17942860126a\") " pod="openstack/nova-scheduler-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.332074 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/912d0781-7cd0-435a-af6c-3e64c68f94bc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"912d0781-7cd0-435a-af6c-3e64c68f94bc\") " pod="openstack/nova-cell0-conductor-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.332105 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/912d0781-7cd0-435a-af6c-3e64c68f94bc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"912d0781-7cd0-435a-af6c-3e64c68f94bc\") " pod="openstack/nova-cell0-conductor-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.338328 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/912d0781-7cd0-435a-af6c-3e64c68f94bc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"912d0781-7cd0-435a-af6c-3e64c68f94bc\") " pod="openstack/nova-cell0-conductor-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.338554 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7e4d32-26f0-447a-97eb-17942860126a-config-data\") pod \"nova-scheduler-0\" (UID: \"9f7e4d32-26f0-447a-97eb-17942860126a\") " pod="openstack/nova-scheduler-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.339807 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7e4d32-26f0-447a-97eb-17942860126a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9f7e4d32-26f0-447a-97eb-17942860126a\") " pod="openstack/nova-scheduler-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.340140 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/912d0781-7cd0-435a-af6c-3e64c68f94bc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"912d0781-7cd0-435a-af6c-3e64c68f94bc\") " pod="openstack/nova-cell0-conductor-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.356410 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g25f8\" (UniqueName: \"kubernetes.io/projected/912d0781-7cd0-435a-af6c-3e64c68f94bc-kube-api-access-g25f8\") pod \"nova-cell0-conductor-0\" (UID: \"912d0781-7cd0-435a-af6c-3e64c68f94bc\") " pod="openstack/nova-cell0-conductor-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.357802 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s52rl\" (UniqueName: \"kubernetes.io/projected/9f7e4d32-26f0-447a-97eb-17942860126a-kube-api-access-s52rl\") pod 
\"nova-scheduler-0\" (UID: \"9f7e4d32-26f0-447a-97eb-17942860126a\") " pod="openstack/nova-scheduler-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.425102 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.435216 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.535176 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="76a1d86d-887f-461d-9415-908540ed2f33" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.78:8774/\": dial tcp 10.217.1.78:8774: connect: connection refused" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.535261 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="76a1d86d-887f-461d-9415-908540ed2f33" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.78:8774/\": dial tcp 10.217.1.78:8774: connect: connection refused" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.555734 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8164b141-9e42-4a0c-b161-ec80323b043d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.79:8775/\": dial tcp 10.217.1.79:8775: connect: connection refused" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.556051 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8164b141-9e42-4a0c-b161-ec80323b043d" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.79:8775/\": dial tcp 10.217.1.79:8775: connect: connection refused" Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.942257 4972 generic.go:334] "Generic (PLEG): container finished" podID="8164b141-9e42-4a0c-b161-ec80323b043d" containerID="aeb921465259a6901b1e4e8704fd6d97e5ba8bae2eaf7347cdd19708e3e7fa59" exitCode=0 Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.942321 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8164b141-9e42-4a0c-b161-ec80323b043d","Type":"ContainerDied","Data":"aeb921465259a6901b1e4e8704fd6d97e5ba8bae2eaf7347cdd19708e3e7fa59"} Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.950348 4972 generic.go:334] "Generic (PLEG): container finished" podID="76a1d86d-887f-461d-9415-908540ed2f33" containerID="5026fd1e01357d7e018deab68e34ea1c38d3ba63c5c7241042861939f452af5e" exitCode=0 Nov 21 12:14:14 crc kubenswrapper[4972]: I1121 12:14:14.950413 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76a1d86d-887f-461d-9415-908540ed2f33","Type":"ContainerDied","Data":"5026fd1e01357d7e018deab68e34ea1c38d3ba63c5c7241042861939f452af5e"} Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.274610 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.387992 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 12:14:15 crc kubenswrapper[4972]: W1121 12:14:15.393088 4972 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f7e4d32_26f0_447a_97eb_17942860126a.slice/crio-bfcbfb9214cc9531df918efa43d238fd114b88abad06a6044e0fe1b751119faf WatchSource:0}: Error finding container bfcbfb9214cc9531df918efa43d238fd114b88abad06a6044e0fe1b751119faf: Status 404 returned error can't find the container with id bfcbfb9214cc9531df918efa43d238fd114b88abad06a6044e0fe1b751119faf Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.483624 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.520047 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.566580 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8164b141-9e42-4a0c-b161-ec80323b043d-config-data\") pod \"8164b141-9e42-4a0c-b161-ec80323b043d\" (UID: \"8164b141-9e42-4a0c-b161-ec80323b043d\") " Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.566627 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8164b141-9e42-4a0c-b161-ec80323b043d-logs\") pod \"8164b141-9e42-4a0c-b161-ec80323b043d\" (UID: \"8164b141-9e42-4a0c-b161-ec80323b043d\") " Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.566783 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78bkw\" (UniqueName: \"kubernetes.io/projected/8164b141-9e42-4a0c-b161-ec80323b043d-kube-api-access-78bkw\") pod \"8164b141-9e42-4a0c-b161-ec80323b043d\" (UID: \"8164b141-9e42-4a0c-b161-ec80323b043d\") " Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.566870 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8164b141-9e42-4a0c-b161-ec80323b043d-combined-ca-bundle\") pod \"8164b141-9e42-4a0c-b161-ec80323b043d\" (UID: \"8164b141-9e42-4a0c-b161-ec80323b043d\") " Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.568466 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8164b141-9e42-4a0c-b161-ec80323b043d-logs" (OuterVolumeSpecName: "logs") pod "8164b141-9e42-4a0c-b161-ec80323b043d" (UID: "8164b141-9e42-4a0c-b161-ec80323b043d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.583647 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8164b141-9e42-4a0c-b161-ec80323b043d-kube-api-access-78bkw" (OuterVolumeSpecName: "kube-api-access-78bkw") pod "8164b141-9e42-4a0c-b161-ec80323b043d" (UID: "8164b141-9e42-4a0c-b161-ec80323b043d"). InnerVolumeSpecName "kube-api-access-78bkw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.671562 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a1d86d-887f-461d-9415-908540ed2f33-combined-ca-bundle\") pod \"76a1d86d-887f-461d-9415-908540ed2f33\" (UID: \"76a1d86d-887f-461d-9415-908540ed2f33\") " Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.671748 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a1d86d-887f-461d-9415-908540ed2f33-config-data\") pod \"76a1d86d-887f-461d-9415-908540ed2f33\" (UID: \"76a1d86d-887f-461d-9415-908540ed2f33\") " Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.671770 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-788kn\" (UniqueName: \"kubernetes.io/projected/76a1d86d-887f-461d-9415-908540ed2f33-kube-api-access-788kn\") pod \"76a1d86d-887f-461d-9415-908540ed2f33\" (UID: \"76a1d86d-887f-461d-9415-908540ed2f33\") " Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.671900 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76a1d86d-887f-461d-9415-908540ed2f33-logs\") pod \"76a1d86d-887f-461d-9415-908540ed2f33\" (UID: \"76a1d86d-887f-461d-9415-908540ed2f33\") " Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.672392 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8164b141-9e42-4a0c-b161-ec80323b043d-logs\") on node \"crc\" DevicePath \"\"" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.672412 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78bkw\" (UniqueName: \"kubernetes.io/projected/8164b141-9e42-4a0c-b161-ec80323b043d-kube-api-access-78bkw\") on node \"crc\" DevicePath \"\"" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.675573 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76a1d86d-887f-461d-9415-908540ed2f33-logs" (OuterVolumeSpecName: "logs") pod "76a1d86d-887f-461d-9415-908540ed2f33" (UID: "76a1d86d-887f-461d-9415-908540ed2f33"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.692845 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76a1d86d-887f-461d-9415-908540ed2f33-kube-api-access-788kn" (OuterVolumeSpecName: "kube-api-access-788kn") pod "76a1d86d-887f-461d-9415-908540ed2f33" (UID: "76a1d86d-887f-461d-9415-908540ed2f33"). InnerVolumeSpecName "kube-api-access-788kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.760357 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8164b141-9e42-4a0c-b161-ec80323b043d-config-data" (OuterVolumeSpecName: "config-data") pod "8164b141-9e42-4a0c-b161-ec80323b043d" (UID: "8164b141-9e42-4a0c-b161-ec80323b043d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.766022 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a1d86d-887f-461d-9415-908540ed2f33-config-data" (OuterVolumeSpecName: "config-data") pod "76a1d86d-887f-461d-9415-908540ed2f33" (UID: "76a1d86d-887f-461d-9415-908540ed2f33"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.777284 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a1d86d-887f-461d-9415-908540ed2f33-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.779098 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-788kn\" (UniqueName: \"kubernetes.io/projected/76a1d86d-887f-461d-9415-908540ed2f33-kube-api-access-788kn\") on node \"crc\" DevicePath \"\"" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.779125 4972 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76a1d86d-887f-461d-9415-908540ed2f33-logs\") on node \"crc\" DevicePath \"\"" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.779139 4972 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8164b141-9e42-4a0c-b161-ec80323b043d-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.794249 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8319eeb4-9a52-4573-b264-22f703b195a8" path="/var/lib/kubelet/pods/8319eeb4-9a52-4573-b264-22f703b195a8/volumes" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.795168 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b0e5f3d-e84e-4866-81ca-119283c296d7" path="/var/lib/kubelet/pods/9b0e5f3d-e84e-4866-81ca-119283c296d7/volumes" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.801147 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8164b141-9e42-4a0c-b161-ec80323b043d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8164b141-9e42-4a0c-b161-ec80323b043d" (UID: "8164b141-9e42-4a0c-b161-ec80323b043d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.819995 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a1d86d-887f-461d-9415-908540ed2f33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76a1d86d-887f-461d-9415-908540ed2f33" (UID: "76a1d86d-887f-461d-9415-908540ed2f33"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.881522 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a1d86d-887f-461d-9415-908540ed2f33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.881553 4972 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8164b141-9e42-4a0c-b161-ec80323b043d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.904262 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4wl7v" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.987913 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"912d0781-7cd0-435a-af6c-3e64c68f94bc","Type":"ContainerStarted","Data":"b74476e6ec48346575a228c138c39bc424c36c48c8672bfee14cf25a846b59de"} Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.987967 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"912d0781-7cd0-435a-af6c-3e64c68f94bc","Type":"ContainerStarted","Data":"9a21c875e8d0bd2d40af7a8a341ea34bff9848f406f00c82664cb5e5a841c85b"} Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.988417 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.992973 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9f7e4d32-26f0-447a-97eb-17942860126a","Type":"ContainerStarted","Data":"a0394fd0b8d8a7086b75ef0f85fdcc374125e067ced8d9501f0a25c272cd32a1"} Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.993005 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9f7e4d32-26f0-447a-97eb-17942860126a","Type":"ContainerStarted","Data":"bfcbfb9214cc9531df918efa43d238fd114b88abad06a6044e0fe1b751119faf"} Nov 21 12:14:15 crc kubenswrapper[4972]: I1121 12:14:15.996980 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4wl7v"] Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.000932 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76a1d86d-887f-461d-9415-908540ed2f33","Type":"ContainerDied","Data":"0015ed2c9719393d5275d8ca5d21b44f5c2aecbad9bd2295303237e55f9faeb6"} Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.001334 4972 scope.go:117] "RemoveContainer" containerID="5026fd1e01357d7e018deab68e34ea1c38d3ba63c5c7241042861939f452af5e" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.000946 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.006370 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8164b141-9e42-4a0c-b161-ec80323b043d","Type":"ContainerDied","Data":"bad735de138a68cdbc313cb6971a292b79dc3bfeecf5683fedbe8caa14e8181e"} Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.006551 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4wl7v" podUID="afc05f73-4c78-4650-8d71-9e49c2a7b624" containerName="registry-server" containerID="cri-o://e6b997ead5364f907f3712f1fb460193638cbc05b6e22c085256639277f0ee4b" gracePeriod=2 Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.012305 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.014044 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.014022862 podStartE2EDuration="3.014022862s" podCreationTimestamp="2025-11-21 12:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 12:14:16.009228436 +0000 UTC m=+9201.118370954" watchObservedRunningTime="2025-11-21 12:14:16.014022862 +0000 UTC m=+9201.123165360" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.036058 4972 scope.go:117] "RemoveContainer" containerID="8fcaf6fdcf7a06e2eb824a2e922877222b49ba25c68fcf77c0bc6d1620d078f0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.068287 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.077816 4972 scope.go:117] "RemoveContainer" containerID="aeb921465259a6901b1e4e8704fd6d97e5ba8bae2eaf7347cdd19708e3e7fa59" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.086221 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.102501 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 21 12:14:16 crc kubenswrapper[4972]: E1121 12:14:16.103008 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76a1d86d-887f-461d-9415-908540ed2f33" containerName="nova-api-api" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.103023 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="76a1d86d-887f-461d-9415-908540ed2f33" containerName="nova-api-api" Nov 21 12:14:16 crc kubenswrapper[4972]: E1121 12:14:16.103048 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76a1d86d-887f-461d-9415-908540ed2f33" containerName="nova-api-log" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.103054 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="76a1d86d-887f-461d-9415-908540ed2f33" containerName="nova-api-log" Nov 21 12:14:16 crc kubenswrapper[4972]: E1121 12:14:16.103068 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8164b141-9e42-4a0c-b161-ec80323b043d" containerName="nova-metadata-metadata" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.103074 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8164b141-9e42-4a0c-b161-ec80323b043d" containerName="nova-metadata-metadata" Nov 21 12:14:16 crc kubenswrapper[4972]: E1121 12:14:16.103083 4972 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8164b141-9e42-4a0c-b161-ec80323b043d" containerName="nova-metadata-log" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.103090 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8164b141-9e42-4a0c-b161-ec80323b043d" containerName="nova-metadata-log" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.103336 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="76a1d86d-887f-461d-9415-908540ed2f33" containerName="nova-api-log" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.103360 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="76a1d86d-887f-461d-9415-908540ed2f33" containerName="nova-api-api" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.103368 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="8164b141-9e42-4a0c-b161-ec80323b043d" containerName="nova-metadata-log" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.103378 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="8164b141-9e42-4a0c-b161-ec80323b043d" containerName="nova-metadata-metadata" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.105126 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.111789 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.123333 4972 scope.go:117] "RemoveContainer" containerID="945bbb7cb188bde4d0c73027afd5b358190f93fc0121f0d6a899a8fa8f08064a" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.132422 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.167983 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.189271 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.189321 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205-logs\") pod \"nova-metadata-0\" (UID: \"7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205\") " pod="openstack/nova-metadata-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.189377 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205-config-data\") pod \"nova-metadata-0\" (UID: \"7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205\") " pod="openstack/nova-metadata-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.189405 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzmj2\" (UniqueName: \"kubernetes.io/projected/7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205-kube-api-access-fzmj2\") pod \"nova-metadata-0\" (UID: \"7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205\") " pod="openstack/nova-metadata-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.189480 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205\") " pod="openstack/nova-metadata-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.258196 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.275318 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.283391 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.291594 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205\") " pod="openstack/nova-metadata-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.291733 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205-logs\") pod \"nova-metadata-0\" (UID: \"7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205\") " pod="openstack/nova-metadata-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.291770 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205-config-data\") pod \"nova-metadata-0\" (UID: \"7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205\") " pod="openstack/nova-metadata-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.291794 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzmj2\" (UniqueName: \"kubernetes.io/projected/7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205-kube-api-access-fzmj2\") pod \"nova-metadata-0\" (UID: \"7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205\") " pod="openstack/nova-metadata-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.292354 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205-logs\") pod \"nova-metadata-0\" (UID: \"7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205\") " pod="openstack/nova-metadata-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.303195 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.404239 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2348570c-90b9-4153-98fa-49aea6529eb1-config-data\") pod \"nova-api-0\" (UID: \"2348570c-90b9-4153-98fa-49aea6529eb1\") " pod="openstack/nova-api-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.404417 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2348570c-90b9-4153-98fa-49aea6529eb1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2348570c-90b9-4153-98fa-49aea6529eb1\") " pod="openstack/nova-api-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.404600 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktvsq\" (UniqueName: \"kubernetes.io/projected/2348570c-90b9-4153-98fa-49aea6529eb1-kube-api-access-ktvsq\") pod \"nova-api-0\" (UID: 
\"2348570c-90b9-4153-98fa-49aea6529eb1\") " pod="openstack/nova-api-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.404725 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2348570c-90b9-4153-98fa-49aea6529eb1-logs\") pod \"nova-api-0\" (UID: \"2348570c-90b9-4153-98fa-49aea6529eb1\") " pod="openstack/nova-api-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.513433 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2348570c-90b9-4153-98fa-49aea6529eb1-config-data\") pod \"nova-api-0\" (UID: \"2348570c-90b9-4153-98fa-49aea6529eb1\") " pod="openstack/nova-api-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.513575 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2348570c-90b9-4153-98fa-49aea6529eb1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2348570c-90b9-4153-98fa-49aea6529eb1\") " pod="openstack/nova-api-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.513667 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktvsq\" (UniqueName: \"kubernetes.io/projected/2348570c-90b9-4153-98fa-49aea6529eb1-kube-api-access-ktvsq\") pod \"nova-api-0\" (UID: \"2348570c-90b9-4153-98fa-49aea6529eb1\") " pod="openstack/nova-api-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.513732 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2348570c-90b9-4153-98fa-49aea6529eb1-logs\") pod \"nova-api-0\" (UID: \"2348570c-90b9-4153-98fa-49aea6529eb1\") " pod="openstack/nova-api-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.514483 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2348570c-90b9-4153-98fa-49aea6529eb1-logs\") pod \"nova-api-0\" (UID: \"2348570c-90b9-4153-98fa-49aea6529eb1\") " pod="openstack/nova-api-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.945058 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzmj2\" (UniqueName: \"kubernetes.io/projected/7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205-kube-api-access-fzmj2\") pod \"nova-metadata-0\" (UID: \"7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205\") " pod="openstack/nova-metadata-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.945204 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205\") " pod="openstack/nova-metadata-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.945228 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2348570c-90b9-4153-98fa-49aea6529eb1-config-data\") pod \"nova-api-0\" (UID: \"2348570c-90b9-4153-98fa-49aea6529eb1\") " pod="openstack/nova-api-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.950442 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205-config-data\") pod \"nova-metadata-0\" (UID: \"7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205\") " 
pod="openstack/nova-metadata-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.950601 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktvsq\" (UniqueName: \"kubernetes.io/projected/2348570c-90b9-4153-98fa-49aea6529eb1-kube-api-access-ktvsq\") pod \"nova-api-0\" (UID: \"2348570c-90b9-4153-98fa-49aea6529eb1\") " pod="openstack/nova-api-0" Nov 21 12:14:16 crc kubenswrapper[4972]: I1121 12:14:16.960678 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2348570c-90b9-4153-98fa-49aea6529eb1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2348570c-90b9-4153-98fa-49aea6529eb1\") " pod="openstack/nova-api-0" Nov 21 12:14:17 crc kubenswrapper[4972]: I1121 12:14:17.033591 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 12:14:17 crc kubenswrapper[4972]: I1121 12:14:17.033845 4972 generic.go:334] "Generic (PLEG): container finished" podID="afc05f73-4c78-4650-8d71-9e49c2a7b624" containerID="e6b997ead5364f907f3712f1fb460193638cbc05b6e22c085256639277f0ee4b" exitCode=0 Nov 21 12:14:17 crc kubenswrapper[4972]: I1121 12:14:17.033874 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wl7v" event={"ID":"afc05f73-4c78-4650-8d71-9e49c2a7b624","Type":"ContainerDied","Data":"e6b997ead5364f907f3712f1fb460193638cbc05b6e22c085256639277f0ee4b"} Nov 21 12:14:17 crc kubenswrapper[4972]: I1121 12:14:17.034236 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wl7v" event={"ID":"afc05f73-4c78-4650-8d71-9e49c2a7b624","Type":"ContainerDied","Data":"8c52915af3212cd42350802ca43fce1b0844cd7b02de356c5627d7bd2e638421"} Nov 21 12:14:17 crc kubenswrapper[4972]: I1121 12:14:17.034263 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c52915af3212cd42350802ca43fce1b0844cd7b02de356c5627d7bd2e638421" Nov 21 12:14:17 crc kubenswrapper[4972]: I1121 12:14:17.191203 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 12:14:17 crc kubenswrapper[4972]: I1121 12:14:17.302537 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4wl7v" Nov 21 12:14:17 crc kubenswrapper[4972]: I1121 12:14:17.334883 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.33485808 podStartE2EDuration="4.33485808s" podCreationTimestamp="2025-11-21 12:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 12:14:17.059270487 +0000 UTC m=+9202.168412985" watchObservedRunningTime="2025-11-21 12:14:17.33485808 +0000 UTC m=+9202.444000598" Nov 21 12:14:17 crc kubenswrapper[4972]: I1121 12:14:17.434478 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc05f73-4c78-4650-8d71-9e49c2a7b624-catalog-content\") pod \"afc05f73-4c78-4650-8d71-9e49c2a7b624\" (UID: \"afc05f73-4c78-4650-8d71-9e49c2a7b624\") " Nov 21 12:14:17 crc kubenswrapper[4972]: I1121 12:14:17.434540 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc05f73-4c78-4650-8d71-9e49c2a7b624-utilities\") pod \"afc05f73-4c78-4650-8d71-9e49c2a7b624\" (UID: \"afc05f73-4c78-4650-8d71-9e49c2a7b624\") " Nov 21 12:14:17 crc kubenswrapper[4972]: I1121 12:14:17.434665 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r89k4\" (UniqueName: \"kubernetes.io/projected/afc05f73-4c78-4650-8d71-9e49c2a7b624-kube-api-access-r89k4\") pod \"afc05f73-4c78-4650-8d71-9e49c2a7b624\" (UID: \"afc05f73-4c78-4650-8d71-9e49c2a7b624\") " Nov 21 12:14:17 crc kubenswrapper[4972]: I1121 12:14:17.436890 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afc05f73-4c78-4650-8d71-9e49c2a7b624-utilities" (OuterVolumeSpecName: "utilities") pod "afc05f73-4c78-4650-8d71-9e49c2a7b624" (UID: "afc05f73-4c78-4650-8d71-9e49c2a7b624"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:14:17 crc kubenswrapper[4972]: I1121 12:14:17.444318 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afc05f73-4c78-4650-8d71-9e49c2a7b624-kube-api-access-r89k4" (OuterVolumeSpecName: "kube-api-access-r89k4") pod "afc05f73-4c78-4650-8d71-9e49c2a7b624" (UID: "afc05f73-4c78-4650-8d71-9e49c2a7b624"). InnerVolumeSpecName "kube-api-access-r89k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:14:17 crc kubenswrapper[4972]: I1121 12:14:17.495182 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afc05f73-4c78-4650-8d71-9e49c2a7b624-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afc05f73-4c78-4650-8d71-9e49c2a7b624" (UID: "afc05f73-4c78-4650-8d71-9e49c2a7b624"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:14:17 crc kubenswrapper[4972]: I1121 12:14:17.533343 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 12:14:17 crc kubenswrapper[4972]: I1121 12:14:17.537110 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r89k4\" (UniqueName: \"kubernetes.io/projected/afc05f73-4c78-4650-8d71-9e49c2a7b624-kube-api-access-r89k4\") on node \"crc\" DevicePath \"\"" Nov 21 12:14:17 crc kubenswrapper[4972]: I1121 12:14:17.537142 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc05f73-4c78-4650-8d71-9e49c2a7b624-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 12:14:17 crc kubenswrapper[4972]: I1121 12:14:17.537156 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc05f73-4c78-4650-8d71-9e49c2a7b624-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 12:14:17 crc kubenswrapper[4972]: W1121 12:14:17.539291 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2348570c_90b9_4153_98fa_49aea6529eb1.slice/crio-d0bc6294cd6cc1c9141c0c42461aa206a1c79c0e38c5f64f80a2daf26382a8c9 WatchSource:0}: Error finding container d0bc6294cd6cc1c9141c0c42461aa206a1c79c0e38c5f64f80a2daf26382a8c9: Status 404 returned error can't find the container with id d0bc6294cd6cc1c9141c0c42461aa206a1c79c0e38c5f64f80a2daf26382a8c9 Nov 21 12:14:17 crc kubenswrapper[4972]: I1121 12:14:17.693314 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 12:14:17 crc kubenswrapper[4972]: W1121 12:14:17.698173 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bbbe9d3_ca15_45c2_9cf1_0cd0d9e5b205.slice/crio-54219904ef91013707fb5de5984d08ded36da5ce4ee04faabab4a97bf7eccdf2 WatchSource:0}: Error finding container 54219904ef91013707fb5de5984d08ded36da5ce4ee04faabab4a97bf7eccdf2: Status 404 returned error can't find the container with id 54219904ef91013707fb5de5984d08ded36da5ce4ee04faabab4a97bf7eccdf2 Nov 21 12:14:17 crc kubenswrapper[4972]: I1121 12:14:17.790804 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76a1d86d-887f-461d-9415-908540ed2f33" path="/var/lib/kubelet/pods/76a1d86d-887f-461d-9415-908540ed2f33/volumes" Nov 21 12:14:17 crc kubenswrapper[4972]: I1121 12:14:17.792024 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8164b141-9e42-4a0c-b161-ec80323b043d" path="/var/lib/kubelet/pods/8164b141-9e42-4a0c-b161-ec80323b043d/volumes" Nov 21 12:14:17 crc kubenswrapper[4972]: E1121 12:14:17.921339 4972 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafc05f73_4c78_4650_8d71_9e49c2a7b624.slice\": RecentStats: unable to find data in memory cache]" Nov 21 12:14:18 crc kubenswrapper[4972]: I1121 12:14:18.044623 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2348570c-90b9-4153-98fa-49aea6529eb1","Type":"ContainerStarted","Data":"76cd447f35f7de218edecb416a1f7ef2fc91bc46b53602e33d6756573070c698"} Nov 21 12:14:18 crc kubenswrapper[4972]: I1121 12:14:18.044951 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"2348570c-90b9-4153-98fa-49aea6529eb1","Type":"ContainerStarted","Data":"c893ccfe47bd3999c7c6d3edf5a0602e716b3f206881e5beb97367747353b522"} Nov 21 12:14:18 crc kubenswrapper[4972]: I1121 12:14:18.044972 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2348570c-90b9-4153-98fa-49aea6529eb1","Type":"ContainerStarted","Data":"d0bc6294cd6cc1c9141c0c42461aa206a1c79c0e38c5f64f80a2daf26382a8c9"} Nov 21 12:14:18 crc kubenswrapper[4972]: I1121 12:14:18.045701 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205","Type":"ContainerStarted","Data":"ea05184e8853052bec6ab7fa6f326cbc7ac50ed35d6ea8f6684aeef99734318b"} Nov 21 12:14:18 crc kubenswrapper[4972]: I1121 12:14:18.045754 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205","Type":"ContainerStarted","Data":"54219904ef91013707fb5de5984d08ded36da5ce4ee04faabab4a97bf7eccdf2"} Nov 21 12:14:18 crc kubenswrapper[4972]: I1121 12:14:18.045718 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wl7v" Nov 21 12:14:18 crc kubenswrapper[4972]: I1121 12:14:18.071232 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.071207626 podStartE2EDuration="2.071207626s" podCreationTimestamp="2025-11-21 12:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 12:14:18.059557861 +0000 UTC m=+9203.168700379" watchObservedRunningTime="2025-11-21 12:14:18.071207626 +0000 UTC m=+9203.180350134" Nov 21 12:14:18 crc kubenswrapper[4972]: I1121 12:14:18.169503 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4wl7v"] Nov 21 12:14:18 crc kubenswrapper[4972]: I1121 12:14:18.187929 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4wl7v"] Nov 21 12:14:19 crc kubenswrapper[4972]: I1121 12:14:19.058534 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205","Type":"ContainerStarted","Data":"0aa5eb6eb2e460c057ac1acd99b150b2c24a9fd3376000116d90d83c5c7cbcb8"} Nov 21 12:14:19 crc kubenswrapper[4972]: I1121 12:14:19.081007 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.08099023 podStartE2EDuration="3.08099023s" podCreationTimestamp="2025-11-21 12:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 12:14:19.079931242 +0000 UTC m=+9204.189073770" watchObservedRunningTime="2025-11-21 12:14:19.08099023 +0000 UTC m=+9204.190132728" Nov 21 12:14:19 crc kubenswrapper[4972]: I1121 12:14:19.435808 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 21 12:14:19 crc kubenswrapper[4972]: I1121 12:14:19.770565 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afc05f73-4c78-4650-8d71-9e49c2a7b624" path="/var/lib/kubelet/pods/afc05f73-4c78-4650-8d71-9e49c2a7b624/volumes" Nov 21 12:14:20 crc kubenswrapper[4972]: I1121 12:14:20.760817 4972 scope.go:117] "RemoveContainer" 
containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:14:20 crc kubenswrapper[4972]: E1121 12:14:20.761980 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:14:21 crc kubenswrapper[4972]: I1121 12:14:21.335503 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 21 12:14:22 crc kubenswrapper[4972]: I1121 12:14:22.191755 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 12:14:22 crc kubenswrapper[4972]: I1121 12:14:22.192271 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 12:14:24 crc kubenswrapper[4972]: I1121 12:14:24.435806 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 21 12:14:24 crc kubenswrapper[4972]: I1121 12:14:24.464488 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 21 12:14:24 crc kubenswrapper[4972]: I1121 12:14:24.464625 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 21 12:14:25 crc kubenswrapper[4972]: I1121 12:14:25.164439 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 21 12:14:27 crc kubenswrapper[4972]: I1121 12:14:27.034300 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 12:14:27 crc kubenswrapper[4972]: I1121 12:14:27.034362 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 12:14:27 crc kubenswrapper[4972]: I1121 12:14:27.191987 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 12:14:27 crc kubenswrapper[4972]: I1121 12:14:27.192042 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 12:14:28 crc kubenswrapper[4972]: I1121 12:14:28.116175 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2348570c-90b9-4153-98fa-49aea6529eb1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 12:14:28 crc kubenswrapper[4972]: I1121 12:14:28.117383 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2348570c-90b9-4153-98fa-49aea6529eb1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 12:14:28 crc kubenswrapper[4972]: I1121 12:14:28.275150 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.195:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 12:14:28 crc 
kubenswrapper[4972]: I1121 12:14:28.275462 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.195:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 12:14:35 crc kubenswrapper[4972]: I1121 12:14:35.768192 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:14:35 crc kubenswrapper[4972]: E1121 12:14:35.769192 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:14:37 crc kubenswrapper[4972]: I1121 12:14:37.037826 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 21 12:14:37 crc kubenswrapper[4972]: I1121 12:14:37.039193 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 21 12:14:37 crc kubenswrapper[4972]: I1121 12:14:37.039433 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 21 12:14:37 crc kubenswrapper[4972]: I1121 12:14:37.042663 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 21 12:14:37 crc kubenswrapper[4972]: I1121 12:14:37.194145 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 12:14:37 crc kubenswrapper[4972]: I1121 12:14:37.194644 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 12:14:37 crc kubenswrapper[4972]: I1121 12:14:37.196725 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 21 12:14:37 crc kubenswrapper[4972]: I1121 12:14:37.196934 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 21 12:14:37 crc kubenswrapper[4972]: I1121 12:14:37.265387 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 21 12:14:37 crc kubenswrapper[4972]: I1121 12:14:37.275959 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.221956 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w"] Nov 21 12:14:38 crc kubenswrapper[4972]: E1121 12:14:38.222676 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afc05f73-4c78-4650-8d71-9e49c2a7b624" containerName="registry-server" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.222696 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc05f73-4c78-4650-8d71-9e49c2a7b624" containerName="registry-server" Nov 21 12:14:38 crc kubenswrapper[4972]: E1121 12:14:38.222722 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afc05f73-4c78-4650-8d71-9e49c2a7b624" containerName="extract-content" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.222729 4972 
state_mem.go:107] "Deleted CPUSet assignment" podUID="afc05f73-4c78-4650-8d71-9e49c2a7b624" containerName="extract-content" Nov 21 12:14:38 crc kubenswrapper[4972]: E1121 12:14:38.222765 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afc05f73-4c78-4650-8d71-9e49c2a7b624" containerName="extract-utilities" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.222774 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc05f73-4c78-4650-8d71-9e49c2a7b624" containerName="extract-utilities" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.223001 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="afc05f73-4c78-4650-8d71-9e49c2a7b624" containerName="registry-server" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.223775 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.233404 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.233622 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.233747 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.233991 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.234146 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.234287 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.234427 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g4l5l" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.246279 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w"] Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.306231 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.306329 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrght\" (UniqueName: \"kubernetes.io/projected/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-kube-api-access-jrght\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.306357 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.306373 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.306476 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.306561 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.306639 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.306707 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.306794 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.306919 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cell1-combined-ca-bundle\") pod 
\"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.306989 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.408534 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.408632 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.408692 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.408779 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.408872 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.408911 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 
21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.409009 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.409117 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.409213 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrght\" (UniqueName: \"kubernetes.io/projected/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-kube-api-access-jrght\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.409251 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.409297 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.409607 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.414966 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.415879 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-migration-ssh-key-1\") pod 
\"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.416460 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.417847 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.417955 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.418552 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.418989 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.419513 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.420281 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.439120 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrght\" 
(UniqueName: \"kubernetes.io/projected/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-kube-api-access-jrght\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:38 crc kubenswrapper[4972]: I1121 12:14:38.550991 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:14:39 crc kubenswrapper[4972]: I1121 12:14:39.135224 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w"] Nov 21 12:14:39 crc kubenswrapper[4972]: W1121 12:14:39.159740 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee8b138b_b9f9_4dc2_be8d_a366f0077e9e.slice/crio-f9eda0fc79c25cd30a683090a438f1248d7a80f5e2fe1c236767e16ca95eff1a WatchSource:0}: Error finding container f9eda0fc79c25cd30a683090a438f1248d7a80f5e2fe1c236767e16ca95eff1a: Status 404 returned error can't find the container with id f9eda0fc79c25cd30a683090a438f1248d7a80f5e2fe1c236767e16ca95eff1a Nov 21 12:14:39 crc kubenswrapper[4972]: I1121 12:14:39.294669 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" event={"ID":"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e","Type":"ContainerStarted","Data":"f9eda0fc79c25cd30a683090a438f1248d7a80f5e2fe1c236767e16ca95eff1a"} Nov 21 12:14:39 crc kubenswrapper[4972]: I1121 12:14:39.854710 4972 scope.go:117] "RemoveContainer" containerID="2bb09ea1fb99562ae2245d7d80a0a482ded0dc19db5a08e39c55e1f6bb81df59" Nov 21 12:14:40 crc kubenswrapper[4972]: I1121 12:14:40.306267 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" event={"ID":"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e","Type":"ContainerStarted","Data":"088a1b1ce2dc71862789777c5e6837b31e87804d080ce9fb070c30283b5c4342"} Nov 21 12:14:40 crc kubenswrapper[4972]: I1121 12:14:40.332599 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" podStartSLOduration=1.895584798 podStartE2EDuration="2.332578798s" podCreationTimestamp="2025-11-21 12:14:38 +0000 UTC" firstStartedPulling="2025-11-21 12:14:39.163115013 +0000 UTC m=+9224.272257511" lastFinishedPulling="2025-11-21 12:14:39.600109023 +0000 UTC m=+9224.709251511" observedRunningTime="2025-11-21 12:14:40.327641839 +0000 UTC m=+9225.436784347" watchObservedRunningTime="2025-11-21 12:14:40.332578798 +0000 UTC m=+9225.441721306" Nov 21 12:14:50 crc kubenswrapper[4972]: I1121 12:14:50.759690 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:14:50 crc kubenswrapper[4972]: E1121 12:14:50.761003 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:15:00 crc kubenswrapper[4972]: I1121 12:15:00.145822 4972 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395455-xjxtk"] Nov 21 12:15:00 crc kubenswrapper[4972]: I1121 12:15:00.148216 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395455-xjxtk" Nov 21 12:15:00 crc kubenswrapper[4972]: I1121 12:15:00.153813 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 12:15:00 crc kubenswrapper[4972]: I1121 12:15:00.154001 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 12:15:00 crc kubenswrapper[4972]: I1121 12:15:00.156149 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395455-xjxtk"] Nov 21 12:15:00 crc kubenswrapper[4972]: I1121 12:15:00.236623 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjvhx\" (UniqueName: \"kubernetes.io/projected/e00e2a81-81e4-4bf6-9201-cd41d9582c67-kube-api-access-sjvhx\") pod \"collect-profiles-29395455-xjxtk\" (UID: \"e00e2a81-81e4-4bf6-9201-cd41d9582c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395455-xjxtk" Nov 21 12:15:00 crc kubenswrapper[4972]: I1121 12:15:00.236758 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e00e2a81-81e4-4bf6-9201-cd41d9582c67-secret-volume\") pod \"collect-profiles-29395455-xjxtk\" (UID: \"e00e2a81-81e4-4bf6-9201-cd41d9582c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395455-xjxtk" Nov 21 12:15:00 crc kubenswrapper[4972]: I1121 12:15:00.237110 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e00e2a81-81e4-4bf6-9201-cd41d9582c67-config-volume\") pod \"collect-profiles-29395455-xjxtk\" (UID: \"e00e2a81-81e4-4bf6-9201-cd41d9582c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395455-xjxtk" Nov 21 12:15:00 crc kubenswrapper[4972]: I1121 12:15:00.339477 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e00e2a81-81e4-4bf6-9201-cd41d9582c67-config-volume\") pod \"collect-profiles-29395455-xjxtk\" (UID: \"e00e2a81-81e4-4bf6-9201-cd41d9582c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395455-xjxtk" Nov 21 12:15:00 crc kubenswrapper[4972]: I1121 12:15:00.339559 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjvhx\" (UniqueName: \"kubernetes.io/projected/e00e2a81-81e4-4bf6-9201-cd41d9582c67-kube-api-access-sjvhx\") pod \"collect-profiles-29395455-xjxtk\" (UID: \"e00e2a81-81e4-4bf6-9201-cd41d9582c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395455-xjxtk" Nov 21 12:15:00 crc kubenswrapper[4972]: I1121 12:15:00.339636 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e00e2a81-81e4-4bf6-9201-cd41d9582c67-secret-volume\") pod \"collect-profiles-29395455-xjxtk\" (UID: \"e00e2a81-81e4-4bf6-9201-cd41d9582c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395455-xjxtk" Nov 21 12:15:00 
crc kubenswrapper[4972]: I1121 12:15:00.340790 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e00e2a81-81e4-4bf6-9201-cd41d9582c67-config-volume\") pod \"collect-profiles-29395455-xjxtk\" (UID: \"e00e2a81-81e4-4bf6-9201-cd41d9582c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395455-xjxtk" Nov 21 12:15:00 crc kubenswrapper[4972]: I1121 12:15:00.345841 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e00e2a81-81e4-4bf6-9201-cd41d9582c67-secret-volume\") pod \"collect-profiles-29395455-xjxtk\" (UID: \"e00e2a81-81e4-4bf6-9201-cd41d9582c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395455-xjxtk" Nov 21 12:15:00 crc kubenswrapper[4972]: I1121 12:15:00.360937 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjvhx\" (UniqueName: \"kubernetes.io/projected/e00e2a81-81e4-4bf6-9201-cd41d9582c67-kube-api-access-sjvhx\") pod \"collect-profiles-29395455-xjxtk\" (UID: \"e00e2a81-81e4-4bf6-9201-cd41d9582c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395455-xjxtk" Nov 21 12:15:00 crc kubenswrapper[4972]: I1121 12:15:00.470274 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395455-xjxtk" Nov 21 12:15:00 crc kubenswrapper[4972]: I1121 12:15:00.930154 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395455-xjxtk"] Nov 21 12:15:01 crc kubenswrapper[4972]: I1121 12:15:01.513444 4972 generic.go:334] "Generic (PLEG): container finished" podID="e00e2a81-81e4-4bf6-9201-cd41d9582c67" containerID="ce6746484963d399bc4ddd864e2885661677ca5bdc044691cdaae7418e146cbb" exitCode=0 Nov 21 12:15:01 crc kubenswrapper[4972]: I1121 12:15:01.513666 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395455-xjxtk" event={"ID":"e00e2a81-81e4-4bf6-9201-cd41d9582c67","Type":"ContainerDied","Data":"ce6746484963d399bc4ddd864e2885661677ca5bdc044691cdaae7418e146cbb"} Nov 21 12:15:01 crc kubenswrapper[4972]: I1121 12:15:01.513817 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395455-xjxtk" event={"ID":"e00e2a81-81e4-4bf6-9201-cd41d9582c67","Type":"ContainerStarted","Data":"7df671f97da8951907c10d02c9d61ddf0c8fcc3a9ea85696bba3c5a588448200"} Nov 21 12:15:02 crc kubenswrapper[4972]: I1121 12:15:02.974758 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395455-xjxtk" Nov 21 12:15:03 crc kubenswrapper[4972]: I1121 12:15:03.102982 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e00e2a81-81e4-4bf6-9201-cd41d9582c67-config-volume\") pod \"e00e2a81-81e4-4bf6-9201-cd41d9582c67\" (UID: \"e00e2a81-81e4-4bf6-9201-cd41d9582c67\") " Nov 21 12:15:03 crc kubenswrapper[4972]: I1121 12:15:03.103122 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e00e2a81-81e4-4bf6-9201-cd41d9582c67-secret-volume\") pod \"e00e2a81-81e4-4bf6-9201-cd41d9582c67\" (UID: \"e00e2a81-81e4-4bf6-9201-cd41d9582c67\") " Nov 21 12:15:03 crc kubenswrapper[4972]: I1121 12:15:03.103151 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjvhx\" (UniqueName: \"kubernetes.io/projected/e00e2a81-81e4-4bf6-9201-cd41d9582c67-kube-api-access-sjvhx\") pod \"e00e2a81-81e4-4bf6-9201-cd41d9582c67\" (UID: \"e00e2a81-81e4-4bf6-9201-cd41d9582c67\") " Nov 21 12:15:03 crc kubenswrapper[4972]: I1121 12:15:03.103963 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e00e2a81-81e4-4bf6-9201-cd41d9582c67-config-volume" (OuterVolumeSpecName: "config-volume") pod "e00e2a81-81e4-4bf6-9201-cd41d9582c67" (UID: "e00e2a81-81e4-4bf6-9201-cd41d9582c67"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 12:15:03 crc kubenswrapper[4972]: I1121 12:15:03.109430 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e00e2a81-81e4-4bf6-9201-cd41d9582c67-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e00e2a81-81e4-4bf6-9201-cd41d9582c67" (UID: "e00e2a81-81e4-4bf6-9201-cd41d9582c67"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:15:03 crc kubenswrapper[4972]: I1121 12:15:03.109699 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e00e2a81-81e4-4bf6-9201-cd41d9582c67-kube-api-access-sjvhx" (OuterVolumeSpecName: "kube-api-access-sjvhx") pod "e00e2a81-81e4-4bf6-9201-cd41d9582c67" (UID: "e00e2a81-81e4-4bf6-9201-cd41d9582c67"). InnerVolumeSpecName "kube-api-access-sjvhx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:15:03 crc kubenswrapper[4972]: I1121 12:15:03.205868 4972 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e00e2a81-81e4-4bf6-9201-cd41d9582c67-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 12:15:03 crc kubenswrapper[4972]: I1121 12:15:03.206101 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjvhx\" (UniqueName: \"kubernetes.io/projected/e00e2a81-81e4-4bf6-9201-cd41d9582c67-kube-api-access-sjvhx\") on node \"crc\" DevicePath \"\"" Nov 21 12:15:03 crc kubenswrapper[4972]: I1121 12:15:03.206178 4972 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e00e2a81-81e4-4bf6-9201-cd41d9582c67-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 12:15:03 crc kubenswrapper[4972]: I1121 12:15:03.542762 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395455-xjxtk" event={"ID":"e00e2a81-81e4-4bf6-9201-cd41d9582c67","Type":"ContainerDied","Data":"7df671f97da8951907c10d02c9d61ddf0c8fcc3a9ea85696bba3c5a588448200"} Nov 21 12:15:03 crc kubenswrapper[4972]: I1121 12:15:03.542798 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7df671f97da8951907c10d02c9d61ddf0c8fcc3a9ea85696bba3c5a588448200" Nov 21 12:15:03 crc kubenswrapper[4972]: I1121 12:15:03.542879 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395455-xjxtk" Nov 21 12:15:03 crc kubenswrapper[4972]: I1121 12:15:03.760100 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:15:03 crc kubenswrapper[4972]: E1121 12:15:03.760741 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:15:04 crc kubenswrapper[4972]: I1121 12:15:04.052007 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c"] Nov 21 12:15:04 crc kubenswrapper[4972]: I1121 12:15:04.064063 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395410-z8h5c"] Nov 21 12:15:05 crc kubenswrapper[4972]: I1121 12:15:05.772441 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e54c8a44-a05f-4a74-8d3c-b8e74844033e" path="/var/lib/kubelet/pods/e54c8a44-a05f-4a74-8d3c-b8e74844033e/volumes" Nov 21 12:15:16 crc kubenswrapper[4972]: I1121 12:15:16.759794 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:15:16 crc kubenswrapper[4972]: E1121 12:15:16.760572 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:15:31 crc kubenswrapper[4972]: I1121 12:15:31.760043 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:15:31 crc kubenswrapper[4972]: E1121 12:15:31.760867 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:15:39 crc kubenswrapper[4972]: I1121 12:15:39.980800 4972 scope.go:117] "RemoveContainer" containerID="d0588c395de5b47c6d16f8907b229dc00a9c7388b590fbc33b3d2420db1e3a29" Nov 21 12:15:46 crc kubenswrapper[4972]: I1121 12:15:46.773268 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:15:46 crc kubenswrapper[4972]: E1121 12:15:46.784383 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:15:59 crc kubenswrapper[4972]: I1121 12:15:59.759536 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:15:59 crc kubenswrapper[4972]: E1121 12:15:59.760592 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:16:11 crc kubenswrapper[4972]: I1121 12:16:11.760067 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:16:11 crc kubenswrapper[4972]: E1121 12:16:11.760985 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:16:24 crc kubenswrapper[4972]: I1121 12:16:24.760707 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:16:24 crc kubenswrapper[4972]: E1121 12:16:24.762206 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:16:39 crc kubenswrapper[4972]: I1121 12:16:39.760161 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:16:39 crc kubenswrapper[4972]: E1121 12:16:39.761127 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:16:51 crc kubenswrapper[4972]: I1121 12:16:51.760671 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:16:51 crc kubenswrapper[4972]: E1121 12:16:51.761658 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:17:04 crc kubenswrapper[4972]: I1121 12:17:04.758995 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:17:04 crc kubenswrapper[4972]: E1121 12:17:04.760029 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:17:16 crc kubenswrapper[4972]: I1121 12:17:16.759910 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:17:16 crc kubenswrapper[4972]: E1121 12:17:16.760958 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:17:28 crc kubenswrapper[4972]: I1121 12:17:28.759704 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:17:28 crc kubenswrapper[4972]: E1121 12:17:28.760511 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:17:40 crc kubenswrapper[4972]: I1121 12:17:40.059133 4972 
scope.go:117] "RemoveContainer" containerID="63a4f14dd9d37bbb3ac90656d88393243b87d5b2e62b171ad3a6d4e5edba2346" Nov 21 12:17:40 crc kubenswrapper[4972]: I1121 12:17:40.086204 4972 scope.go:117] "RemoveContainer" containerID="fc946f4c005543398f4a8cfb5e91f20dd823844fe35f005c0f1944e595f571c2" Nov 21 12:17:40 crc kubenswrapper[4972]: I1121 12:17:40.133316 4972 scope.go:117] "RemoveContainer" containerID="38c7f7cbae9a468f9de0e836cfec8d467dec81198810a682bfc8a2735e8fe050" Nov 21 12:17:42 crc kubenswrapper[4972]: I1121 12:17:42.759761 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:17:42 crc kubenswrapper[4972]: E1121 12:17:42.760457 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:17:55 crc kubenswrapper[4972]: I1121 12:17:55.769606 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:17:55 crc kubenswrapper[4972]: E1121 12:17:55.770529 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:18:08 crc kubenswrapper[4972]: I1121 12:18:08.760950 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:18:08 crc kubenswrapper[4972]: E1121 12:18:08.762110 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:18:21 crc kubenswrapper[4972]: I1121 12:18:21.760139 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:18:21 crc kubenswrapper[4972]: E1121 12:18:21.760871 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:18:34 crc kubenswrapper[4972]: I1121 12:18:34.765188 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:18:34 crc kubenswrapper[4972]: E1121 12:18:34.765957 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:18:45 crc kubenswrapper[4972]: I1121 12:18:45.767643 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:18:45 crc kubenswrapper[4972]: E1121 12:18:45.768610 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:18:59 crc kubenswrapper[4972]: I1121 12:18:59.760109 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:19:00 crc kubenswrapper[4972]: I1121 12:19:00.091380 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"019de17bd234fdb2e5375ae3b63fffc29c9876c0b1467405e993de03653eb3a4"} Nov 21 12:20:36 crc kubenswrapper[4972]: I1121 12:20:36.602786 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ds4km"] Nov 21 12:20:36 crc kubenswrapper[4972]: E1121 12:20:36.604605 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00e2a81-81e4-4bf6-9201-cd41d9582c67" containerName="collect-profiles" Nov 21 12:20:36 crc kubenswrapper[4972]: I1121 12:20:36.604627 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00e2a81-81e4-4bf6-9201-cd41d9582c67" containerName="collect-profiles" Nov 21 12:20:36 crc kubenswrapper[4972]: I1121 12:20:36.605010 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="e00e2a81-81e4-4bf6-9201-cd41d9582c67" containerName="collect-profiles" Nov 21 12:20:36 crc kubenswrapper[4972]: I1121 12:20:36.607420 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ds4km" Nov 21 12:20:36 crc kubenswrapper[4972]: I1121 12:20:36.619919 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ds4km"] Nov 21 12:20:36 crc kubenswrapper[4972]: I1121 12:20:36.683275 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/631b60d9-ae64-4986-856c-0e366e765e14-utilities\") pod \"redhat-operators-ds4km\" (UID: \"631b60d9-ae64-4986-856c-0e366e765e14\") " pod="openshift-marketplace/redhat-operators-ds4km" Nov 21 12:20:36 crc kubenswrapper[4972]: I1121 12:20:36.683541 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47k48\" (UniqueName: \"kubernetes.io/projected/631b60d9-ae64-4986-856c-0e366e765e14-kube-api-access-47k48\") pod \"redhat-operators-ds4km\" (UID: \"631b60d9-ae64-4986-856c-0e366e765e14\") " pod="openshift-marketplace/redhat-operators-ds4km" Nov 21 12:20:36 crc kubenswrapper[4972]: I1121 12:20:36.683614 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/631b60d9-ae64-4986-856c-0e366e765e14-catalog-content\") pod \"redhat-operators-ds4km\" (UID: \"631b60d9-ae64-4986-856c-0e366e765e14\") " pod="openshift-marketplace/redhat-operators-ds4km" Nov 21 12:20:36 crc kubenswrapper[4972]: I1121 12:20:36.785708 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/631b60d9-ae64-4986-856c-0e366e765e14-utilities\") pod \"redhat-operators-ds4km\" (UID: \"631b60d9-ae64-4986-856c-0e366e765e14\") " pod="openshift-marketplace/redhat-operators-ds4km" Nov 21 12:20:36 crc kubenswrapper[4972]: I1121 12:20:36.785810 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47k48\" (UniqueName: \"kubernetes.io/projected/631b60d9-ae64-4986-856c-0e366e765e14-kube-api-access-47k48\") pod \"redhat-operators-ds4km\" (UID: \"631b60d9-ae64-4986-856c-0e366e765e14\") " pod="openshift-marketplace/redhat-operators-ds4km" Nov 21 12:20:36 crc kubenswrapper[4972]: I1121 12:20:36.785963 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/631b60d9-ae64-4986-856c-0e366e765e14-catalog-content\") pod \"redhat-operators-ds4km\" (UID: \"631b60d9-ae64-4986-856c-0e366e765e14\") " pod="openshift-marketplace/redhat-operators-ds4km" Nov 21 12:20:36 crc kubenswrapper[4972]: I1121 12:20:36.786147 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/631b60d9-ae64-4986-856c-0e366e765e14-utilities\") pod \"redhat-operators-ds4km\" (UID: \"631b60d9-ae64-4986-856c-0e366e765e14\") " pod="openshift-marketplace/redhat-operators-ds4km" Nov 21 12:20:36 crc kubenswrapper[4972]: I1121 12:20:36.786706 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/631b60d9-ae64-4986-856c-0e366e765e14-catalog-content\") pod \"redhat-operators-ds4km\" (UID: \"631b60d9-ae64-4986-856c-0e366e765e14\") " pod="openshift-marketplace/redhat-operators-ds4km" Nov 21 12:20:36 crc kubenswrapper[4972]: I1121 12:20:36.808590 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-47k48\" (UniqueName: \"kubernetes.io/projected/631b60d9-ae64-4986-856c-0e366e765e14-kube-api-access-47k48\") pod \"redhat-operators-ds4km\" (UID: \"631b60d9-ae64-4986-856c-0e366e765e14\") " pod="openshift-marketplace/redhat-operators-ds4km" Nov 21 12:20:36 crc kubenswrapper[4972]: I1121 12:20:36.945774 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ds4km" Nov 21 12:20:37 crc kubenswrapper[4972]: I1121 12:20:37.478148 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ds4km"] Nov 21 12:20:37 crc kubenswrapper[4972]: W1121 12:20:37.861313 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod631b60d9_ae64_4986_856c_0e366e765e14.slice/crio-c447a0c67cb6f3b25affbfc4603ebee1d4296aa66020b832843d49f2ee63c41a WatchSource:0}: Error finding container c447a0c67cb6f3b25affbfc4603ebee1d4296aa66020b832843d49f2ee63c41a: Status 404 returned error can't find the container with id c447a0c67cb6f3b25affbfc4603ebee1d4296aa66020b832843d49f2ee63c41a Nov 21 12:20:38 crc kubenswrapper[4972]: I1121 12:20:38.148546 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ds4km" event={"ID":"631b60d9-ae64-4986-856c-0e366e765e14","Type":"ContainerStarted","Data":"c447a0c67cb6f3b25affbfc4603ebee1d4296aa66020b832843d49f2ee63c41a"} Nov 21 12:20:39 crc kubenswrapper[4972]: I1121 12:20:39.161126 4972 generic.go:334] "Generic (PLEG): container finished" podID="631b60d9-ae64-4986-856c-0e366e765e14" containerID="b830b8bf20cb30f8b8e95e6a3324f92fead1aaad03b5c1af5fb4bc352f674523" exitCode=0 Nov 21 12:20:39 crc kubenswrapper[4972]: I1121 12:20:39.161729 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ds4km" event={"ID":"631b60d9-ae64-4986-856c-0e366e765e14","Type":"ContainerDied","Data":"b830b8bf20cb30f8b8e95e6a3324f92fead1aaad03b5c1af5fb4bc352f674523"} Nov 21 12:20:39 crc kubenswrapper[4972]: I1121 12:20:39.164537 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 12:20:40 crc kubenswrapper[4972]: I1121 12:20:40.175437 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ds4km" event={"ID":"631b60d9-ae64-4986-856c-0e366e765e14","Type":"ContainerStarted","Data":"7071fe548c483977c3c117f70be39f3fd86dbc2535772e49b035fd429c56467d"} Nov 21 12:20:40 crc kubenswrapper[4972]: I1121 12:20:40.253549 4972 scope.go:117] "RemoveContainer" containerID="f8f1a9a0683c9e9f151446d99d325e7570c9ba457d966936fbca68e7c8bc29ff" Nov 21 12:20:40 crc kubenswrapper[4972]: I1121 12:20:40.277309 4972 scope.go:117] "RemoveContainer" containerID="e6b997ead5364f907f3712f1fb460193638cbc05b6e22c085256639277f0ee4b" Nov 21 12:20:40 crc kubenswrapper[4972]: I1121 12:20:40.333915 4972 scope.go:117] "RemoveContainer" containerID="06980068e08e979ef05eab1e8b47ce6bf011f42983a888acc6cacc82f2c35ad5" Nov 21 12:20:47 crc kubenswrapper[4972]: I1121 12:20:47.257432 4972 generic.go:334] "Generic (PLEG): container finished" podID="631b60d9-ae64-4986-856c-0e366e765e14" containerID="7071fe548c483977c3c117f70be39f3fd86dbc2535772e49b035fd429c56467d" exitCode=0 Nov 21 12:20:47 crc kubenswrapper[4972]: I1121 12:20:47.257518 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ds4km" 
event={"ID":"631b60d9-ae64-4986-856c-0e366e765e14","Type":"ContainerDied","Data":"7071fe548c483977c3c117f70be39f3fd86dbc2535772e49b035fd429c56467d"} Nov 21 12:20:48 crc kubenswrapper[4972]: I1121 12:20:48.276110 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ds4km" event={"ID":"631b60d9-ae64-4986-856c-0e366e765e14","Type":"ContainerStarted","Data":"c6146f9e4b1ed09b4a5650ec76450e19ede878ad68b446211fccddfe88209e70"} Nov 21 12:20:48 crc kubenswrapper[4972]: I1121 12:20:48.303997 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ds4km" podStartSLOduration=3.743253461 podStartE2EDuration="12.303968364s" podCreationTimestamp="2025-11-21 12:20:36 +0000 UTC" firstStartedPulling="2025-11-21 12:20:39.164277674 +0000 UTC m=+9584.273420162" lastFinishedPulling="2025-11-21 12:20:47.724992567 +0000 UTC m=+9592.834135065" observedRunningTime="2025-11-21 12:20:48.296957318 +0000 UTC m=+9593.406099846" watchObservedRunningTime="2025-11-21 12:20:48.303968364 +0000 UTC m=+9593.413110862" Nov 21 12:20:56 crc kubenswrapper[4972]: I1121 12:20:56.946383 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ds4km" Nov 21 12:20:56 crc kubenswrapper[4972]: I1121 12:20:56.948265 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ds4km" Nov 21 12:20:57 crc kubenswrapper[4972]: I1121 12:20:57.992526 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ds4km" podUID="631b60d9-ae64-4986-856c-0e366e765e14" containerName="registry-server" probeResult="failure" output=< Nov 21 12:20:57 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 12:20:57 crc kubenswrapper[4972]: > Nov 21 12:20:58 crc kubenswrapper[4972]: I1121 12:20:58.347730 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f4mdn"] Nov 21 12:20:58 crc kubenswrapper[4972]: I1121 12:20:58.349916 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f4mdn" Nov 21 12:20:58 crc kubenswrapper[4972]: I1121 12:20:58.361531 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f4mdn"] Nov 21 12:20:58 crc kubenswrapper[4972]: I1121 12:20:58.448887 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66027d9-27d5-483c-9bbb-6e8208d49d8b-catalog-content\") pod \"community-operators-f4mdn\" (UID: \"f66027d9-27d5-483c-9bbb-6e8208d49d8b\") " pod="openshift-marketplace/community-operators-f4mdn" Nov 21 12:20:58 crc kubenswrapper[4972]: I1121 12:20:58.448955 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czbbm\" (UniqueName: \"kubernetes.io/projected/f66027d9-27d5-483c-9bbb-6e8208d49d8b-kube-api-access-czbbm\") pod \"community-operators-f4mdn\" (UID: \"f66027d9-27d5-483c-9bbb-6e8208d49d8b\") " pod="openshift-marketplace/community-operators-f4mdn" Nov 21 12:20:58 crc kubenswrapper[4972]: I1121 12:20:58.448996 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66027d9-27d5-483c-9bbb-6e8208d49d8b-utilities\") pod \"community-operators-f4mdn\" (UID: \"f66027d9-27d5-483c-9bbb-6e8208d49d8b\") " pod="openshift-marketplace/community-operators-f4mdn" Nov 21 12:20:58 crc kubenswrapper[4972]: I1121 12:20:58.549676 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czbbm\" (UniqueName: \"kubernetes.io/projected/f66027d9-27d5-483c-9bbb-6e8208d49d8b-kube-api-access-czbbm\") pod \"community-operators-f4mdn\" (UID: \"f66027d9-27d5-483c-9bbb-6e8208d49d8b\") " pod="openshift-marketplace/community-operators-f4mdn" Nov 21 12:20:58 crc kubenswrapper[4972]: I1121 12:20:58.549737 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66027d9-27d5-483c-9bbb-6e8208d49d8b-utilities\") pod \"community-operators-f4mdn\" (UID: \"f66027d9-27d5-483c-9bbb-6e8208d49d8b\") " pod="openshift-marketplace/community-operators-f4mdn" Nov 21 12:20:58 crc kubenswrapper[4972]: I1121 12:20:58.549926 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66027d9-27d5-483c-9bbb-6e8208d49d8b-catalog-content\") pod \"community-operators-f4mdn\" (UID: \"f66027d9-27d5-483c-9bbb-6e8208d49d8b\") " pod="openshift-marketplace/community-operators-f4mdn" Nov 21 12:20:58 crc kubenswrapper[4972]: I1121 12:20:58.550282 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66027d9-27d5-483c-9bbb-6e8208d49d8b-utilities\") pod \"community-operators-f4mdn\" (UID: \"f66027d9-27d5-483c-9bbb-6e8208d49d8b\") " pod="openshift-marketplace/community-operators-f4mdn" Nov 21 12:20:58 crc kubenswrapper[4972]: I1121 12:20:58.550404 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66027d9-27d5-483c-9bbb-6e8208d49d8b-catalog-content\") pod \"community-operators-f4mdn\" (UID: \"f66027d9-27d5-483c-9bbb-6e8208d49d8b\") " pod="openshift-marketplace/community-operators-f4mdn" Nov 21 12:20:58 crc kubenswrapper[4972]: I1121 12:20:58.570037 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-czbbm\" (UniqueName: \"kubernetes.io/projected/f66027d9-27d5-483c-9bbb-6e8208d49d8b-kube-api-access-czbbm\") pod \"community-operators-f4mdn\" (UID: \"f66027d9-27d5-483c-9bbb-6e8208d49d8b\") " pod="openshift-marketplace/community-operators-f4mdn" Nov 21 12:20:58 crc kubenswrapper[4972]: I1121 12:20:58.728683 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f4mdn" Nov 21 12:20:59 crc kubenswrapper[4972]: I1121 12:20:59.272808 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f4mdn"] Nov 21 12:20:59 crc kubenswrapper[4972]: W1121 12:20:59.453105 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf66027d9_27d5_483c_9bbb_6e8208d49d8b.slice/crio-664a26285184c981bab5ba7e04b813b8f0327a1944a60f9e89c5cddd5e45ab9d WatchSource:0}: Error finding container 664a26285184c981bab5ba7e04b813b8f0327a1944a60f9e89c5cddd5e45ab9d: Status 404 returned error can't find the container with id 664a26285184c981bab5ba7e04b813b8f0327a1944a60f9e89c5cddd5e45ab9d Nov 21 12:21:00 crc kubenswrapper[4972]: I1121 12:21:00.393616 4972 generic.go:334] "Generic (PLEG): container finished" podID="f66027d9-27d5-483c-9bbb-6e8208d49d8b" containerID="5e6e659cbb2e72344875eb069bdcd70f4452a074408a2b905803ec09de9dd24d" exitCode=0 Nov 21 12:21:00 crc kubenswrapper[4972]: I1121 12:21:00.393729 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4mdn" event={"ID":"f66027d9-27d5-483c-9bbb-6e8208d49d8b","Type":"ContainerDied","Data":"5e6e659cbb2e72344875eb069bdcd70f4452a074408a2b905803ec09de9dd24d"} Nov 21 12:21:00 crc kubenswrapper[4972]: I1121 12:21:00.393964 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4mdn" event={"ID":"f66027d9-27d5-483c-9bbb-6e8208d49d8b","Type":"ContainerStarted","Data":"664a26285184c981bab5ba7e04b813b8f0327a1944a60f9e89c5cddd5e45ab9d"} Nov 21 12:21:02 crc kubenswrapper[4972]: I1121 12:21:02.415681 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4mdn" event={"ID":"f66027d9-27d5-483c-9bbb-6e8208d49d8b","Type":"ContainerStarted","Data":"334a7cd127a81f661337c253b67c9685919234d19558210321303126d0fdcb9e"} Nov 21 12:21:03 crc kubenswrapper[4972]: I1121 12:21:03.731150 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9c8jg"] Nov 21 12:21:03 crc kubenswrapper[4972]: I1121 12:21:03.734206 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c8jg" Nov 21 12:21:03 crc kubenswrapper[4972]: I1121 12:21:03.748363 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c8jg"] Nov 21 12:21:03 crc kubenswrapper[4972]: I1121 12:21:03.761299 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bdbe6a0-24bc-4156-92cc-f3c2a579a01e-utilities\") pod \"redhat-marketplace-9c8jg\" (UID: \"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e\") " pod="openshift-marketplace/redhat-marketplace-9c8jg" Nov 21 12:21:03 crc kubenswrapper[4972]: I1121 12:21:03.761356 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bdbe6a0-24bc-4156-92cc-f3c2a579a01e-catalog-content\") pod \"redhat-marketplace-9c8jg\" (UID: \"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e\") " pod="openshift-marketplace/redhat-marketplace-9c8jg" Nov 21 12:21:03 crc kubenswrapper[4972]: I1121 12:21:03.761429 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sndh9\" (UniqueName: \"kubernetes.io/projected/9bdbe6a0-24bc-4156-92cc-f3c2a579a01e-kube-api-access-sndh9\") pod \"redhat-marketplace-9c8jg\" (UID: \"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e\") " pod="openshift-marketplace/redhat-marketplace-9c8jg" Nov 21 12:21:03 crc kubenswrapper[4972]: I1121 12:21:03.864681 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bdbe6a0-24bc-4156-92cc-f3c2a579a01e-utilities\") pod \"redhat-marketplace-9c8jg\" (UID: \"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e\") " pod="openshift-marketplace/redhat-marketplace-9c8jg" Nov 21 12:21:03 crc kubenswrapper[4972]: I1121 12:21:03.865415 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bdbe6a0-24bc-4156-92cc-f3c2a579a01e-catalog-content\") pod \"redhat-marketplace-9c8jg\" (UID: \"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e\") " pod="openshift-marketplace/redhat-marketplace-9c8jg" Nov 21 12:21:03 crc kubenswrapper[4972]: I1121 12:21:03.865354 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bdbe6a0-24bc-4156-92cc-f3c2a579a01e-utilities\") pod \"redhat-marketplace-9c8jg\" (UID: \"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e\") " pod="openshift-marketplace/redhat-marketplace-9c8jg" Nov 21 12:21:03 crc kubenswrapper[4972]: I1121 12:21:03.865507 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sndh9\" (UniqueName: \"kubernetes.io/projected/9bdbe6a0-24bc-4156-92cc-f3c2a579a01e-kube-api-access-sndh9\") pod \"redhat-marketplace-9c8jg\" (UID: \"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e\") " pod="openshift-marketplace/redhat-marketplace-9c8jg" Nov 21 12:21:03 crc kubenswrapper[4972]: I1121 12:21:03.865902 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bdbe6a0-24bc-4156-92cc-f3c2a579a01e-catalog-content\") pod \"redhat-marketplace-9c8jg\" (UID: \"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e\") " pod="openshift-marketplace/redhat-marketplace-9c8jg" Nov 21 12:21:03 crc kubenswrapper[4972]: I1121 12:21:03.884712 4972 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-sndh9\" (UniqueName: \"kubernetes.io/projected/9bdbe6a0-24bc-4156-92cc-f3c2a579a01e-kube-api-access-sndh9\") pod \"redhat-marketplace-9c8jg\" (UID: \"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e\") " pod="openshift-marketplace/redhat-marketplace-9c8jg" Nov 21 12:21:04 crc kubenswrapper[4972]: I1121 12:21:04.058878 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c8jg" Nov 21 12:21:04 crc kubenswrapper[4972]: W1121 12:21:04.599519 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bdbe6a0_24bc_4156_92cc_f3c2a579a01e.slice/crio-c5665d8475bd8098b1ca0fbed89473ba61517ff64179d05b673e7dd806e82474 WatchSource:0}: Error finding container c5665d8475bd8098b1ca0fbed89473ba61517ff64179d05b673e7dd806e82474: Status 404 returned error can't find the container with id c5665d8475bd8098b1ca0fbed89473ba61517ff64179d05b673e7dd806e82474 Nov 21 12:21:04 crc kubenswrapper[4972]: I1121 12:21:04.601479 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c8jg"] Nov 21 12:21:05 crc kubenswrapper[4972]: I1121 12:21:05.453248 4972 generic.go:334] "Generic (PLEG): container finished" podID="9bdbe6a0-24bc-4156-92cc-f3c2a579a01e" containerID="6c58a077ce5478beef24e83e0492912af78da89d89b239d91c73e638ab9b5a86" exitCode=0 Nov 21 12:21:05 crc kubenswrapper[4972]: I1121 12:21:05.453320 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c8jg" event={"ID":"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e","Type":"ContainerDied","Data":"6c58a077ce5478beef24e83e0492912af78da89d89b239d91c73e638ab9b5a86"} Nov 21 12:21:05 crc kubenswrapper[4972]: I1121 12:21:05.453347 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c8jg" event={"ID":"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e","Type":"ContainerStarted","Data":"c5665d8475bd8098b1ca0fbed89473ba61517ff64179d05b673e7dd806e82474"} Nov 21 12:21:05 crc kubenswrapper[4972]: I1121 12:21:05.457501 4972 generic.go:334] "Generic (PLEG): container finished" podID="f66027d9-27d5-483c-9bbb-6e8208d49d8b" containerID="334a7cd127a81f661337c253b67c9685919234d19558210321303126d0fdcb9e" exitCode=0 Nov 21 12:21:05 crc kubenswrapper[4972]: I1121 12:21:05.457581 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4mdn" event={"ID":"f66027d9-27d5-483c-9bbb-6e8208d49d8b","Type":"ContainerDied","Data":"334a7cd127a81f661337c253b67c9685919234d19558210321303126d0fdcb9e"} Nov 21 12:21:06 crc kubenswrapper[4972]: I1121 12:21:06.471008 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c8jg" event={"ID":"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e","Type":"ContainerStarted","Data":"705a647db5053650f01d0b61cd1925e8d837e39671ef60365cccf8daca48de68"} Nov 21 12:21:06 crc kubenswrapper[4972]: I1121 12:21:06.474000 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4mdn" event={"ID":"f66027d9-27d5-483c-9bbb-6e8208d49d8b","Type":"ContainerStarted","Data":"d2ebdbff659685cd0acb9d6b75d2c05d7b055833e7c3b039c27fd3ad519d3954"} Nov 21 12:21:06 crc kubenswrapper[4972]: I1121 12:21:06.527173 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f4mdn" podStartSLOduration=3.075230526 
podStartE2EDuration="8.527154326s" podCreationTimestamp="2025-11-21 12:20:58 +0000 UTC" firstStartedPulling="2025-11-21 12:21:00.395231406 +0000 UTC m=+9605.504373904" lastFinishedPulling="2025-11-21 12:21:05.847155196 +0000 UTC m=+9610.956297704" observedRunningTime="2025-11-21 12:21:06.511536103 +0000 UTC m=+9611.620678621" watchObservedRunningTime="2025-11-21 12:21:06.527154326 +0000 UTC m=+9611.636296824" Nov 21 12:21:07 crc kubenswrapper[4972]: I1121 12:21:07.486533 4972 generic.go:334] "Generic (PLEG): container finished" podID="9bdbe6a0-24bc-4156-92cc-f3c2a579a01e" containerID="705a647db5053650f01d0b61cd1925e8d837e39671ef60365cccf8daca48de68" exitCode=0 Nov 21 12:21:07 crc kubenswrapper[4972]: I1121 12:21:07.486659 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c8jg" event={"ID":"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e","Type":"ContainerDied","Data":"705a647db5053650f01d0b61cd1925e8d837e39671ef60365cccf8daca48de68"} Nov 21 12:21:08 crc kubenswrapper[4972]: I1121 12:21:08.011204 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ds4km" podUID="631b60d9-ae64-4986-856c-0e366e765e14" containerName="registry-server" probeResult="failure" output=< Nov 21 12:21:08 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 12:21:08 crc kubenswrapper[4972]: > Nov 21 12:21:08 crc kubenswrapper[4972]: I1121 12:21:08.500082 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c8jg" event={"ID":"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e","Type":"ContainerStarted","Data":"3e10c481d20e317e450474d30536f7aac3b01ed44eb942bb2787665097cc598c"} Nov 21 12:21:08 crc kubenswrapper[4972]: I1121 12:21:08.536609 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9c8jg" podStartSLOduration=3.022864046 podStartE2EDuration="5.536589079s" podCreationTimestamp="2025-11-21 12:21:03 +0000 UTC" firstStartedPulling="2025-11-21 12:21:05.455884308 +0000 UTC m=+9610.565026796" lastFinishedPulling="2025-11-21 12:21:07.969609331 +0000 UTC m=+9613.078751829" observedRunningTime="2025-11-21 12:21:08.524249993 +0000 UTC m=+9613.633392491" watchObservedRunningTime="2025-11-21 12:21:08.536589079 +0000 UTC m=+9613.645731567" Nov 21 12:21:08 crc kubenswrapper[4972]: I1121 12:21:08.729264 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f4mdn" Nov 21 12:21:08 crc kubenswrapper[4972]: I1121 12:21:08.729333 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f4mdn" Nov 21 12:21:09 crc kubenswrapper[4972]: I1121 12:21:09.785498 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-f4mdn" podUID="f66027d9-27d5-483c-9bbb-6e8208d49d8b" containerName="registry-server" probeResult="failure" output=< Nov 21 12:21:09 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 12:21:09 crc kubenswrapper[4972]: > Nov 21 12:21:14 crc kubenswrapper[4972]: I1121 12:21:14.059480 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9c8jg" Nov 21 12:21:14 crc kubenswrapper[4972]: I1121 12:21:14.060198 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9c8jg" Nov 21 12:21:14 
crc kubenswrapper[4972]: I1121 12:21:14.128027 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9c8jg" Nov 21 12:21:14 crc kubenswrapper[4972]: I1121 12:21:14.671964 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9c8jg" Nov 21 12:21:14 crc kubenswrapper[4972]: I1121 12:21:14.719965 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c8jg"] Nov 21 12:21:16 crc kubenswrapper[4972]: I1121 12:21:16.603214 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9c8jg" podUID="9bdbe6a0-24bc-4156-92cc-f3c2a579a01e" containerName="registry-server" containerID="cri-o://3e10c481d20e317e450474d30536f7aac3b01ed44eb942bb2787665097cc598c" gracePeriod=2 Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.223143 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c8jg" Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.391418 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bdbe6a0-24bc-4156-92cc-f3c2a579a01e-catalog-content\") pod \"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e\" (UID: \"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e\") " Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.391685 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bdbe6a0-24bc-4156-92cc-f3c2a579a01e-utilities\") pod \"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e\" (UID: \"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e\") " Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.391920 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sndh9\" (UniqueName: \"kubernetes.io/projected/9bdbe6a0-24bc-4156-92cc-f3c2a579a01e-kube-api-access-sndh9\") pod \"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e\" (UID: \"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e\") " Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.393451 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bdbe6a0-24bc-4156-92cc-f3c2a579a01e-utilities" (OuterVolumeSpecName: "utilities") pod "9bdbe6a0-24bc-4156-92cc-f3c2a579a01e" (UID: "9bdbe6a0-24bc-4156-92cc-f3c2a579a01e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.398525 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bdbe6a0-24bc-4156-92cc-f3c2a579a01e-kube-api-access-sndh9" (OuterVolumeSpecName: "kube-api-access-sndh9") pod "9bdbe6a0-24bc-4156-92cc-f3c2a579a01e" (UID: "9bdbe6a0-24bc-4156-92cc-f3c2a579a01e"). InnerVolumeSpecName "kube-api-access-sndh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.412919 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bdbe6a0-24bc-4156-92cc-f3c2a579a01e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9bdbe6a0-24bc-4156-92cc-f3c2a579a01e" (UID: "9bdbe6a0-24bc-4156-92cc-f3c2a579a01e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.495061 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bdbe6a0-24bc-4156-92cc-f3c2a579a01e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.495114 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bdbe6a0-24bc-4156-92cc-f3c2a579a01e-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.495124 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sndh9\" (UniqueName: \"kubernetes.io/projected/9bdbe6a0-24bc-4156-92cc-f3c2a579a01e-kube-api-access-sndh9\") on node \"crc\" DevicePath \"\"" Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.614647 4972 generic.go:334] "Generic (PLEG): container finished" podID="9bdbe6a0-24bc-4156-92cc-f3c2a579a01e" containerID="3e10c481d20e317e450474d30536f7aac3b01ed44eb942bb2787665097cc598c" exitCode=0 Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.614689 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c8jg" event={"ID":"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e","Type":"ContainerDied","Data":"3e10c481d20e317e450474d30536f7aac3b01ed44eb942bb2787665097cc598c"} Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.614700 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c8jg" Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.614716 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c8jg" event={"ID":"9bdbe6a0-24bc-4156-92cc-f3c2a579a01e","Type":"ContainerDied","Data":"c5665d8475bd8098b1ca0fbed89473ba61517ff64179d05b673e7dd806e82474"} Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.614735 4972 scope.go:117] "RemoveContainer" containerID="3e10c481d20e317e450474d30536f7aac3b01ed44eb942bb2787665097cc598c" Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.652970 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c8jg"] Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.662547 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c8jg"] Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.662774 4972 scope.go:117] "RemoveContainer" containerID="705a647db5053650f01d0b61cd1925e8d837e39671ef60365cccf8daca48de68" Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.695945 4972 scope.go:117] "RemoveContainer" containerID="6c58a077ce5478beef24e83e0492912af78da89d89b239d91c73e638ab9b5a86" Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.742538 4972 scope.go:117] "RemoveContainer" containerID="3e10c481d20e317e450474d30536f7aac3b01ed44eb942bb2787665097cc598c" Nov 21 12:21:17 crc kubenswrapper[4972]: E1121 12:21:17.743602 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e10c481d20e317e450474d30536f7aac3b01ed44eb942bb2787665097cc598c\": container with ID starting with 3e10c481d20e317e450474d30536f7aac3b01ed44eb942bb2787665097cc598c not found: ID does not exist" containerID="3e10c481d20e317e450474d30536f7aac3b01ed44eb942bb2787665097cc598c" Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.743643 4972 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e10c481d20e317e450474d30536f7aac3b01ed44eb942bb2787665097cc598c"} err="failed to get container status \"3e10c481d20e317e450474d30536f7aac3b01ed44eb942bb2787665097cc598c\": rpc error: code = NotFound desc = could not find container \"3e10c481d20e317e450474d30536f7aac3b01ed44eb942bb2787665097cc598c\": container with ID starting with 3e10c481d20e317e450474d30536f7aac3b01ed44eb942bb2787665097cc598c not found: ID does not exist" Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.743668 4972 scope.go:117] "RemoveContainer" containerID="705a647db5053650f01d0b61cd1925e8d837e39671ef60365cccf8daca48de68" Nov 21 12:21:17 crc kubenswrapper[4972]: E1121 12:21:17.744010 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"705a647db5053650f01d0b61cd1925e8d837e39671ef60365cccf8daca48de68\": container with ID starting with 705a647db5053650f01d0b61cd1925e8d837e39671ef60365cccf8daca48de68 not found: ID does not exist" containerID="705a647db5053650f01d0b61cd1925e8d837e39671ef60365cccf8daca48de68" Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.744067 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"705a647db5053650f01d0b61cd1925e8d837e39671ef60365cccf8daca48de68"} err="failed to get container status \"705a647db5053650f01d0b61cd1925e8d837e39671ef60365cccf8daca48de68\": rpc error: code = NotFound desc = could not find container \"705a647db5053650f01d0b61cd1925e8d837e39671ef60365cccf8daca48de68\": container with ID starting with 705a647db5053650f01d0b61cd1925e8d837e39671ef60365cccf8daca48de68 not found: ID does not exist" Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.744079 4972 scope.go:117] "RemoveContainer" containerID="6c58a077ce5478beef24e83e0492912af78da89d89b239d91c73e638ab9b5a86" Nov 21 12:21:17 crc kubenswrapper[4972]: E1121 12:21:17.744675 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c58a077ce5478beef24e83e0492912af78da89d89b239d91c73e638ab9b5a86\": container with ID starting with 6c58a077ce5478beef24e83e0492912af78da89d89b239d91c73e638ab9b5a86 not found: ID does not exist" containerID="6c58a077ce5478beef24e83e0492912af78da89d89b239d91c73e638ab9b5a86" Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.744719 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c58a077ce5478beef24e83e0492912af78da89d89b239d91c73e638ab9b5a86"} err="failed to get container status \"6c58a077ce5478beef24e83e0492912af78da89d89b239d91c73e638ab9b5a86\": rpc error: code = NotFound desc = could not find container \"6c58a077ce5478beef24e83e0492912af78da89d89b239d91c73e638ab9b5a86\": container with ID starting with 6c58a077ce5478beef24e83e0492912af78da89d89b239d91c73e638ab9b5a86 not found: ID does not exist" Nov 21 12:21:17 crc kubenswrapper[4972]: I1121 12:21:17.778210 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bdbe6a0-24bc-4156-92cc-f3c2a579a01e" path="/var/lib/kubelet/pods/9bdbe6a0-24bc-4156-92cc-f3c2a579a01e/volumes" Nov 21 12:21:18 crc kubenswrapper[4972]: I1121 12:21:18.026775 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ds4km" podUID="631b60d9-ae64-4986-856c-0e366e765e14" containerName="registry-server" probeResult="failure" output=< Nov 21 12:21:18 crc 
kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 12:21:18 crc kubenswrapper[4972]: > Nov 21 12:21:19 crc kubenswrapper[4972]: I1121 12:21:19.776969 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-f4mdn" podUID="f66027d9-27d5-483c-9bbb-6e8208d49d8b" containerName="registry-server" probeResult="failure" output=< Nov 21 12:21:19 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 12:21:19 crc kubenswrapper[4972]: > Nov 21 12:21:26 crc kubenswrapper[4972]: I1121 12:21:26.178934 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 12:21:26 crc kubenswrapper[4972]: I1121 12:21:26.179731 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 12:21:27 crc kubenswrapper[4972]: I1121 12:21:27.001092 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ds4km" Nov 21 12:21:27 crc kubenswrapper[4972]: I1121 12:21:27.051571 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ds4km" Nov 21 12:21:27 crc kubenswrapper[4972]: I1121 12:21:27.247098 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ds4km"] Nov 21 12:21:28 crc kubenswrapper[4972]: I1121 12:21:28.756102 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ds4km" podUID="631b60d9-ae64-4986-856c-0e366e765e14" containerName="registry-server" containerID="cri-o://c6146f9e4b1ed09b4a5650ec76450e19ede878ad68b446211fccddfe88209e70" gracePeriod=2 Nov 21 12:21:28 crc kubenswrapper[4972]: I1121 12:21:28.800964 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f4mdn" Nov 21 12:21:28 crc kubenswrapper[4972]: I1121 12:21:28.877902 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f4mdn" Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.254053 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ds4km" Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.372663 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/631b60d9-ae64-4986-856c-0e366e765e14-catalog-content\") pod \"631b60d9-ae64-4986-856c-0e366e765e14\" (UID: \"631b60d9-ae64-4986-856c-0e366e765e14\") " Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.373005 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47k48\" (UniqueName: \"kubernetes.io/projected/631b60d9-ae64-4986-856c-0e366e765e14-kube-api-access-47k48\") pod \"631b60d9-ae64-4986-856c-0e366e765e14\" (UID: \"631b60d9-ae64-4986-856c-0e366e765e14\") " Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.373048 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/631b60d9-ae64-4986-856c-0e366e765e14-utilities\") pod \"631b60d9-ae64-4986-856c-0e366e765e14\" (UID: \"631b60d9-ae64-4986-856c-0e366e765e14\") " Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.373973 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/631b60d9-ae64-4986-856c-0e366e765e14-utilities" (OuterVolumeSpecName: "utilities") pod "631b60d9-ae64-4986-856c-0e366e765e14" (UID: "631b60d9-ae64-4986-856c-0e366e765e14"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.379848 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/631b60d9-ae64-4986-856c-0e366e765e14-kube-api-access-47k48" (OuterVolumeSpecName: "kube-api-access-47k48") pod "631b60d9-ae64-4986-856c-0e366e765e14" (UID: "631b60d9-ae64-4986-856c-0e366e765e14"). InnerVolumeSpecName "kube-api-access-47k48". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.467759 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/631b60d9-ae64-4986-856c-0e366e765e14-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "631b60d9-ae64-4986-856c-0e366e765e14" (UID: "631b60d9-ae64-4986-856c-0e366e765e14"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.475048 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47k48\" (UniqueName: \"kubernetes.io/projected/631b60d9-ae64-4986-856c-0e366e765e14-kube-api-access-47k48\") on node \"crc\" DevicePath \"\"" Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.475078 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/631b60d9-ae64-4986-856c-0e366e765e14-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.475088 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/631b60d9-ae64-4986-856c-0e366e765e14-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.768797 4972 generic.go:334] "Generic (PLEG): container finished" podID="631b60d9-ae64-4986-856c-0e366e765e14" containerID="c6146f9e4b1ed09b4a5650ec76450e19ede878ad68b446211fccddfe88209e70" exitCode=0 Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.768931 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ds4km" Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.770689 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ds4km" event={"ID":"631b60d9-ae64-4986-856c-0e366e765e14","Type":"ContainerDied","Data":"c6146f9e4b1ed09b4a5650ec76450e19ede878ad68b446211fccddfe88209e70"} Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.770731 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ds4km" event={"ID":"631b60d9-ae64-4986-856c-0e366e765e14","Type":"ContainerDied","Data":"c447a0c67cb6f3b25affbfc4603ebee1d4296aa66020b832843d49f2ee63c41a"} Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.770755 4972 scope.go:117] "RemoveContainer" containerID="c6146f9e4b1ed09b4a5650ec76450e19ede878ad68b446211fccddfe88209e70" Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.796146 4972 scope.go:117] "RemoveContainer" containerID="7071fe548c483977c3c117f70be39f3fd86dbc2535772e49b035fd429c56467d" Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.808128 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ds4km"] Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.821985 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ds4km"] Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.826000 4972 scope.go:117] "RemoveContainer" containerID="b830b8bf20cb30f8b8e95e6a3324f92fead1aaad03b5c1af5fb4bc352f674523" Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.844350 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f4mdn"] Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.881376 4972 scope.go:117] "RemoveContainer" containerID="c6146f9e4b1ed09b4a5650ec76450e19ede878ad68b446211fccddfe88209e70" Nov 21 12:21:29 crc kubenswrapper[4972]: E1121 12:21:29.881805 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6146f9e4b1ed09b4a5650ec76450e19ede878ad68b446211fccddfe88209e70\": container with ID starting with c6146f9e4b1ed09b4a5650ec76450e19ede878ad68b446211fccddfe88209e70 not found: ID 
does not exist" containerID="c6146f9e4b1ed09b4a5650ec76450e19ede878ad68b446211fccddfe88209e70" Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.881852 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6146f9e4b1ed09b4a5650ec76450e19ede878ad68b446211fccddfe88209e70"} err="failed to get container status \"c6146f9e4b1ed09b4a5650ec76450e19ede878ad68b446211fccddfe88209e70\": rpc error: code = NotFound desc = could not find container \"c6146f9e4b1ed09b4a5650ec76450e19ede878ad68b446211fccddfe88209e70\": container with ID starting with c6146f9e4b1ed09b4a5650ec76450e19ede878ad68b446211fccddfe88209e70 not found: ID does not exist" Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.881880 4972 scope.go:117] "RemoveContainer" containerID="7071fe548c483977c3c117f70be39f3fd86dbc2535772e49b035fd429c56467d" Nov 21 12:21:29 crc kubenswrapper[4972]: E1121 12:21:29.882394 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7071fe548c483977c3c117f70be39f3fd86dbc2535772e49b035fd429c56467d\": container with ID starting with 7071fe548c483977c3c117f70be39f3fd86dbc2535772e49b035fd429c56467d not found: ID does not exist" containerID="7071fe548c483977c3c117f70be39f3fd86dbc2535772e49b035fd429c56467d" Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.882416 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7071fe548c483977c3c117f70be39f3fd86dbc2535772e49b035fd429c56467d"} err="failed to get container status \"7071fe548c483977c3c117f70be39f3fd86dbc2535772e49b035fd429c56467d\": rpc error: code = NotFound desc = could not find container \"7071fe548c483977c3c117f70be39f3fd86dbc2535772e49b035fd429c56467d\": container with ID starting with 7071fe548c483977c3c117f70be39f3fd86dbc2535772e49b035fd429c56467d not found: ID does not exist" Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.882433 4972 scope.go:117] "RemoveContainer" containerID="b830b8bf20cb30f8b8e95e6a3324f92fead1aaad03b5c1af5fb4bc352f674523" Nov 21 12:21:29 crc kubenswrapper[4972]: E1121 12:21:29.882786 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b830b8bf20cb30f8b8e95e6a3324f92fead1aaad03b5c1af5fb4bc352f674523\": container with ID starting with b830b8bf20cb30f8b8e95e6a3324f92fead1aaad03b5c1af5fb4bc352f674523 not found: ID does not exist" containerID="b830b8bf20cb30f8b8e95e6a3324f92fead1aaad03b5c1af5fb4bc352f674523" Nov 21 12:21:29 crc kubenswrapper[4972]: I1121 12:21:29.882853 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b830b8bf20cb30f8b8e95e6a3324f92fead1aaad03b5c1af5fb4bc352f674523"} err="failed to get container status \"b830b8bf20cb30f8b8e95e6a3324f92fead1aaad03b5c1af5fb4bc352f674523\": rpc error: code = NotFound desc = could not find container \"b830b8bf20cb30f8b8e95e6a3324f92fead1aaad03b5c1af5fb4bc352f674523\": container with ID starting with b830b8bf20cb30f8b8e95e6a3324f92fead1aaad03b5c1af5fb4bc352f674523 not found: ID does not exist" Nov 21 12:21:30 crc kubenswrapper[4972]: I1121 12:21:30.779745 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f4mdn" podUID="f66027d9-27d5-483c-9bbb-6e8208d49d8b" containerName="registry-server" containerID="cri-o://d2ebdbff659685cd0acb9d6b75d2c05d7b055833e7c3b039c27fd3ad519d3954" gracePeriod=2 Nov 21 12:21:31 crc 
kubenswrapper[4972]: I1121 12:21:31.369113 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f4mdn" Nov 21 12:21:31 crc kubenswrapper[4972]: I1121 12:21:31.518455 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czbbm\" (UniqueName: \"kubernetes.io/projected/f66027d9-27d5-483c-9bbb-6e8208d49d8b-kube-api-access-czbbm\") pod \"f66027d9-27d5-483c-9bbb-6e8208d49d8b\" (UID: \"f66027d9-27d5-483c-9bbb-6e8208d49d8b\") " Nov 21 12:21:31 crc kubenswrapper[4972]: I1121 12:21:31.518662 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66027d9-27d5-483c-9bbb-6e8208d49d8b-catalog-content\") pod \"f66027d9-27d5-483c-9bbb-6e8208d49d8b\" (UID: \"f66027d9-27d5-483c-9bbb-6e8208d49d8b\") " Nov 21 12:21:31 crc kubenswrapper[4972]: I1121 12:21:31.518793 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66027d9-27d5-483c-9bbb-6e8208d49d8b-utilities\") pod \"f66027d9-27d5-483c-9bbb-6e8208d49d8b\" (UID: \"f66027d9-27d5-483c-9bbb-6e8208d49d8b\") " Nov 21 12:21:31 crc kubenswrapper[4972]: I1121 12:21:31.519427 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f66027d9-27d5-483c-9bbb-6e8208d49d8b-utilities" (OuterVolumeSpecName: "utilities") pod "f66027d9-27d5-483c-9bbb-6e8208d49d8b" (UID: "f66027d9-27d5-483c-9bbb-6e8208d49d8b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:21:31 crc kubenswrapper[4972]: I1121 12:21:31.524017 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f66027d9-27d5-483c-9bbb-6e8208d49d8b-kube-api-access-czbbm" (OuterVolumeSpecName: "kube-api-access-czbbm") pod "f66027d9-27d5-483c-9bbb-6e8208d49d8b" (UID: "f66027d9-27d5-483c-9bbb-6e8208d49d8b"). InnerVolumeSpecName "kube-api-access-czbbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:21:31 crc kubenswrapper[4972]: I1121 12:21:31.602091 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f66027d9-27d5-483c-9bbb-6e8208d49d8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f66027d9-27d5-483c-9bbb-6e8208d49d8b" (UID: "f66027d9-27d5-483c-9bbb-6e8208d49d8b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:21:31 crc kubenswrapper[4972]: I1121 12:21:31.621593 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66027d9-27d5-483c-9bbb-6e8208d49d8b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 12:21:31 crc kubenswrapper[4972]: I1121 12:21:31.621631 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66027d9-27d5-483c-9bbb-6e8208d49d8b-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 12:21:31 crc kubenswrapper[4972]: I1121 12:21:31.621641 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czbbm\" (UniqueName: \"kubernetes.io/projected/f66027d9-27d5-483c-9bbb-6e8208d49d8b-kube-api-access-czbbm\") on node \"crc\" DevicePath \"\"" Nov 21 12:21:31 crc kubenswrapper[4972]: I1121 12:21:31.770209 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="631b60d9-ae64-4986-856c-0e366e765e14" path="/var/lib/kubelet/pods/631b60d9-ae64-4986-856c-0e366e765e14/volumes" Nov 21 12:21:31 crc kubenswrapper[4972]: I1121 12:21:31.790870 4972 generic.go:334] "Generic (PLEG): container finished" podID="f66027d9-27d5-483c-9bbb-6e8208d49d8b" containerID="d2ebdbff659685cd0acb9d6b75d2c05d7b055833e7c3b039c27fd3ad519d3954" exitCode=0 Nov 21 12:21:31 crc kubenswrapper[4972]: I1121 12:21:31.790913 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f4mdn" Nov 21 12:21:31 crc kubenswrapper[4972]: I1121 12:21:31.790922 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4mdn" event={"ID":"f66027d9-27d5-483c-9bbb-6e8208d49d8b","Type":"ContainerDied","Data":"d2ebdbff659685cd0acb9d6b75d2c05d7b055833e7c3b039c27fd3ad519d3954"} Nov 21 12:21:31 crc kubenswrapper[4972]: I1121 12:21:31.790956 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4mdn" event={"ID":"f66027d9-27d5-483c-9bbb-6e8208d49d8b","Type":"ContainerDied","Data":"664a26285184c981bab5ba7e04b813b8f0327a1944a60f9e89c5cddd5e45ab9d"} Nov 21 12:21:31 crc kubenswrapper[4972]: I1121 12:21:31.790979 4972 scope.go:117] "RemoveContainer" containerID="d2ebdbff659685cd0acb9d6b75d2c05d7b055833e7c3b039c27fd3ad519d3954" Nov 21 12:21:31 crc kubenswrapper[4972]: I1121 12:21:31.824867 4972 scope.go:117] "RemoveContainer" containerID="334a7cd127a81f661337c253b67c9685919234d19558210321303126d0fdcb9e" Nov 21 12:21:31 crc kubenswrapper[4972]: I1121 12:21:31.836009 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f4mdn"] Nov 21 12:21:31 crc kubenswrapper[4972]: I1121 12:21:31.846953 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f4mdn"] Nov 21 12:21:32 crc kubenswrapper[4972]: I1121 12:21:32.257003 4972 scope.go:117] "RemoveContainer" containerID="5e6e659cbb2e72344875eb069bdcd70f4452a074408a2b905803ec09de9dd24d" Nov 21 12:21:32 crc kubenswrapper[4972]: I1121 12:21:32.302086 4972 scope.go:117] "RemoveContainer" containerID="d2ebdbff659685cd0acb9d6b75d2c05d7b055833e7c3b039c27fd3ad519d3954" Nov 21 12:21:32 crc kubenswrapper[4972]: E1121 12:21:32.302468 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2ebdbff659685cd0acb9d6b75d2c05d7b055833e7c3b039c27fd3ad519d3954\": container with ID 
starting with d2ebdbff659685cd0acb9d6b75d2c05d7b055833e7c3b039c27fd3ad519d3954 not found: ID does not exist" containerID="d2ebdbff659685cd0acb9d6b75d2c05d7b055833e7c3b039c27fd3ad519d3954" Nov 21 12:21:32 crc kubenswrapper[4972]: I1121 12:21:32.302498 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2ebdbff659685cd0acb9d6b75d2c05d7b055833e7c3b039c27fd3ad519d3954"} err="failed to get container status \"d2ebdbff659685cd0acb9d6b75d2c05d7b055833e7c3b039c27fd3ad519d3954\": rpc error: code = NotFound desc = could not find container \"d2ebdbff659685cd0acb9d6b75d2c05d7b055833e7c3b039c27fd3ad519d3954\": container with ID starting with d2ebdbff659685cd0acb9d6b75d2c05d7b055833e7c3b039c27fd3ad519d3954 not found: ID does not exist" Nov 21 12:21:32 crc kubenswrapper[4972]: I1121 12:21:32.302520 4972 scope.go:117] "RemoveContainer" containerID="334a7cd127a81f661337c253b67c9685919234d19558210321303126d0fdcb9e" Nov 21 12:21:32 crc kubenswrapper[4972]: E1121 12:21:32.302844 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"334a7cd127a81f661337c253b67c9685919234d19558210321303126d0fdcb9e\": container with ID starting with 334a7cd127a81f661337c253b67c9685919234d19558210321303126d0fdcb9e not found: ID does not exist" containerID="334a7cd127a81f661337c253b67c9685919234d19558210321303126d0fdcb9e" Nov 21 12:21:32 crc kubenswrapper[4972]: I1121 12:21:32.302902 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"334a7cd127a81f661337c253b67c9685919234d19558210321303126d0fdcb9e"} err="failed to get container status \"334a7cd127a81f661337c253b67c9685919234d19558210321303126d0fdcb9e\": rpc error: code = NotFound desc = could not find container \"334a7cd127a81f661337c253b67c9685919234d19558210321303126d0fdcb9e\": container with ID starting with 334a7cd127a81f661337c253b67c9685919234d19558210321303126d0fdcb9e not found: ID does not exist" Nov 21 12:21:32 crc kubenswrapper[4972]: I1121 12:21:32.302940 4972 scope.go:117] "RemoveContainer" containerID="5e6e659cbb2e72344875eb069bdcd70f4452a074408a2b905803ec09de9dd24d" Nov 21 12:21:32 crc kubenswrapper[4972]: E1121 12:21:32.303260 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e6e659cbb2e72344875eb069bdcd70f4452a074408a2b905803ec09de9dd24d\": container with ID starting with 5e6e659cbb2e72344875eb069bdcd70f4452a074408a2b905803ec09de9dd24d not found: ID does not exist" containerID="5e6e659cbb2e72344875eb069bdcd70f4452a074408a2b905803ec09de9dd24d" Nov 21 12:21:32 crc kubenswrapper[4972]: I1121 12:21:32.303286 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e6e659cbb2e72344875eb069bdcd70f4452a074408a2b905803ec09de9dd24d"} err="failed to get container status \"5e6e659cbb2e72344875eb069bdcd70f4452a074408a2b905803ec09de9dd24d\": rpc error: code = NotFound desc = could not find container \"5e6e659cbb2e72344875eb069bdcd70f4452a074408a2b905803ec09de9dd24d\": container with ID starting with 5e6e659cbb2e72344875eb069bdcd70f4452a074408a2b905803ec09de9dd24d not found: ID does not exist" Nov 21 12:21:33 crc kubenswrapper[4972]: I1121 12:21:33.777266 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f66027d9-27d5-483c-9bbb-6e8208d49d8b" path="/var/lib/kubelet/pods/f66027d9-27d5-483c-9bbb-6e8208d49d8b/volumes" Nov 21 12:21:56 crc kubenswrapper[4972]: I1121 
12:21:56.179489 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 12:21:56 crc kubenswrapper[4972]: I1121 12:21:56.180284 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 12:21:58 crc kubenswrapper[4972]: I1121 12:21:58.096516 4972 generic.go:334] "Generic (PLEG): container finished" podID="ee8b138b-b9f9-4dc2-be8d-a366f0077e9e" containerID="088a1b1ce2dc71862789777c5e6837b31e87804d080ce9fb070c30283b5c4342" exitCode=0 Nov 21 12:21:58 crc kubenswrapper[4972]: I1121 12:21:58.096743 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" event={"ID":"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e","Type":"ContainerDied","Data":"088a1b1ce2dc71862789777c5e6837b31e87804d080ce9fb070c30283b5c4342"} Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.617241 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.668999 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-migration-ssh-key-0\") pod \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.669049 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-migration-ssh-key-1\") pod \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.669094 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cells-global-config-1\") pod \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.669179 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cells-global-config-0\") pod \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.669236 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cell1-combined-ca-bundle\") pod \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.669288 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-ssh-key\") pod \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.669354 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-inventory\") pod \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.669461 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-ceph\") pod \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.669489 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrght\" (UniqueName: \"kubernetes.io/projected/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-kube-api-access-jrght\") pod \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.669530 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cell1-compute-config-1\") pod \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.669548 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cell1-compute-config-0\") pod \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\" (UID: \"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e\") " Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.676789 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e" (UID: "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.680253 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-ceph" (OuterVolumeSpecName: "ceph") pod "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e" (UID: "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.680974 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-kube-api-access-jrght" (OuterVolumeSpecName: "kube-api-access-jrght") pod "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e" (UID: "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e"). InnerVolumeSpecName "kube-api-access-jrght". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.704023 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e" (UID: "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.707409 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e" (UID: "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e"). InnerVolumeSpecName "nova-cells-global-config-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.708536 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e" (UID: "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.715383 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-inventory" (OuterVolumeSpecName: "inventory") pod "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e" (UID: "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.715726 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e" (UID: "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.716305 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e" (UID: "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.732068 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e" (UID: "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.736153 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e" (UID: "ee8b138b-b9f9-4dc2-be8d-a366f0077e9e"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.771223 4972 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.771250 4972 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.771260 4972 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.771270 4972 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.771280 4972 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.771289 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrght\" (UniqueName: \"kubernetes.io/projected/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-kube-api-access-jrght\") on node \"crc\" DevicePath \"\"" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.771299 4972 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.771309 4972 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.771317 4972 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.771325 4972 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 21 12:21:59 crc kubenswrapper[4972]: I1121 12:21:59.771333 4972 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: 
\"kubernetes.io/configmap/ee8b138b-b9f9-4dc2-be8d-a366f0077e9e-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\"" Nov 21 12:22:00 crc kubenswrapper[4972]: I1121 12:22:00.127090 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" event={"ID":"ee8b138b-b9f9-4dc2-be8d-a366f0077e9e","Type":"ContainerDied","Data":"f9eda0fc79c25cd30a683090a438f1248d7a80f5e2fe1c236767e16ca95eff1a"} Nov 21 12:22:00 crc kubenswrapper[4972]: I1121 12:22:00.127149 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9eda0fc79c25cd30a683090a438f1248d7a80f5e2fe1c236767e16ca95eff1a" Nov 21 12:22:00 crc kubenswrapper[4972]: I1121 12:22:00.127531 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w" Nov 21 12:22:26 crc kubenswrapper[4972]: I1121 12:22:26.178733 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 12:22:26 crc kubenswrapper[4972]: I1121 12:22:26.179355 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 12:22:26 crc kubenswrapper[4972]: I1121 12:22:26.179416 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 12:22:26 crc kubenswrapper[4972]: I1121 12:22:26.180319 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"019de17bd234fdb2e5375ae3b63fffc29c9876c0b1467405e993de03653eb3a4"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 12:22:26 crc kubenswrapper[4972]: I1121 12:22:26.180378 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://019de17bd234fdb2e5375ae3b63fffc29c9876c0b1467405e993de03653eb3a4" gracePeriod=600 Nov 21 12:22:26 crc kubenswrapper[4972]: I1121 12:22:26.414873 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="019de17bd234fdb2e5375ae3b63fffc29c9876c0b1467405e993de03653eb3a4" exitCode=0 Nov 21 12:22:26 crc kubenswrapper[4972]: I1121 12:22:26.414934 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"019de17bd234fdb2e5375ae3b63fffc29c9876c0b1467405e993de03653eb3a4"} Nov 21 12:22:26 crc kubenswrapper[4972]: I1121 12:22:26.414983 4972 scope.go:117] "RemoveContainer" containerID="d3cd66c4f1059eec593e58e1ca64e4f3a594f0132b247d357b1dcaee08982f66" Nov 21 12:22:27 crc kubenswrapper[4972]: I1121 12:22:27.427096 4972 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb"} Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 12:24:10.868236 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8czjn/must-gather-s4k4l"] Nov 21 12:24:10 crc kubenswrapper[4972]: E1121 12:24:10.869228 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66027d9-27d5-483c-9bbb-6e8208d49d8b" containerName="extract-content" Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 12:24:10.869241 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66027d9-27d5-483c-9bbb-6e8208d49d8b" containerName="extract-content" Nov 21 12:24:10 crc kubenswrapper[4972]: E1121 12:24:10.869263 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="631b60d9-ae64-4986-856c-0e366e765e14" containerName="extract-content" Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 12:24:10.869269 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="631b60d9-ae64-4986-856c-0e366e765e14" containerName="extract-content" Nov 21 12:24:10 crc kubenswrapper[4972]: E1121 12:24:10.869287 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bdbe6a0-24bc-4156-92cc-f3c2a579a01e" containerName="registry-server" Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 12:24:10.869293 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bdbe6a0-24bc-4156-92cc-f3c2a579a01e" containerName="registry-server" Nov 21 12:24:10 crc kubenswrapper[4972]: E1121 12:24:10.869312 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bdbe6a0-24bc-4156-92cc-f3c2a579a01e" containerName="extract-utilities" Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 12:24:10.869318 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bdbe6a0-24bc-4156-92cc-f3c2a579a01e" containerName="extract-utilities" Nov 21 12:24:10 crc kubenswrapper[4972]: E1121 12:24:10.869331 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bdbe6a0-24bc-4156-92cc-f3c2a579a01e" containerName="extract-content" Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 12:24:10.869336 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bdbe6a0-24bc-4156-92cc-f3c2a579a01e" containerName="extract-content" Nov 21 12:24:10 crc kubenswrapper[4972]: E1121 12:24:10.869345 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66027d9-27d5-483c-9bbb-6e8208d49d8b" containerName="registry-server" Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 12:24:10.869351 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66027d9-27d5-483c-9bbb-6e8208d49d8b" containerName="registry-server" Nov 21 12:24:10 crc kubenswrapper[4972]: E1121 12:24:10.869366 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8b138b-b9f9-4dc2-be8d-a366f0077e9e" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 12:24:10.869375 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8b138b-b9f9-4dc2-be8d-a366f0077e9e" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Nov 21 12:24:10 crc kubenswrapper[4972]: E1121 12:24:10.869386 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66027d9-27d5-483c-9bbb-6e8208d49d8b" containerName="extract-utilities" Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 
12:24:10.869391 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66027d9-27d5-483c-9bbb-6e8208d49d8b" containerName="extract-utilities" Nov 21 12:24:10 crc kubenswrapper[4972]: E1121 12:24:10.869409 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="631b60d9-ae64-4986-856c-0e366e765e14" containerName="registry-server" Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 12:24:10.869427 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="631b60d9-ae64-4986-856c-0e366e765e14" containerName="registry-server" Nov 21 12:24:10 crc kubenswrapper[4972]: E1121 12:24:10.869440 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="631b60d9-ae64-4986-856c-0e366e765e14" containerName="extract-utilities" Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 12:24:10.869447 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="631b60d9-ae64-4986-856c-0e366e765e14" containerName="extract-utilities" Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 12:24:10.869669 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="631b60d9-ae64-4986-856c-0e366e765e14" containerName="registry-server" Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 12:24:10.869695 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee8b138b-b9f9-4dc2-be8d-a366f0077e9e" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 12:24:10.869714 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bdbe6a0-24bc-4156-92cc-f3c2a579a01e" containerName="registry-server" Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 12:24:10.869734 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f66027d9-27d5-483c-9bbb-6e8208d49d8b" containerName="registry-server" Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 12:24:10.874000 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8czjn/must-gather-s4k4l" Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 12:24:10.878095 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-8czjn"/"default-dockercfg-6rtcg" Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 12:24:10.878115 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8czjn"/"openshift-service-ca.crt" Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 12:24:10.880031 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8czjn"/"kube-root-ca.crt" Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 12:24:10.882022 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8czjn/must-gather-s4k4l"] Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 12:24:10.967820 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m8vd\" (UniqueName: \"kubernetes.io/projected/ff1b4df7-ae09-4998-a055-37f9e720cf27-kube-api-access-6m8vd\") pod \"must-gather-s4k4l\" (UID: \"ff1b4df7-ae09-4998-a055-37f9e720cf27\") " pod="openshift-must-gather-8czjn/must-gather-s4k4l" Nov 21 12:24:10 crc kubenswrapper[4972]: I1121 12:24:10.968098 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ff1b4df7-ae09-4998-a055-37f9e720cf27-must-gather-output\") pod \"must-gather-s4k4l\" (UID: \"ff1b4df7-ae09-4998-a055-37f9e720cf27\") " pod="openshift-must-gather-8czjn/must-gather-s4k4l" Nov 21 12:24:11 crc kubenswrapper[4972]: I1121 12:24:11.070033 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m8vd\" (UniqueName: \"kubernetes.io/projected/ff1b4df7-ae09-4998-a055-37f9e720cf27-kube-api-access-6m8vd\") pod \"must-gather-s4k4l\" (UID: \"ff1b4df7-ae09-4998-a055-37f9e720cf27\") " pod="openshift-must-gather-8czjn/must-gather-s4k4l" Nov 21 12:24:11 crc kubenswrapper[4972]: I1121 12:24:11.070194 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ff1b4df7-ae09-4998-a055-37f9e720cf27-must-gather-output\") pod \"must-gather-s4k4l\" (UID: \"ff1b4df7-ae09-4998-a055-37f9e720cf27\") " pod="openshift-must-gather-8czjn/must-gather-s4k4l" Nov 21 12:24:11 crc kubenswrapper[4972]: I1121 12:24:11.070801 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ff1b4df7-ae09-4998-a055-37f9e720cf27-must-gather-output\") pod \"must-gather-s4k4l\" (UID: \"ff1b4df7-ae09-4998-a055-37f9e720cf27\") " pod="openshift-must-gather-8czjn/must-gather-s4k4l" Nov 21 12:24:11 crc kubenswrapper[4972]: I1121 12:24:11.098454 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m8vd\" (UniqueName: \"kubernetes.io/projected/ff1b4df7-ae09-4998-a055-37f9e720cf27-kube-api-access-6m8vd\") pod \"must-gather-s4k4l\" (UID: \"ff1b4df7-ae09-4998-a055-37f9e720cf27\") " pod="openshift-must-gather-8czjn/must-gather-s4k4l" Nov 21 12:24:11 crc kubenswrapper[4972]: I1121 12:24:11.200599 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8czjn/must-gather-s4k4l" Nov 21 12:24:11 crc kubenswrapper[4972]: I1121 12:24:11.827758 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8czjn/must-gather-s4k4l"] Nov 21 12:24:12 crc kubenswrapper[4972]: I1121 12:24:12.601306 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8czjn/must-gather-s4k4l" event={"ID":"ff1b4df7-ae09-4998-a055-37f9e720cf27","Type":"ContainerStarted","Data":"e9e56c9ded9f7a9b6fc7f43bc04a4b2a993999538bbf6f688e59c9e34520986e"} Nov 21 12:24:23 crc kubenswrapper[4972]: I1121 12:24:23.757894 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8czjn/must-gather-s4k4l" event={"ID":"ff1b4df7-ae09-4998-a055-37f9e720cf27","Type":"ContainerStarted","Data":"887ec4dfba30b660965cc14aca57290072b9a046eb2f888187e967ec8420481b"} Nov 21 12:24:23 crc kubenswrapper[4972]: I1121 12:24:23.777402 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8czjn/must-gather-s4k4l" event={"ID":"ff1b4df7-ae09-4998-a055-37f9e720cf27","Type":"ContainerStarted","Data":"3c6800bdd755e014bd5759e9e0b2dcfb90495aaaa848c0090c696378c4f7d087"} Nov 21 12:24:23 crc kubenswrapper[4972]: I1121 12:24:23.778320 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8czjn/must-gather-s4k4l" podStartSLOduration=2.7284008269999998 podStartE2EDuration="13.778299053s" podCreationTimestamp="2025-11-21 12:24:10 +0000 UTC" firstStartedPulling="2025-11-21 12:24:11.829687887 +0000 UTC m=+9796.938830385" lastFinishedPulling="2025-11-21 12:24:22.879586093 +0000 UTC m=+9807.988728611" observedRunningTime="2025-11-21 12:24:23.772741746 +0000 UTC m=+9808.881884264" watchObservedRunningTime="2025-11-21 12:24:23.778299053 +0000 UTC m=+9808.887441561" Nov 21 12:24:26 crc kubenswrapper[4972]: I1121 12:24:26.180076 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 12:24:26 crc kubenswrapper[4972]: I1121 12:24:26.180658 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 12:24:27 crc kubenswrapper[4972]: I1121 12:24:27.700302 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8czjn/crc-debug-rclb9"] Nov 21 12:24:27 crc kubenswrapper[4972]: I1121 12:24:27.702677 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8czjn/crc-debug-rclb9" Nov 21 12:24:27 crc kubenswrapper[4972]: I1121 12:24:27.757608 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6gbc\" (UniqueName: \"kubernetes.io/projected/76adac9e-3573-40b2-b5f0-80951545b0da-kube-api-access-w6gbc\") pod \"crc-debug-rclb9\" (UID: \"76adac9e-3573-40b2-b5f0-80951545b0da\") " pod="openshift-must-gather-8czjn/crc-debug-rclb9" Nov 21 12:24:27 crc kubenswrapper[4972]: I1121 12:24:27.757785 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76adac9e-3573-40b2-b5f0-80951545b0da-host\") pod \"crc-debug-rclb9\" (UID: \"76adac9e-3573-40b2-b5f0-80951545b0da\") " pod="openshift-must-gather-8czjn/crc-debug-rclb9" Nov 21 12:24:27 crc kubenswrapper[4972]: I1121 12:24:27.860403 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76adac9e-3573-40b2-b5f0-80951545b0da-host\") pod \"crc-debug-rclb9\" (UID: \"76adac9e-3573-40b2-b5f0-80951545b0da\") " pod="openshift-must-gather-8czjn/crc-debug-rclb9" Nov 21 12:24:27 crc kubenswrapper[4972]: I1121 12:24:27.860592 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6gbc\" (UniqueName: \"kubernetes.io/projected/76adac9e-3573-40b2-b5f0-80951545b0da-kube-api-access-w6gbc\") pod \"crc-debug-rclb9\" (UID: \"76adac9e-3573-40b2-b5f0-80951545b0da\") " pod="openshift-must-gather-8czjn/crc-debug-rclb9" Nov 21 12:24:27 crc kubenswrapper[4972]: I1121 12:24:27.860795 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76adac9e-3573-40b2-b5f0-80951545b0da-host\") pod \"crc-debug-rclb9\" (UID: \"76adac9e-3573-40b2-b5f0-80951545b0da\") " pod="openshift-must-gather-8czjn/crc-debug-rclb9" Nov 21 12:24:27 crc kubenswrapper[4972]: I1121 12:24:27.890009 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6gbc\" (UniqueName: \"kubernetes.io/projected/76adac9e-3573-40b2-b5f0-80951545b0da-kube-api-access-w6gbc\") pod \"crc-debug-rclb9\" (UID: \"76adac9e-3573-40b2-b5f0-80951545b0da\") " pod="openshift-must-gather-8czjn/crc-debug-rclb9" Nov 21 12:24:28 crc kubenswrapper[4972]: I1121 12:24:28.026581 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8czjn/crc-debug-rclb9" Nov 21 12:24:28 crc kubenswrapper[4972]: W1121 12:24:28.070970 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76adac9e_3573_40b2_b5f0_80951545b0da.slice/crio-b3dc8367a27f58a54ab9a395cd6f95c6141c05114e1a0c18e33d661c836512ed WatchSource:0}: Error finding container b3dc8367a27f58a54ab9a395cd6f95c6141c05114e1a0c18e33d661c836512ed: Status 404 returned error can't find the container with id b3dc8367a27f58a54ab9a395cd6f95c6141c05114e1a0c18e33d661c836512ed Nov 21 12:24:28 crc kubenswrapper[4972]: I1121 12:24:28.807963 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8czjn/crc-debug-rclb9" event={"ID":"76adac9e-3573-40b2-b5f0-80951545b0da","Type":"ContainerStarted","Data":"b3dc8367a27f58a54ab9a395cd6f95c6141c05114e1a0c18e33d661c836512ed"} Nov 21 12:24:41 crc kubenswrapper[4972]: I1121 12:24:41.958612 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8czjn/crc-debug-rclb9" event={"ID":"76adac9e-3573-40b2-b5f0-80951545b0da","Type":"ContainerStarted","Data":"f23c45ba579aaf98f64c9d10885592dd40fb02c3e93dbbd44fe8f565b07fdcd2"} Nov 21 12:24:41 crc kubenswrapper[4972]: I1121 12:24:41.973325 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8czjn/crc-debug-rclb9" podStartSLOduration=2.306602475 podStartE2EDuration="14.97330648s" podCreationTimestamp="2025-11-21 12:24:27 +0000 UTC" firstStartedPulling="2025-11-21 12:24:28.073157444 +0000 UTC m=+9813.182299942" lastFinishedPulling="2025-11-21 12:24:40.739861449 +0000 UTC m=+9825.849003947" observedRunningTime="2025-11-21 12:24:41.970904066 +0000 UTC m=+9827.080046584" watchObservedRunningTime="2025-11-21 12:24:41.97330648 +0000 UTC m=+9827.082448978" Nov 21 12:24:56 crc kubenswrapper[4972]: I1121 12:24:56.179090 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 12:24:56 crc kubenswrapper[4972]: I1121 12:24:56.179730 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 12:25:08 crc kubenswrapper[4972]: I1121 12:25:08.307700 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jvx4s"] Nov 21 12:25:08 crc kubenswrapper[4972]: I1121 12:25:08.310425 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jvx4s" Nov 21 12:25:08 crc kubenswrapper[4972]: I1121 12:25:08.338009 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/773899b9-d381-4d9d-8f49-8c9cd48acaf2-catalog-content\") pod \"certified-operators-jvx4s\" (UID: \"773899b9-d381-4d9d-8f49-8c9cd48acaf2\") " pod="openshift-marketplace/certified-operators-jvx4s" Nov 21 12:25:08 crc kubenswrapper[4972]: I1121 12:25:08.338455 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/773899b9-d381-4d9d-8f49-8c9cd48acaf2-utilities\") pod \"certified-operators-jvx4s\" (UID: \"773899b9-d381-4d9d-8f49-8c9cd48acaf2\") " pod="openshift-marketplace/certified-operators-jvx4s" Nov 21 12:25:08 crc kubenswrapper[4972]: I1121 12:25:08.338531 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9h5z\" (UniqueName: \"kubernetes.io/projected/773899b9-d381-4d9d-8f49-8c9cd48acaf2-kube-api-access-d9h5z\") pod \"certified-operators-jvx4s\" (UID: \"773899b9-d381-4d9d-8f49-8c9cd48acaf2\") " pod="openshift-marketplace/certified-operators-jvx4s" Nov 21 12:25:08 crc kubenswrapper[4972]: I1121 12:25:08.361692 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jvx4s"] Nov 21 12:25:08 crc kubenswrapper[4972]: I1121 12:25:08.440986 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/773899b9-d381-4d9d-8f49-8c9cd48acaf2-catalog-content\") pod \"certified-operators-jvx4s\" (UID: \"773899b9-d381-4d9d-8f49-8c9cd48acaf2\") " pod="openshift-marketplace/certified-operators-jvx4s" Nov 21 12:25:08 crc kubenswrapper[4972]: I1121 12:25:08.441261 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/773899b9-d381-4d9d-8f49-8c9cd48acaf2-utilities\") pod \"certified-operators-jvx4s\" (UID: \"773899b9-d381-4d9d-8f49-8c9cd48acaf2\") " pod="openshift-marketplace/certified-operators-jvx4s" Nov 21 12:25:08 crc kubenswrapper[4972]: I1121 12:25:08.441304 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9h5z\" (UniqueName: \"kubernetes.io/projected/773899b9-d381-4d9d-8f49-8c9cd48acaf2-kube-api-access-d9h5z\") pod \"certified-operators-jvx4s\" (UID: \"773899b9-d381-4d9d-8f49-8c9cd48acaf2\") " pod="openshift-marketplace/certified-operators-jvx4s" Nov 21 12:25:08 crc kubenswrapper[4972]: I1121 12:25:08.442601 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/773899b9-d381-4d9d-8f49-8c9cd48acaf2-catalog-content\") pod \"certified-operators-jvx4s\" (UID: \"773899b9-d381-4d9d-8f49-8c9cd48acaf2\") " pod="openshift-marketplace/certified-operators-jvx4s" Nov 21 12:25:08 crc kubenswrapper[4972]: I1121 12:25:08.442668 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/773899b9-d381-4d9d-8f49-8c9cd48acaf2-utilities\") pod \"certified-operators-jvx4s\" (UID: \"773899b9-d381-4d9d-8f49-8c9cd48acaf2\") " pod="openshift-marketplace/certified-operators-jvx4s" Nov 21 12:25:08 crc kubenswrapper[4972]: I1121 12:25:08.472327 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d9h5z\" (UniqueName: \"kubernetes.io/projected/773899b9-d381-4d9d-8f49-8c9cd48acaf2-kube-api-access-d9h5z\") pod \"certified-operators-jvx4s\" (UID: \"773899b9-d381-4d9d-8f49-8c9cd48acaf2\") " pod="openshift-marketplace/certified-operators-jvx4s" Nov 21 12:25:08 crc kubenswrapper[4972]: I1121 12:25:08.641318 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jvx4s" Nov 21 12:25:09 crc kubenswrapper[4972]: I1121 12:25:09.781164 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jvx4s"] Nov 21 12:25:10 crc kubenswrapper[4972]: I1121 12:25:10.267320 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvx4s" event={"ID":"773899b9-d381-4d9d-8f49-8c9cd48acaf2","Type":"ContainerStarted","Data":"4a5352d1e380954e8c3effa58835692170f6ec21ff19368f848a077f42001d32"} Nov 21 12:25:12 crc kubenswrapper[4972]: I1121 12:25:12.288749 4972 generic.go:334] "Generic (PLEG): container finished" podID="773899b9-d381-4d9d-8f49-8c9cd48acaf2" containerID="e87f5673b1d4f4b821bef55426492020150c0593c7d2b09ff4f06a8511c71ef3" exitCode=0 Nov 21 12:25:12 crc kubenswrapper[4972]: I1121 12:25:12.288815 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvx4s" event={"ID":"773899b9-d381-4d9d-8f49-8c9cd48acaf2","Type":"ContainerDied","Data":"e87f5673b1d4f4b821bef55426492020150c0593c7d2b09ff4f06a8511c71ef3"} Nov 21 12:25:13 crc kubenswrapper[4972]: I1121 12:25:13.320776 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvx4s" event={"ID":"773899b9-d381-4d9d-8f49-8c9cd48acaf2","Type":"ContainerStarted","Data":"45dacae711cf0efb53c403758e0ad9a44cabc022bca92e85e46fac989ee6ef0f"} Nov 21 12:25:17 crc kubenswrapper[4972]: I1121 12:25:17.359861 4972 generic.go:334] "Generic (PLEG): container finished" podID="76adac9e-3573-40b2-b5f0-80951545b0da" containerID="f23c45ba579aaf98f64c9d10885592dd40fb02c3e93dbbd44fe8f565b07fdcd2" exitCode=0 Nov 21 12:25:17 crc kubenswrapper[4972]: I1121 12:25:17.359972 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8czjn/crc-debug-rclb9" event={"ID":"76adac9e-3573-40b2-b5f0-80951545b0da","Type":"ContainerDied","Data":"f23c45ba579aaf98f64c9d10885592dd40fb02c3e93dbbd44fe8f565b07fdcd2"} Nov 21 12:25:18 crc kubenswrapper[4972]: I1121 12:25:18.372411 4972 generic.go:334] "Generic (PLEG): container finished" podID="773899b9-d381-4d9d-8f49-8c9cd48acaf2" containerID="45dacae711cf0efb53c403758e0ad9a44cabc022bca92e85e46fac989ee6ef0f" exitCode=0 Nov 21 12:25:18 crc kubenswrapper[4972]: I1121 12:25:18.372487 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvx4s" event={"ID":"773899b9-d381-4d9d-8f49-8c9cd48acaf2","Type":"ContainerDied","Data":"45dacae711cf0efb53c403758e0ad9a44cabc022bca92e85e46fac989ee6ef0f"} Nov 21 12:25:18 crc kubenswrapper[4972]: I1121 12:25:18.494040 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8czjn/crc-debug-rclb9" Nov 21 12:25:18 crc kubenswrapper[4972]: I1121 12:25:18.533514 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8czjn/crc-debug-rclb9"] Nov 21 12:25:18 crc kubenswrapper[4972]: I1121 12:25:18.542448 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8czjn/crc-debug-rclb9"] Nov 21 12:25:18 crc kubenswrapper[4972]: I1121 12:25:18.601115 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76adac9e-3573-40b2-b5f0-80951545b0da-host\") pod \"76adac9e-3573-40b2-b5f0-80951545b0da\" (UID: \"76adac9e-3573-40b2-b5f0-80951545b0da\") " Nov 21 12:25:18 crc kubenswrapper[4972]: I1121 12:25:18.601227 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76adac9e-3573-40b2-b5f0-80951545b0da-host" (OuterVolumeSpecName: "host") pod "76adac9e-3573-40b2-b5f0-80951545b0da" (UID: "76adac9e-3573-40b2-b5f0-80951545b0da"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 12:25:18 crc kubenswrapper[4972]: I1121 12:25:18.601247 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6gbc\" (UniqueName: \"kubernetes.io/projected/76adac9e-3573-40b2-b5f0-80951545b0da-kube-api-access-w6gbc\") pod \"76adac9e-3573-40b2-b5f0-80951545b0da\" (UID: \"76adac9e-3573-40b2-b5f0-80951545b0da\") " Nov 21 12:25:18 crc kubenswrapper[4972]: I1121 12:25:18.601939 4972 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76adac9e-3573-40b2-b5f0-80951545b0da-host\") on node \"crc\" DevicePath \"\"" Nov 21 12:25:18 crc kubenswrapper[4972]: I1121 12:25:18.610017 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76adac9e-3573-40b2-b5f0-80951545b0da-kube-api-access-w6gbc" (OuterVolumeSpecName: "kube-api-access-w6gbc") pod "76adac9e-3573-40b2-b5f0-80951545b0da" (UID: "76adac9e-3573-40b2-b5f0-80951545b0da"). InnerVolumeSpecName "kube-api-access-w6gbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:25:18 crc kubenswrapper[4972]: I1121 12:25:18.704222 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6gbc\" (UniqueName: \"kubernetes.io/projected/76adac9e-3573-40b2-b5f0-80951545b0da-kube-api-access-w6gbc\") on node \"crc\" DevicePath \"\"" Nov 21 12:25:19 crc kubenswrapper[4972]: I1121 12:25:19.387516 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3dc8367a27f58a54ab9a395cd6f95c6141c05114e1a0c18e33d661c836512ed" Nov 21 12:25:19 crc kubenswrapper[4972]: I1121 12:25:19.387576 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8czjn/crc-debug-rclb9" Nov 21 12:25:19 crc kubenswrapper[4972]: I1121 12:25:19.709273 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8czjn/crc-debug-vjcxm"] Nov 21 12:25:19 crc kubenswrapper[4972]: E1121 12:25:19.709914 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76adac9e-3573-40b2-b5f0-80951545b0da" containerName="container-00" Nov 21 12:25:19 crc kubenswrapper[4972]: I1121 12:25:19.709937 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="76adac9e-3573-40b2-b5f0-80951545b0da" containerName="container-00" Nov 21 12:25:19 crc kubenswrapper[4972]: I1121 12:25:19.710199 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="76adac9e-3573-40b2-b5f0-80951545b0da" containerName="container-00" Nov 21 12:25:19 crc kubenswrapper[4972]: I1121 12:25:19.711336 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8czjn/crc-debug-vjcxm" Nov 21 12:25:19 crc kubenswrapper[4972]: I1121 12:25:19.772958 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76adac9e-3573-40b2-b5f0-80951545b0da" path="/var/lib/kubelet/pods/76adac9e-3573-40b2-b5f0-80951545b0da/volumes" Nov 21 12:25:19 crc kubenswrapper[4972]: I1121 12:25:19.825995 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e-host\") pod \"crc-debug-vjcxm\" (UID: \"1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e\") " pod="openshift-must-gather-8czjn/crc-debug-vjcxm" Nov 21 12:25:19 crc kubenswrapper[4972]: I1121 12:25:19.826114 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56mkb\" (UniqueName: \"kubernetes.io/projected/1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e-kube-api-access-56mkb\") pod \"crc-debug-vjcxm\" (UID: \"1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e\") " pod="openshift-must-gather-8czjn/crc-debug-vjcxm" Nov 21 12:25:19 crc kubenswrapper[4972]: I1121 12:25:19.928069 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56mkb\" (UniqueName: \"kubernetes.io/projected/1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e-kube-api-access-56mkb\") pod \"crc-debug-vjcxm\" (UID: \"1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e\") " pod="openshift-must-gather-8czjn/crc-debug-vjcxm" Nov 21 12:25:19 crc kubenswrapper[4972]: I1121 12:25:19.928262 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e-host\") pod \"crc-debug-vjcxm\" (UID: \"1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e\") " pod="openshift-must-gather-8czjn/crc-debug-vjcxm" Nov 21 12:25:19 crc kubenswrapper[4972]: I1121 12:25:19.928383 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e-host\") pod \"crc-debug-vjcxm\" (UID: \"1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e\") " pod="openshift-must-gather-8czjn/crc-debug-vjcxm" Nov 21 12:25:19 crc kubenswrapper[4972]: I1121 12:25:19.949564 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56mkb\" (UniqueName: \"kubernetes.io/projected/1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e-kube-api-access-56mkb\") pod \"crc-debug-vjcxm\" (UID: \"1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e\") " 
pod="openshift-must-gather-8czjn/crc-debug-vjcxm" Nov 21 12:25:20 crc kubenswrapper[4972]: I1121 12:25:20.030931 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8czjn/crc-debug-vjcxm" Nov 21 12:25:20 crc kubenswrapper[4972]: I1121 12:25:20.400874 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvx4s" event={"ID":"773899b9-d381-4d9d-8f49-8c9cd48acaf2","Type":"ContainerStarted","Data":"7bcb93531064eda1049a8c0193e38f6c5a05cb85b6888ca91fc4b4cd818454b3"} Nov 21 12:25:20 crc kubenswrapper[4972]: I1121 12:25:20.402251 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8czjn/crc-debug-vjcxm" event={"ID":"1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e","Type":"ContainerStarted","Data":"fc42e7a3d1537a9ecb812f1256176678d836ec0e4a6d275bced0e8bc3cec2bac"} Nov 21 12:25:20 crc kubenswrapper[4972]: I1121 12:25:20.425316 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jvx4s" podStartSLOduration=5.691887565 podStartE2EDuration="12.425298487s" podCreationTimestamp="2025-11-21 12:25:08 +0000 UTC" firstStartedPulling="2025-11-21 12:25:12.29206492 +0000 UTC m=+9857.401207418" lastFinishedPulling="2025-11-21 12:25:19.025475842 +0000 UTC m=+9864.134618340" observedRunningTime="2025-11-21 12:25:20.419311218 +0000 UTC m=+9865.528453726" watchObservedRunningTime="2025-11-21 12:25:20.425298487 +0000 UTC m=+9865.534440985" Nov 21 12:25:21 crc kubenswrapper[4972]: I1121 12:25:21.412703 4972 generic.go:334] "Generic (PLEG): container finished" podID="1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e" containerID="fb89478cf2432e32ba60c80e9f9b5102370f4d122ca6ebf39c4dd04be4c70ad5" exitCode=1 Nov 21 12:25:21 crc kubenswrapper[4972]: I1121 12:25:21.412895 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8czjn/crc-debug-vjcxm" event={"ID":"1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e","Type":"ContainerDied","Data":"fb89478cf2432e32ba60c80e9f9b5102370f4d122ca6ebf39c4dd04be4c70ad5"} Nov 21 12:25:21 crc kubenswrapper[4972]: I1121 12:25:21.463430 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8czjn/crc-debug-vjcxm"] Nov 21 12:25:21 crc kubenswrapper[4972]: I1121 12:25:21.473626 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8czjn/crc-debug-vjcxm"] Nov 21 12:25:22 crc kubenswrapper[4972]: I1121 12:25:22.544713 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8czjn/crc-debug-vjcxm" Nov 21 12:25:22 crc kubenswrapper[4972]: I1121 12:25:22.587506 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56mkb\" (UniqueName: \"kubernetes.io/projected/1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e-kube-api-access-56mkb\") pod \"1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e\" (UID: \"1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e\") " Nov 21 12:25:22 crc kubenswrapper[4972]: I1121 12:25:22.587807 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e-host\") pod \"1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e\" (UID: \"1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e\") " Nov 21 12:25:22 crc kubenswrapper[4972]: I1121 12:25:22.587901 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e-host" (OuterVolumeSpecName: "host") pod "1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e" (UID: "1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 12:25:22 crc kubenswrapper[4972]: I1121 12:25:22.588427 4972 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e-host\") on node \"crc\" DevicePath \"\"" Nov 21 12:25:22 crc kubenswrapper[4972]: I1121 12:25:22.598034 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e-kube-api-access-56mkb" (OuterVolumeSpecName: "kube-api-access-56mkb") pod "1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e" (UID: "1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e"). InnerVolumeSpecName "kube-api-access-56mkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:25:22 crc kubenswrapper[4972]: I1121 12:25:22.690215 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56mkb\" (UniqueName: \"kubernetes.io/projected/1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e-kube-api-access-56mkb\") on node \"crc\" DevicePath \"\"" Nov 21 12:25:23 crc kubenswrapper[4972]: I1121 12:25:23.434275 4972 scope.go:117] "RemoveContainer" containerID="fb89478cf2432e32ba60c80e9f9b5102370f4d122ca6ebf39c4dd04be4c70ad5" Nov 21 12:25:23 crc kubenswrapper[4972]: I1121 12:25:23.434300 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8czjn/crc-debug-vjcxm" Nov 21 12:25:23 crc kubenswrapper[4972]: I1121 12:25:23.859909 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e" path="/var/lib/kubelet/pods/1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e/volumes" Nov 21 12:25:26 crc kubenswrapper[4972]: I1121 12:25:26.179373 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 12:25:26 crc kubenswrapper[4972]: I1121 12:25:26.180050 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 12:25:26 crc kubenswrapper[4972]: I1121 12:25:26.180111 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 12:25:26 crc kubenswrapper[4972]: I1121 12:25:26.181098 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 12:25:26 crc kubenswrapper[4972]: I1121 12:25:26.181168 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" gracePeriod=600 Nov 21 12:25:26 crc kubenswrapper[4972]: E1121 12:25:26.348351 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:25:26 crc kubenswrapper[4972]: I1121 12:25:26.488779 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" exitCode=0 Nov 21 12:25:26 crc kubenswrapper[4972]: I1121 12:25:26.488871 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb"} Nov 21 12:25:26 crc kubenswrapper[4972]: I1121 12:25:26.489337 4972 scope.go:117] "RemoveContainer" containerID="019de17bd234fdb2e5375ae3b63fffc29c9876c0b1467405e993de03653eb3a4" Nov 21 12:25:26 crc kubenswrapper[4972]: I1121 12:25:26.490495 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 
12:25:26 crc kubenswrapper[4972]: E1121 12:25:26.490998 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:25:28 crc kubenswrapper[4972]: I1121 12:25:28.642118 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jvx4s" Nov 21 12:25:28 crc kubenswrapper[4972]: I1121 12:25:28.642712 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jvx4s" Nov 21 12:25:28 crc kubenswrapper[4972]: I1121 12:25:28.692934 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jvx4s" Nov 21 12:25:29 crc kubenswrapper[4972]: I1121 12:25:29.570902 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jvx4s" Nov 21 12:25:29 crc kubenswrapper[4972]: I1121 12:25:29.622566 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jvx4s"] Nov 21 12:25:31 crc kubenswrapper[4972]: I1121 12:25:31.541387 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jvx4s" podUID="773899b9-d381-4d9d-8f49-8c9cd48acaf2" containerName="registry-server" containerID="cri-o://7bcb93531064eda1049a8c0193e38f6c5a05cb85b6888ca91fc4b4cd818454b3" gracePeriod=2 Nov 21 12:25:32 crc kubenswrapper[4972]: I1121 12:25:32.557521 4972 generic.go:334] "Generic (PLEG): container finished" podID="773899b9-d381-4d9d-8f49-8c9cd48acaf2" containerID="7bcb93531064eda1049a8c0193e38f6c5a05cb85b6888ca91fc4b4cd818454b3" exitCode=0 Nov 21 12:25:32 crc kubenswrapper[4972]: I1121 12:25:32.557605 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvx4s" event={"ID":"773899b9-d381-4d9d-8f49-8c9cd48acaf2","Type":"ContainerDied","Data":"7bcb93531064eda1049a8c0193e38f6c5a05cb85b6888ca91fc4b4cd818454b3"} Nov 21 12:25:32 crc kubenswrapper[4972]: I1121 12:25:32.747267 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jvx4s" Nov 21 12:25:32 crc kubenswrapper[4972]: I1121 12:25:32.821506 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/773899b9-d381-4d9d-8f49-8c9cd48acaf2-utilities\") pod \"773899b9-d381-4d9d-8f49-8c9cd48acaf2\" (UID: \"773899b9-d381-4d9d-8f49-8c9cd48acaf2\") " Nov 21 12:25:32 crc kubenswrapper[4972]: I1121 12:25:32.821892 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/773899b9-d381-4d9d-8f49-8c9cd48acaf2-catalog-content\") pod \"773899b9-d381-4d9d-8f49-8c9cd48acaf2\" (UID: \"773899b9-d381-4d9d-8f49-8c9cd48acaf2\") " Nov 21 12:25:32 crc kubenswrapper[4972]: I1121 12:25:32.822011 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9h5z\" (UniqueName: \"kubernetes.io/projected/773899b9-d381-4d9d-8f49-8c9cd48acaf2-kube-api-access-d9h5z\") pod \"773899b9-d381-4d9d-8f49-8c9cd48acaf2\" (UID: \"773899b9-d381-4d9d-8f49-8c9cd48acaf2\") " Nov 21 12:25:32 crc kubenswrapper[4972]: I1121 12:25:32.822483 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773899b9-d381-4d9d-8f49-8c9cd48acaf2-utilities" (OuterVolumeSpecName: "utilities") pod "773899b9-d381-4d9d-8f49-8c9cd48acaf2" (UID: "773899b9-d381-4d9d-8f49-8c9cd48acaf2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:25:32 crc kubenswrapper[4972]: I1121 12:25:32.822756 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/773899b9-d381-4d9d-8f49-8c9cd48acaf2-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 12:25:32 crc kubenswrapper[4972]: I1121 12:25:32.827806 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/773899b9-d381-4d9d-8f49-8c9cd48acaf2-kube-api-access-d9h5z" (OuterVolumeSpecName: "kube-api-access-d9h5z") pod "773899b9-d381-4d9d-8f49-8c9cd48acaf2" (UID: "773899b9-d381-4d9d-8f49-8c9cd48acaf2"). InnerVolumeSpecName "kube-api-access-d9h5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:25:32 crc kubenswrapper[4972]: I1121 12:25:32.867991 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773899b9-d381-4d9d-8f49-8c9cd48acaf2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "773899b9-d381-4d9d-8f49-8c9cd48acaf2" (UID: "773899b9-d381-4d9d-8f49-8c9cd48acaf2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:25:32 crc kubenswrapper[4972]: I1121 12:25:32.924645 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/773899b9-d381-4d9d-8f49-8c9cd48acaf2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 12:25:32 crc kubenswrapper[4972]: I1121 12:25:32.924686 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9h5z\" (UniqueName: \"kubernetes.io/projected/773899b9-d381-4d9d-8f49-8c9cd48acaf2-kube-api-access-d9h5z\") on node \"crc\" DevicePath \"\"" Nov 21 12:25:33 crc kubenswrapper[4972]: I1121 12:25:33.570721 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvx4s" event={"ID":"773899b9-d381-4d9d-8f49-8c9cd48acaf2","Type":"ContainerDied","Data":"4a5352d1e380954e8c3effa58835692170f6ec21ff19368f848a077f42001d32"} Nov 21 12:25:33 crc kubenswrapper[4972]: I1121 12:25:33.571121 4972 scope.go:117] "RemoveContainer" containerID="7bcb93531064eda1049a8c0193e38f6c5a05cb85b6888ca91fc4b4cd818454b3" Nov 21 12:25:33 crc kubenswrapper[4972]: I1121 12:25:33.570775 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jvx4s" Nov 21 12:25:33 crc kubenswrapper[4972]: I1121 12:25:33.605812 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jvx4s"] Nov 21 12:25:33 crc kubenswrapper[4972]: I1121 12:25:33.612948 4972 scope.go:117] "RemoveContainer" containerID="45dacae711cf0efb53c403758e0ad9a44cabc022bca92e85e46fac989ee6ef0f" Nov 21 12:25:33 crc kubenswrapper[4972]: I1121 12:25:33.614035 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jvx4s"] Nov 21 12:25:33 crc kubenswrapper[4972]: I1121 12:25:33.644353 4972 scope.go:117] "RemoveContainer" containerID="e87f5673b1d4f4b821bef55426492020150c0593c7d2b09ff4f06a8511c71ef3" Nov 21 12:25:33 crc kubenswrapper[4972]: I1121 12:25:33.786370 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="773899b9-d381-4d9d-8f49-8c9cd48acaf2" path="/var/lib/kubelet/pods/773899b9-d381-4d9d-8f49-8c9cd48acaf2/volumes" Nov 21 12:25:39 crc kubenswrapper[4972]: I1121 12:25:39.760538 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:25:39 crc kubenswrapper[4972]: E1121 12:25:39.761366 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:25:54 crc kubenswrapper[4972]: I1121 12:25:54.760819 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:25:54 crc kubenswrapper[4972]: E1121 12:25:54.762151 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:26:05 crc kubenswrapper[4972]: I1121 12:26:05.766502 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:26:05 crc kubenswrapper[4972]: E1121 12:26:05.767816 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:26:19 crc kubenswrapper[4972]: I1121 12:26:19.760481 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:26:19 crc kubenswrapper[4972]: E1121 12:26:19.762858 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:26:32 crc kubenswrapper[4972]: I1121 12:26:32.760552 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:26:32 crc kubenswrapper[4972]: E1121 12:26:32.761764 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:26:44 crc kubenswrapper[4972]: I1121 12:26:44.759319 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:26:44 crc kubenswrapper[4972]: E1121 12:26:44.760603 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:26:55 crc kubenswrapper[4972]: I1121 12:26:55.766043 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:26:55 crc kubenswrapper[4972]: E1121 12:26:55.767926 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:27:07 crc kubenswrapper[4972]: I1121 12:27:07.760339 4972 
scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:27:07 crc kubenswrapper[4972]: E1121 12:27:07.764419 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:27:18 crc kubenswrapper[4972]: I1121 12:27:18.759950 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:27:18 crc kubenswrapper[4972]: E1121 12:27:18.761149 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:27:33 crc kubenswrapper[4972]: I1121 12:27:33.760603 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:27:33 crc kubenswrapper[4972]: E1121 12:27:33.761580 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:27:46 crc kubenswrapper[4972]: I1121 12:27:46.759667 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:27:46 crc kubenswrapper[4972]: E1121 12:27:46.761006 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:27:59 crc kubenswrapper[4972]: I1121 12:27:59.759189 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:27:59 crc kubenswrapper[4972]: E1121 12:27:59.760531 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:28:14 crc kubenswrapper[4972]: I1121 12:28:14.525141 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6/init-config-reloader/0.log" Nov 21 12:28:14 crc 
kubenswrapper[4972]: I1121 12:28:14.699699 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6/init-config-reloader/0.log" Nov 21 12:28:14 crc kubenswrapper[4972]: I1121 12:28:14.737494 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6/alertmanager/0.log" Nov 21 12:28:14 crc kubenswrapper[4972]: I1121 12:28:14.760557 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:28:14 crc kubenswrapper[4972]: E1121 12:28:14.760922 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:28:14 crc kubenswrapper[4972]: I1121 12:28:14.778516 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7b5a1d02-23d0-4c6b-ac16-ef6eb3f609a6/config-reloader/0.log" Nov 21 12:28:15 crc kubenswrapper[4972]: I1121 12:28:15.371398 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_eef56310-c266-4fdc-b4c1-fb03319b5196/aodh-listener/0.log" Nov 21 12:28:15 crc kubenswrapper[4972]: I1121 12:28:15.469385 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_eef56310-c266-4fdc-b4c1-fb03319b5196/aodh-api/0.log" Nov 21 12:28:15 crc kubenswrapper[4972]: I1121 12:28:15.516560 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_eef56310-c266-4fdc-b4c1-fb03319b5196/aodh-evaluator/0.log" Nov 21 12:28:15 crc kubenswrapper[4972]: I1121 12:28:15.659206 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_eef56310-c266-4fdc-b4c1-fb03319b5196/aodh-notifier/0.log" Nov 21 12:28:15 crc kubenswrapper[4972]: I1121 12:28:15.797171 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-747d6fb59b-5vqj5_5bdf9f42-bfbe-474f-86d1-3edbe94c09ac/barbican-api/0.log" Nov 21 12:28:15 crc kubenswrapper[4972]: I1121 12:28:15.903205 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-747d6fb59b-5vqj5_5bdf9f42-bfbe-474f-86d1-3edbe94c09ac/barbican-api-log/0.log" Nov 21 12:28:16 crc kubenswrapper[4972]: I1121 12:28:16.022333 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6946b977f8-97tkv_ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5/barbican-keystone-listener/0.log" Nov 21 12:28:16 crc kubenswrapper[4972]: I1121 12:28:16.066648 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6946b977f8-97tkv_ad0adfda-1be9-4fdd-b1c5-6d33c0afcea5/barbican-keystone-listener-log/0.log" Nov 21 12:28:16 crc kubenswrapper[4972]: I1121 12:28:16.150254 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5ddd898675-msv98_956664ed-d3c8-467a-ba7e-ced0e72d00a4/barbican-worker/0.log" Nov 21 12:28:16 crc kubenswrapper[4972]: I1121 12:28:16.218437 4972 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-5ddd898675-msv98_956664ed-d3c8-467a-ba7e-ced0e72d00a4/barbican-worker-log/0.log" Nov 21 12:28:16 crc kubenswrapper[4972]: I1121 12:28:16.389363 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-2mfkz_c828a840-b09c-419e-ab5c-1771ecceeed8/bootstrap-openstack-openstack-cell1/0.log" Nov 21 12:28:16 crc kubenswrapper[4972]: I1121 12:28:16.470966 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4a94fb30-1130-45e4-8ce8-9b0cdf0401b4/ceilometer-central-agent/0.log" Nov 21 12:28:16 crc kubenswrapper[4972]: I1121 12:28:16.558157 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4a94fb30-1130-45e4-8ce8-9b0cdf0401b4/ceilometer-notification-agent/0.log" Nov 21 12:28:16 crc kubenswrapper[4972]: I1121 12:28:16.608533 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4a94fb30-1130-45e4-8ce8-9b0cdf0401b4/proxy-httpd/0.log" Nov 21 12:28:16 crc kubenswrapper[4972]: I1121 12:28:16.672950 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4a94fb30-1130-45e4-8ce8-9b0cdf0401b4/sg-core/0.log" Nov 21 12:28:16 crc kubenswrapper[4972]: I1121 12:28:16.764481 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-openstack-openstack-cell1-qwz69_cfae87d2-f93e-49af-9b66-59d9e7208dd1/ceph-client-openstack-openstack-cell1/0.log" Nov 21 12:28:16 crc kubenswrapper[4972]: I1121 12:28:16.986889 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ad8b7f64-b9fa-494f-a895-f6d04406fcb6/cinder-api-log/0.log" Nov 21 12:28:17 crc kubenswrapper[4972]: I1121 12:28:17.009366 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ad8b7f64-b9fa-494f-a895-f6d04406fcb6/cinder-api/0.log" Nov 21 12:28:17 crc kubenswrapper[4972]: I1121 12:28:17.863014 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a70ae08c-10f1-4552-92a3-c23ede059504/cinder-scheduler/0.log" Nov 21 12:28:17 crc kubenswrapper[4972]: I1121 12:28:17.924755 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_b5b6b234-7ee4-4997-87e7-2b70b5da72dc/probe/0.log" Nov 21 12:28:18 crc kubenswrapper[4972]: I1121 12:28:18.091509 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_b5b6b234-7ee4-4997-87e7-2b70b5da72dc/cinder-backup/0.log" Nov 21 12:28:18 crc kubenswrapper[4972]: I1121 12:28:18.177989 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a70ae08c-10f1-4552-92a3-c23ede059504/probe/0.log" Nov 21 12:28:18 crc kubenswrapper[4972]: I1121 12:28:18.189101 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_1d618b5b-1e2f-4608-a7c0-d9fea9f72d46/cinder-volume/0.log" Nov 21 12:28:18 crc kubenswrapper[4972]: I1121 12:28:18.327191 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_1d618b5b-1e2f-4608-a7c0-d9fea9f72d46/probe/0.log" Nov 21 12:28:18 crc kubenswrapper[4972]: I1121 12:28:18.363088 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-ckn27_6ec9a5c1-3620-499d-8a2c-f5a1cfb3d2d3/configure-network-openstack-openstack-cell1/0.log" Nov 21 12:28:18 crc kubenswrapper[4972]: I1121 12:28:18.475688 4972 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-7mgs5_536c00fd-ea0e-42f4-b0d9-4d6f9ee96097/configure-os-openstack-openstack-cell1/0.log" Nov 21 12:28:18 crc kubenswrapper[4972]: I1121 12:28:18.569126 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5fc98f54cc-v6tgv_0fddcd19-6012-4953-a450-4230dc94d51c/init/0.log" Nov 21 12:28:18 crc kubenswrapper[4972]: I1121 12:28:18.796392 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5fc98f54cc-v6tgv_0fddcd19-6012-4953-a450-4230dc94d51c/init/0.log" Nov 21 12:28:18 crc kubenswrapper[4972]: I1121 12:28:18.820575 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5fc98f54cc-v6tgv_0fddcd19-6012-4953-a450-4230dc94d51c/dnsmasq-dns/0.log" Nov 21 12:28:18 crc kubenswrapper[4972]: I1121 12:28:18.833937 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-6zqxn_e18adcb1-7956-4cae-874f-40130f05621b/download-cache-openstack-openstack-cell1/0.log" Nov 21 12:28:18 crc kubenswrapper[4972]: I1121 12:28:18.981212 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_012b4555-c9ab-48c0-ad99-d3a77d7d3c2b/glance-httpd/0.log" Nov 21 12:28:19 crc kubenswrapper[4972]: I1121 12:28:19.016784 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_012b4555-c9ab-48c0-ad99-d3a77d7d3c2b/glance-log/0.log" Nov 21 12:28:19 crc kubenswrapper[4972]: I1121 12:28:19.066383 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_71978ab6-d300-431a-8075-07dd3ed1a95a/glance-log/0.log" Nov 21 12:28:19 crc kubenswrapper[4972]: I1121 12:28:19.120978 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_71978ab6-d300-431a-8075-07dd3ed1a95a/glance-httpd/0.log" Nov 21 12:28:19 crc kubenswrapper[4972]: I1121 12:28:19.340969 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-6547f5bb66-pppjx_1a001046-7b32-4075-8e39-d4d358bba56c/heat-api/0.log" Nov 21 12:28:19 crc kubenswrapper[4972]: I1121 12:28:19.432144 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-7f5df49c6f-7t5gf_9bd80395-ea39-4363-9474-949cf53049aa/heat-cfnapi/0.log" Nov 21 12:28:19 crc kubenswrapper[4972]: I1121 12:28:19.533780 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-5445969844-vchhd_a00d0391-8581-4b9d-810e-1e68dedc2718/heat-engine/0.log" Nov 21 12:28:19 crc kubenswrapper[4972]: I1121 12:28:19.747242 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-58456df47f-c7fp7_6ef37c3f-7753-44c3-88d7-b8ebf40a687e/horizon-log/0.log" Nov 21 12:28:19 crc kubenswrapper[4972]: I1121 12:28:19.757563 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-58456df47f-c7fp7_6ef37c3f-7753-44c3-88d7-b8ebf40a687e/horizon/0.log" Nov 21 12:28:19 crc kubenswrapper[4972]: I1121 12:28:19.780063 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-46gr5_96c9f1ec-3df3-45ef-be1e-b12185862f03/install-certs-openstack-openstack-cell1/0.log" Nov 21 12:28:19 crc kubenswrapper[4972]: I1121 12:28:19.975272 4972 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-cp4kb_a312b75b-8159-44e4-a2aa-c83ef0991eb4/install-os-openstack-openstack-cell1/0.log" Nov 21 12:28:20 crc kubenswrapper[4972]: I1121 12:28:20.110025 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29395441-9lbdd_95c7d372-ef45-4c62-9d8c-81438229c9f4/keystone-cron/0.log" Nov 21 12:28:20 crc kubenswrapper[4972]: I1121 12:28:20.115522 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-745ff65c64-zdzmt_100b4921-687c-4d6e-97a7-e191b0b8c7d8/keystone-api/0.log" Nov 21 12:28:20 crc kubenswrapper[4972]: I1121 12:28:20.243392 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_2c758da9-ebfb-4259-877b-c0cdd0aadad5/kube-state-metrics/0.log" Nov 21 12:28:20 crc kubenswrapper[4972]: I1121 12:28:20.341566 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-dmptj_1f21798f-b3ad-4a9f-abba-62a3da5ce59a/libvirt-openstack-openstack-cell1/0.log" Nov 21 12:28:20 crc kubenswrapper[4972]: I1121 12:28:20.440557 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_256a4f9c-eb68-4be2-b6a9-eb534faa149d/manila-api-log/0.log" Nov 21 12:28:20 crc kubenswrapper[4972]: I1121 12:28:20.608257 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_256a4f9c-eb68-4be2-b6a9-eb534faa149d/manila-api/0.log" Nov 21 12:28:20 crc kubenswrapper[4972]: I1121 12:28:20.608872 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_d1a24497-62c6-4d99-8b51-cab3f4dbcdf6/probe/0.log" Nov 21 12:28:20 crc kubenswrapper[4972]: I1121 12:28:20.628719 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_d1a24497-62c6-4d99-8b51-cab3f4dbcdf6/manila-scheduler/0.log" Nov 21 12:28:20 crc kubenswrapper[4972]: I1121 12:28:20.793875 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_2ed9e47f-280b-42d2-916c-ae5c437794ed/manila-share/0.log" Nov 21 12:28:20 crc kubenswrapper[4972]: I1121 12:28:20.850492 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_2ed9e47f-280b-42d2-916c-ae5c437794ed/probe/0.log" Nov 21 12:28:20 crc kubenswrapper[4972]: I1121 12:28:20.883036 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_9c79ff96-e437-4d94-8749-a3c53ff6a366/adoption/0.log" Nov 21 12:28:21 crc kubenswrapper[4972]: I1121 12:28:21.206588 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-697b5fbdf-gf62f_1c8983ed-b01a-41ce-9f06-cf429e74f0c3/neutron-api/0.log" Nov 21 12:28:21 crc kubenswrapper[4972]: I1121 12:28:21.281851 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-697b5fbdf-gf62f_1c8983ed-b01a-41ce-9f06-cf429e74f0c3/neutron-httpd/0.log" Nov 21 12:28:21 crc kubenswrapper[4972]: I1121 12:28:21.540407 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-b4mxp_99666862-2043-4347-b3a1-7b16e424137e/neutron-dhcp-openstack-openstack-cell1/0.log" Nov 21 12:28:21 crc kubenswrapper[4972]: I1121 12:28:21.680586 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-9mr8k_af381684-084b-4e6c-990d-db256b17820f/neutron-metadata-openstack-openstack-cell1/0.log" Nov 21 12:28:21 crc 
kubenswrapper[4972]: I1121 12:28:21.787526 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-g76jm_c602c118-ca24-4a6a-ada2-b76d3e4f7e25/neutron-sriov-openstack-openstack-cell1/0.log" Nov 21 12:28:21 crc kubenswrapper[4972]: I1121 12:28:21.957416 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2348570c-90b9-4153-98fa-49aea6529eb1/nova-api-api/0.log" Nov 21 12:28:22 crc kubenswrapper[4972]: I1121 12:28:22.038440 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2348570c-90b9-4153-98fa-49aea6529eb1/nova-api-log/0.log" Nov 21 12:28:22 crc kubenswrapper[4972]: I1121 12:28:22.103650 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_912d0781-7cd0-435a-af6c-3e64c68f94bc/nova-cell0-conductor-conductor/0.log" Nov 21 12:28:22 crc kubenswrapper[4972]: I1121 12:28:22.643696 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d4843821-347e-44d1-8c71-9f637fa97d72/nova-cell1-conductor-conductor/0.log" Nov 21 12:28:22 crc kubenswrapper[4972]: I1121 12:28:22.731784 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_51317036-994c-4651-86d8-4e1a15036ffd/nova-cell1-novncproxy-novncproxy/0.log" Nov 21 12:28:23 crc kubenswrapper[4972]: I1121 12:28:23.060647 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-celld6d7w_ee8b138b-b9f9-4dc2-be8d-a366f0077e9e/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log" Nov 21 12:28:23 crc kubenswrapper[4972]: I1121 12:28:23.129967 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-ddt9q_78198d69-4fb2-403d-8efb-8fe435b9351b/nova-cell1-openstack-openstack-cell1/0.log" Nov 21 12:28:23 crc kubenswrapper[4972]: I1121 12:28:23.274116 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205/nova-metadata-log/0.log" Nov 21 12:28:23 crc kubenswrapper[4972]: I1121 12:28:23.365631 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7bbbe9d3-ca15-45c2-9cf1-0cd0d9e5b205/nova-metadata-metadata/0.log" Nov 21 12:28:23 crc kubenswrapper[4972]: I1121 12:28:23.576354 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_9f7e4d32-26f0-447a-97eb-17942860126a/nova-scheduler-scheduler/0.log" Nov 21 12:28:23 crc kubenswrapper[4972]: I1121 12:28:23.621618 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-db9c5b5bd-gdkr4_a80e0cc1-c14a-45ec-97bf-5c7401ce569f/init/0.log" Nov 21 12:28:23 crc kubenswrapper[4972]: I1121 12:28:23.873154 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-db9c5b5bd-gdkr4_a80e0cc1-c14a-45ec-97bf-5c7401ce569f/octavia-api-provider-agent/0.log" Nov 21 12:28:23 crc kubenswrapper[4972]: I1121 12:28:23.901510 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-db9c5b5bd-gdkr4_a80e0cc1-c14a-45ec-97bf-5c7401ce569f/init/0.log" Nov 21 12:28:24 crc kubenswrapper[4972]: I1121 12:28:24.117216 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-db9c5b5bd-gdkr4_a80e0cc1-c14a-45ec-97bf-5c7401ce569f/octavia-api/0.log" Nov 21 12:28:24 crc kubenswrapper[4972]: I1121 12:28:24.131529 4972 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-r4jnw_e5bf7530-6fe3-4a93-b44a-0665818a4fd8/init/0.log" Nov 21 12:28:24 crc kubenswrapper[4972]: I1121 12:28:24.343759 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-r4jnw_e5bf7530-6fe3-4a93-b44a-0665818a4fd8/init/0.log" Nov 21 12:28:24 crc kubenswrapper[4972]: I1121 12:28:24.439452 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-pk5n9_26a73c9b-7334-4441-9eab-090821afdf46/init/0.log" Nov 21 12:28:24 crc kubenswrapper[4972]: I1121 12:28:24.476568 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-r4jnw_e5bf7530-6fe3-4a93-b44a-0665818a4fd8/octavia-healthmanager/0.log" Nov 21 12:28:24 crc kubenswrapper[4972]: I1121 12:28:24.701320 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-pk5n9_26a73c9b-7334-4441-9eab-090821afdf46/init/0.log" Nov 21 12:28:24 crc kubenswrapper[4972]: I1121 12:28:24.710115 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-bkc5d_a8e270d1-6354-4941-831f-d5e3e2605206/init/0.log" Nov 21 12:28:24 crc kubenswrapper[4972]: I1121 12:28:24.723711 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-pk5n9_26a73c9b-7334-4441-9eab-090821afdf46/octavia-housekeeping/0.log" Nov 21 12:28:25 crc kubenswrapper[4972]: I1121 12:28:25.118928 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-bkc5d_a8e270d1-6354-4941-831f-d5e3e2605206/init/0.log" Nov 21 12:28:25 crc kubenswrapper[4972]: I1121 12:28:25.164801 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-bkc5d_a8e270d1-6354-4941-831f-d5e3e2605206/octavia-rsyslog/0.log" Nov 21 12:28:25 crc kubenswrapper[4972]: I1121 12:28:25.269898 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-g7k56_98c40da9-47c4-449c-82c0-f09f09f7006b/init/0.log" Nov 21 12:28:25 crc kubenswrapper[4972]: I1121 12:28:25.542422 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-g7k56_98c40da9-47c4-449c-82c0-f09f09f7006b/init/0.log" Nov 21 12:28:25 crc kubenswrapper[4972]: I1121 12:28:25.550438 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_bc01c508-b649-456b-8b19-22661b56f192/mysql-bootstrap/0.log" Nov 21 12:28:25 crc kubenswrapper[4972]: I1121 12:28:25.673058 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-g7k56_98c40da9-47c4-449c-82c0-f09f09f7006b/octavia-worker/0.log" Nov 21 12:28:25 crc kubenswrapper[4972]: I1121 12:28:25.856429 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_bc01c508-b649-456b-8b19-22661b56f192/mysql-bootstrap/0.log" Nov 21 12:28:25 crc kubenswrapper[4972]: I1121 12:28:25.867136 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_bc01c508-b649-456b-8b19-22661b56f192/galera/0.log" Nov 21 12:28:25 crc kubenswrapper[4972]: I1121 12:28:25.921539 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_83330eac-fef8-4cfc-9e9b-2ff1fea0d559/mysql-bootstrap/0.log" Nov 21 12:28:26 crc kubenswrapper[4972]: I1121 12:28:26.750314 4972 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_83330eac-fef8-4cfc-9e9b-2ff1fea0d559/mysql-bootstrap/0.log" Nov 21 12:28:26 crc kubenswrapper[4972]: I1121 12:28:26.783183 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_874ea433-29fb-461e-825b-deba8bce7e6d/openstackclient/0.log" Nov 21 12:28:26 crc kubenswrapper[4972]: I1121 12:28:26.814347 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_83330eac-fef8-4cfc-9e9b-2ff1fea0d559/galera/0.log" Nov 21 12:28:27 crc kubenswrapper[4972]: I1121 12:28:27.017795 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-bvbth_e1cf5e01-48da-4c9e-a4e5-937bea851491/openstack-network-exporter/0.log" Nov 21 12:28:27 crc kubenswrapper[4972]: I1121 12:28:27.069069 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-29k84_19a45089-70ad-464f-88c1-2ede6d0e1265/ovsdb-server-init/0.log" Nov 21 12:28:27 crc kubenswrapper[4972]: I1121 12:28:27.270278 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-29k84_19a45089-70ad-464f-88c1-2ede6d0e1265/ovs-vswitchd/0.log" Nov 21 12:28:27 crc kubenswrapper[4972]: I1121 12:28:27.284214 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-29k84_19a45089-70ad-464f-88c1-2ede6d0e1265/ovsdb-server-init/0.log" Nov 21 12:28:27 crc kubenswrapper[4972]: I1121 12:28:27.307601 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-29k84_19a45089-70ad-464f-88c1-2ede6d0e1265/ovsdb-server/0.log" Nov 21 12:28:27 crc kubenswrapper[4972]: I1121 12:28:27.520681 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-pxnhj_7e864747-789f-4b61-88e7-24523b70fe33/ovn-controller/0.log" Nov 21 12:28:27 crc kubenswrapper[4972]: I1121 12:28:27.609433 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_0488161c-9098-4c62-9860-e6c06608a1df/adoption/0.log" Nov 21 12:28:27 crc kubenswrapper[4972]: I1121 12:28:27.763954 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9ed97c80-e5e5-4056-8dbd-00d598824e0a/openstack-network-exporter/0.log" Nov 21 12:28:27 crc kubenswrapper[4972]: I1121 12:28:27.794927 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9ed97c80-e5e5-4056-8dbd-00d598824e0a/ovn-northd/0.log" Nov 21 12:28:28 crc kubenswrapper[4972]: I1121 12:28:28.634221 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3f1b59de-109b-4f8b-9104-6b93a7beea77/openstack-network-exporter/0.log" Nov 21 12:28:28 crc kubenswrapper[4972]: I1121 12:28:28.766287 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-mbltd_a889438a-ad36-4ef0-9994-da151114b722/ovn-openstack-openstack-cell1/0.log" Nov 21 12:28:29 crc kubenswrapper[4972]: I1121 12:28:29.048363 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_11962c16-54cd-4cd8-8794-49e5e30520c8/openstack-network-exporter/0.log" Nov 21 12:28:29 crc kubenswrapper[4972]: I1121 12:28:29.072484 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3f1b59de-109b-4f8b-9104-6b93a7beea77/ovsdbserver-nb/0.log" Nov 21 12:28:29 crc kubenswrapper[4972]: I1121 12:28:29.174901 4972 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-1_11962c16-54cd-4cd8-8794-49e5e30520c8/ovsdbserver-nb/0.log" Nov 21 12:28:29 crc kubenswrapper[4972]: I1121 12:28:29.350353 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_2419184d-1b5f-43ea-8184-5856c198c4fa/openstack-network-exporter/0.log" Nov 21 12:28:29 crc kubenswrapper[4972]: I1121 12:28:29.389789 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_2419184d-1b5f-43ea-8184-5856c198c4fa/ovsdbserver-nb/0.log" Nov 21 12:28:29 crc kubenswrapper[4972]: I1121 12:28:29.541930 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ddd35b90-b72d-4b9c-8380-71c6b39a8a75/openstack-network-exporter/0.log" Nov 21 12:28:29 crc kubenswrapper[4972]: I1121 12:28:29.616883 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ddd35b90-b72d-4b9c-8380-71c6b39a8a75/ovsdbserver-sb/0.log" Nov 21 12:28:29 crc kubenswrapper[4972]: I1121 12:28:29.760846 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:28:29 crc kubenswrapper[4972]: E1121 12:28:29.761166 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:28:29 crc kubenswrapper[4972]: I1121 12:28:29.774811 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_799f30f3-66a3-440a-891c-fb28258284f1/ovsdbserver-sb/0.log" Nov 21 12:28:29 crc kubenswrapper[4972]: I1121 12:28:29.835651 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_799f30f3-66a3-440a-891c-fb28258284f1/openstack-network-exporter/0.log" Nov 21 12:28:29 crc kubenswrapper[4972]: I1121 12:28:29.966325 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_cfa07300-1e16-406a-9376-362cf3324e4d/openstack-network-exporter/0.log" Nov 21 12:28:29 crc kubenswrapper[4972]: I1121 12:28:29.971715 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_cfa07300-1e16-406a-9376-362cf3324e4d/ovsdbserver-sb/0.log" Nov 21 12:28:30 crc kubenswrapper[4972]: I1121 12:28:30.248211 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-559fdb5d84-5bqm6_2b1646b4-4971-4f8c-9198-65ff1e995e5d/placement-api/0.log" Nov 21 12:28:30 crc kubenswrapper[4972]: I1121 12:28:30.295576 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-559fdb5d84-5bqm6_2b1646b4-4971-4f8c-9198-65ff1e995e5d/placement-log/0.log" Nov 21 12:28:30 crc kubenswrapper[4972]: I1121 12:28:30.368970 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-c57vt9_ec6ae156-2743-48fa-afda-8fcce08e9588/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Nov 21 12:28:31 crc kubenswrapper[4972]: I1121 12:28:31.501177 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3a9908d4-118e-43f8-8042-354780fe4db1/init-config-reloader/0.log" Nov 21 12:28:31 crc kubenswrapper[4972]: 
I1121 12:28:31.848308 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3a9908d4-118e-43f8-8042-354780fe4db1/config-reloader/0.log" Nov 21 12:28:31 crc kubenswrapper[4972]: I1121 12:28:31.862023 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3a9908d4-118e-43f8-8042-354780fe4db1/init-config-reloader/0.log" Nov 21 12:28:31 crc kubenswrapper[4972]: I1121 12:28:31.940629 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3a9908d4-118e-43f8-8042-354780fe4db1/prometheus/0.log" Nov 21 12:28:31 crc kubenswrapper[4972]: I1121 12:28:31.959972 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3a9908d4-118e-43f8-8042-354780fe4db1/thanos-sidecar/0.log" Nov 21 12:28:32 crc kubenswrapper[4972]: I1121 12:28:32.092958 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_8476a6e2-19c1-4070-a33c-690fad3f8c1b/memcached/0.log" Nov 21 12:28:32 crc kubenswrapper[4972]: I1121 12:28:32.097042 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_375f2bb7-3c9f-46f2-812b-fc5325524d0b/setup-container/0.log" Nov 21 12:28:32 crc kubenswrapper[4972]: I1121 12:28:32.300867 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_375f2bb7-3c9f-46f2-812b-fc5325524d0b/setup-container/0.log" Nov 21 12:28:32 crc kubenswrapper[4972]: I1121 12:28:32.320047 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2f18efe1-6a41-4cbd-9ed4-889624248484/setup-container/0.log" Nov 21 12:28:32 crc kubenswrapper[4972]: I1121 12:28:32.361899 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_375f2bb7-3c9f-46f2-812b-fc5325524d0b/rabbitmq/0.log" Nov 21 12:28:32 crc kubenswrapper[4972]: I1121 12:28:32.589902 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2f18efe1-6a41-4cbd-9ed4-889624248484/setup-container/0.log" Nov 21 12:28:32 crc kubenswrapper[4972]: I1121 12:28:32.717706 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2f18efe1-6a41-4cbd-9ed4-889624248484/rabbitmq/0.log" Nov 21 12:28:32 crc kubenswrapper[4972]: I1121 12:28:32.779476 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-p4vfc_9269b8c1-5c0a-4bd4-9cea-210af0f082d0/reboot-os-openstack-openstack-cell1/0.log" Nov 21 12:28:32 crc kubenswrapper[4972]: I1121 12:28:32.781821 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-45bfc_0489a8cf-eca4-430a-a3e2-73fcfdc437c1/run-os-openstack-openstack-cell1/0.log" Nov 21 12:28:32 crc kubenswrapper[4972]: I1121 12:28:32.967865 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-jhnpb_d5bab5af-1d1c-4eb9-bae8-b02c4d98fd38/ssh-known-hosts-openstack/0.log" Nov 21 12:28:33 crc kubenswrapper[4972]: I1121 12:28:33.048166 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-j6j8b_2f0c2649-60a5-4c8b-8469-bd17ee2fac3f/telemetry-openstack-openstack-cell1/0.log" Nov 21 12:28:33 crc kubenswrapper[4972]: I1121 12:28:33.496388 4972 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-x5qbs_ea334c61-e29f-432c-99c2-6e8463dde290/validate-network-openstack-openstack-cell1/0.log" Nov 21 12:28:33 crc kubenswrapper[4972]: I1121 12:28:33.555191 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-d5m54_0b686359-266d-4e3b-b383-2ed81fb826ed/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Nov 21 12:28:44 crc kubenswrapper[4972]: I1121 12:28:44.760099 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:28:44 crc kubenswrapper[4972]: E1121 12:28:44.761646 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:28:55 crc kubenswrapper[4972]: I1121 12:28:55.440416 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm_aa3bf531-41ff-449f-a660-0886a7e8e87c/util/0.log" Nov 21 12:28:55 crc kubenswrapper[4972]: I1121 12:28:55.643452 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm_aa3bf531-41ff-449f-a660-0886a7e8e87c/pull/0.log" Nov 21 12:28:55 crc kubenswrapper[4972]: I1121 12:28:55.676098 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm_aa3bf531-41ff-449f-a660-0886a7e8e87c/pull/0.log" Nov 21 12:28:55 crc kubenswrapper[4972]: I1121 12:28:55.679289 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm_aa3bf531-41ff-449f-a660-0886a7e8e87c/util/0.log" Nov 21 12:28:55 crc kubenswrapper[4972]: I1121 12:28:55.770198 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:28:55 crc kubenswrapper[4972]: E1121 12:28:55.770855 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:28:55 crc kubenswrapper[4972]: I1121 12:28:55.902275 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm_aa3bf531-41ff-449f-a660-0886a7e8e87c/pull/0.log" Nov 21 12:28:55 crc kubenswrapper[4972]: I1121 12:28:55.911744 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm_aa3bf531-41ff-449f-a660-0886a7e8e87c/util/0.log" Nov 21 12:28:55 crc kubenswrapper[4972]: I1121 12:28:55.925370 4972 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_2f600d40645c3b5d5826f2c9954a3807468e61602beafe19f4b735f42d9w2sm_aa3bf531-41ff-449f-a660-0886a7e8e87c/extract/0.log" Nov 21 12:28:56 crc kubenswrapper[4972]: I1121 12:28:56.104491 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7768f8c84f-t6wbg_9304bb08-b481-42e5-89ed-f215f0102662/kube-rbac-proxy/0.log" Nov 21 12:28:56 crc kubenswrapper[4972]: I1121 12:28:56.166956 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6d8fd67bf7-g4s6z_6aea9dc6-2e85-470c-9897-064111a4661e/kube-rbac-proxy/0.log" Nov 21 12:28:56 crc kubenswrapper[4972]: I1121 12:28:56.196366 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7768f8c84f-t6wbg_9304bb08-b481-42e5-89ed-f215f0102662/manager/0.log" Nov 21 12:28:56 crc kubenswrapper[4972]: I1121 12:28:56.380281 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6d8fd67bf7-g4s6z_6aea9dc6-2e85-470c-9897-064111a4661e/manager/0.log" Nov 21 12:28:56 crc kubenswrapper[4972]: I1121 12:28:56.408542 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-56dfb6b67f-vhv8d_647ff257-bc23-44e5-9397-2696f04520d4/kube-rbac-proxy/0.log" Nov 21 12:28:56 crc kubenswrapper[4972]: I1121 12:28:56.409430 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-56dfb6b67f-vhv8d_647ff257-bc23-44e5-9397-2696f04520d4/manager/0.log" Nov 21 12:28:56 crc kubenswrapper[4972]: I1121 12:28:56.545413 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8667fbf6f6-nkg8q_56043c1e-adb8-4d37-9067-cb28a5103fdd/kube-rbac-proxy/0.log" Nov 21 12:28:56 crc kubenswrapper[4972]: I1121 12:28:56.745177 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8667fbf6f6-nkg8q_56043c1e-adb8-4d37-9067-cb28a5103fdd/manager/0.log" Nov 21 12:28:56 crc kubenswrapper[4972]: I1121 12:28:56.821051 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-bf4c6585d-jbvkg_4d93267d-02e5-4045-aa0a-5edf3730b4cf/kube-rbac-proxy/0.log" Nov 21 12:28:56 crc kubenswrapper[4972]: I1121 12:28:56.868121 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-bf4c6585d-jbvkg_4d93267d-02e5-4045-aa0a-5edf3730b4cf/manager/0.log" Nov 21 12:28:56 crc kubenswrapper[4972]: I1121 12:28:56.964445 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d86b44686-fdwgn_1d892fbc-e66c-405d-9d42-e306d9b652b8/kube-rbac-proxy/0.log" Nov 21 12:28:57 crc kubenswrapper[4972]: I1121 12:28:57.067046 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d86b44686-fdwgn_1d892fbc-e66c-405d-9d42-e306d9b652b8/manager/0.log" Nov 21 12:28:57 crc kubenswrapper[4972]: I1121 12:28:57.159810 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-769d9c7585-lx4b9_e8805826-cc65-49b4-ace4-44f25c209b4e/kube-rbac-proxy/0.log" Nov 21 12:28:57 crc kubenswrapper[4972]: I1121 12:28:57.311705 4972 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5c75d7c94b-llzpm_902981fa-5b52-4436-b685-18372dd43999/kube-rbac-proxy/0.log" Nov 21 12:28:57 crc kubenswrapper[4972]: I1121 12:28:57.414954 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5c75d7c94b-llzpm_902981fa-5b52-4436-b685-18372dd43999/manager/0.log" Nov 21 12:28:57 crc kubenswrapper[4972]: I1121 12:28:57.431627 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-769d9c7585-lx4b9_e8805826-cc65-49b4-ace4-44f25c209b4e/manager/0.log" Nov 21 12:28:57 crc kubenswrapper[4972]: I1121 12:28:57.535410 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7879fb76fd-n4vrp_0828ae93-40ce-46ac-a769-f5e4e735b186/kube-rbac-proxy/0.log" Nov 21 12:28:57 crc kubenswrapper[4972]: I1121 12:28:57.766045 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7879fb76fd-n4vrp_0828ae93-40ce-46ac-a769-f5e4e735b186/manager/0.log" Nov 21 12:28:58 crc kubenswrapper[4972]: I1121 12:28:58.309224 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7bb88cb858-7cn8g_40fab427-66b5-4519-b416-0d5f253e2c10/manager/0.log" Nov 21 12:28:58 crc kubenswrapper[4972]: I1121 12:28:58.348522 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7bb88cb858-7cn8g_40fab427-66b5-4519-b416-0d5f253e2c10/kube-rbac-proxy/0.log" Nov 21 12:28:58 crc kubenswrapper[4972]: I1121 12:28:58.508521 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6f8c5b86cb-chz4z_2a54b288-8941-4f22-bfe1-d99802311c60/kube-rbac-proxy/0.log" Nov 21 12:28:58 crc kubenswrapper[4972]: I1121 12:28:58.542408 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-66b7d6f598-7dh7w_6bbfd974-9b27-4143-9c5c-031c5a4f28b2/kube-rbac-proxy/0.log" Nov 21 12:28:58 crc kubenswrapper[4972]: I1121 12:28:58.558041 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6f8c5b86cb-chz4z_2a54b288-8941-4f22-bfe1-d99802311c60/manager/0.log" Nov 21 12:28:58 crc kubenswrapper[4972]: I1121 12:28:58.768863 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-66b7d6f598-7dh7w_6bbfd974-9b27-4143-9c5c-031c5a4f28b2/manager/0.log" Nov 21 12:28:58 crc kubenswrapper[4972]: I1121 12:28:58.781152 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-86d796d84d-ddgk9_2bfc0746-c4ed-4b34-991b-3b63f96614e6/kube-rbac-proxy/0.log" Nov 21 12:28:58 crc kubenswrapper[4972]: I1121 12:28:58.988709 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6fdc856c5d-lcbxj_237d24ba-9f0a-4d48-b416-7a7aa7692bbf/kube-rbac-proxy/0.log" Nov 21 12:28:59 crc kubenswrapper[4972]: I1121 12:28:59.043469 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-86d796d84d-ddgk9_2bfc0746-c4ed-4b34-991b-3b63f96614e6/manager/0.log" Nov 21 12:28:59 crc kubenswrapper[4972]: I1121 
12:28:59.091257 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6fdc856c5d-lcbxj_237d24ba-9f0a-4d48-b416-7a7aa7692bbf/manager/0.log" Nov 21 12:28:59 crc kubenswrapper[4972]: I1121 12:28:59.214608 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk_65d1f303-dffc-4de3-8192-4c74f4c33750/kube-rbac-proxy/0.log" Nov 21 12:28:59 crc kubenswrapper[4972]: I1121 12:28:59.226849 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6c655cdc6ctzxwk_65d1f303-dffc-4de3-8192-4c74f4c33750/manager/0.log" Nov 21 12:28:59 crc kubenswrapper[4972]: I1121 12:28:59.389361 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7755d5f8cc-z86dh_06f6d748-6a75-4e4a-b642-790efd655fac/kube-rbac-proxy/0.log" Nov 21 12:28:59 crc kubenswrapper[4972]: I1121 12:28:59.439417 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-77c7f689f5-np6v4_7a19389a-b34a-4c15-9dbe-b3b4da3b51d4/kube-rbac-proxy/0.log" Nov 21 12:28:59 crc kubenswrapper[4972]: I1121 12:28:59.694140 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-77c7f689f5-np6v4_7a19389a-b34a-4c15-9dbe-b3b4da3b51d4/operator/0.log" Nov 21 12:29:00 crc kubenswrapper[4972]: I1121 12:29:00.215172 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5bdf4f7f7f-nsdgb_a354b71d-0aa7-4cac-8022-90de180af97d/kube-rbac-proxy/0.log" Nov 21 12:29:00 crc kubenswrapper[4972]: I1121 12:29:00.299066 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-s7pn8_cdbf6692-280c-4fa4-8f94-cc2a0e29ef5f/registry-server/0.log" Nov 21 12:29:00 crc kubenswrapper[4972]: I1121 12:29:00.471966 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-6dc664666c-bxcdb_7a574f0d-6d00-487d-af74-19d886ccc174/kube-rbac-proxy/0.log" Nov 21 12:29:00 crc kubenswrapper[4972]: I1121 12:29:00.514339 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5bdf4f7f7f-nsdgb_a354b71d-0aa7-4cac-8022-90de180af97d/manager/0.log" Nov 21 12:29:00 crc kubenswrapper[4972]: I1121 12:29:00.576663 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-6dc664666c-bxcdb_7a574f0d-6d00-487d-af74-19d886ccc174/manager/0.log" Nov 21 12:29:00 crc kubenswrapper[4972]: I1121 12:29:00.717004 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-xsp8k_a4700bb1-902c-40bc-b02f-c8efe4893180/operator/0.log" Nov 21 12:29:00 crc kubenswrapper[4972]: I1121 12:29:00.848517 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-799cb6ffd6-92vm8_43051777-1676-4b54-8ddb-ba534e3d1b51/kube-rbac-proxy/0.log" Nov 21 12:29:00 crc kubenswrapper[4972]: I1121 12:29:00.953356 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-799cb6ffd6-92vm8_43051777-1676-4b54-8ddb-ba534e3d1b51/manager/0.log" Nov 21 12:29:01 
crc kubenswrapper[4972]: I1121 12:29:01.080902 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7798859c74-b2644_220363ab-e1b7-44ee-962d-e2de79b22ab6/kube-rbac-proxy/0.log" Nov 21 12:29:01 crc kubenswrapper[4972]: I1121 12:29:01.265055 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8464cf66df-66m2r_12710259-8a41-4aa6-8842-54b6ac9aad22/kube-rbac-proxy/0.log" Nov 21 12:29:01 crc kubenswrapper[4972]: I1121 12:29:01.451404 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7798859c74-b2644_220363ab-e1b7-44ee-962d-e2de79b22ab6/manager/0.log" Nov 21 12:29:01 crc kubenswrapper[4972]: I1121 12:29:01.567780 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8464cf66df-66m2r_12710259-8a41-4aa6-8842-54b6ac9aad22/manager/0.log" Nov 21 12:29:01 crc kubenswrapper[4972]: I1121 12:29:01.689767 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7cd4fb6f79-2kq6z_3efec5f8-69ac-4971-a9e5-2c53352cabed/kube-rbac-proxy/0.log" Nov 21 12:29:01 crc kubenswrapper[4972]: I1121 12:29:01.785168 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7cd4fb6f79-2kq6z_3efec5f8-69ac-4971-a9e5-2c53352cabed/manager/0.log" Nov 21 12:29:01 crc kubenswrapper[4972]: I1121 12:29:01.870043 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7755d5f8cc-z86dh_06f6d748-6a75-4e4a-b642-790efd655fac/manager/0.log" Nov 21 12:29:09 crc kubenswrapper[4972]: I1121 12:29:09.759580 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:29:09 crc kubenswrapper[4972]: E1121 12:29:09.761710 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:29:18 crc kubenswrapper[4972]: I1121 12:29:18.054696 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-6hlzb_1fc3fe65-482e-43ed-9669-7849bfc0bfd2/control-plane-machine-set-operator/0.log" Nov 21 12:29:18 crc kubenswrapper[4972]: I1121 12:29:18.235524 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jdqq6_3897d9bc-e576-4575-8451-10a0e3a73517/kube-rbac-proxy/0.log" Nov 21 12:29:18 crc kubenswrapper[4972]: I1121 12:29:18.298627 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jdqq6_3897d9bc-e576-4575-8451-10a0e3a73517/machine-api-operator/0.log" Nov 21 12:29:23 crc kubenswrapper[4972]: I1121 12:29:23.759880 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:29:23 crc kubenswrapper[4972]: E1121 12:29:23.760801 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:29:30 crc kubenswrapper[4972]: I1121 12:29:30.244447 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-zbmqj_619fcbfa-aa5a-4e42-8c95-6bf5ad357cee/cert-manager-controller/0.log" Nov 21 12:29:30 crc kubenswrapper[4972]: I1121 12:29:30.309154 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-sfwsh_dd3b7a0f-6e23-4379-bc20-83d489a6a650/cert-manager-cainjector/0.log" Nov 21 12:29:30 crc kubenswrapper[4972]: I1121 12:29:30.348365 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-25b5p_bc21339d-081f-4b15-b46f-fd322a3c938d/cert-manager-webhook/0.log" Nov 21 12:29:37 crc kubenswrapper[4972]: I1121 12:29:37.760031 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:29:37 crc kubenswrapper[4972]: E1121 12:29:37.760714 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:29:44 crc kubenswrapper[4972]: I1121 12:29:44.485366 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-j4g7f_d45e5a89-dbd0-49f3-a285-a8d14e35d7de/nmstate-console-plugin/0.log" Nov 21 12:29:44 crc kubenswrapper[4972]: I1121 12:29:44.671257 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-8ddtn_8db64828-f701-4320-9c0e-1d2897bdfa94/kube-rbac-proxy/0.log" Nov 21 12:29:44 crc kubenswrapper[4972]: I1121 12:29:44.707500 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-zj289_289a84f1-0b97-4282-8a9a-643bfb19b117/nmstate-handler/0.log" Nov 21 12:29:44 crc kubenswrapper[4972]: I1121 12:29:44.750032 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-8ddtn_8db64828-f701-4320-9c0e-1d2897bdfa94/nmstate-metrics/0.log" Nov 21 12:29:44 crc kubenswrapper[4972]: I1121 12:29:44.940123 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-m89vw_35df56e5-4739-474c-af19-8b79bd18c12c/nmstate-operator/0.log" Nov 21 12:29:44 crc kubenswrapper[4972]: I1121 12:29:44.995646 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-76brj_875c8c14-4fbd-4041-93d1-9fc99e815156/nmstate-webhook/0.log" Nov 21 12:29:50 crc kubenswrapper[4972]: I1121 12:29:50.759522 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:29:50 crc kubenswrapper[4972]: E1121 12:29:50.760756 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.169028 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395470-xz88d"] Nov 21 12:30:00 crc kubenswrapper[4972]: E1121 12:30:00.170480 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773899b9-d381-4d9d-8f49-8c9cd48acaf2" containerName="extract-utilities" Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.170508 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="773899b9-d381-4d9d-8f49-8c9cd48acaf2" containerName="extract-utilities" Nov 21 12:30:00 crc kubenswrapper[4972]: E1121 12:30:00.170535 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e" containerName="container-00" Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.170543 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e" containerName="container-00" Nov 21 12:30:00 crc kubenswrapper[4972]: E1121 12:30:00.170581 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773899b9-d381-4d9d-8f49-8c9cd48acaf2" containerName="extract-content" Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.170590 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="773899b9-d381-4d9d-8f49-8c9cd48acaf2" containerName="extract-content" Nov 21 12:30:00 crc kubenswrapper[4972]: E1121 12:30:00.170605 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773899b9-d381-4d9d-8f49-8c9cd48acaf2" containerName="registry-server" Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.170612 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="773899b9-d381-4d9d-8f49-8c9cd48acaf2" containerName="registry-server" Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.170910 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dbc4533-0d6e-4bf5-9fc6-e0448c83ee8e" containerName="container-00" Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.170925 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="773899b9-d381-4d9d-8f49-8c9cd48acaf2" containerName="registry-server" Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.173784 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395470-xz88d" Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.178333 4972 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.186881 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395470-xz88d"] Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.188479 4972 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.323746 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z65qq\" (UniqueName: \"kubernetes.io/projected/2d7237fa-9246-48fa-9bfd-2c4574fc89e3-kube-api-access-z65qq\") pod \"collect-profiles-29395470-xz88d\" (UID: \"2d7237fa-9246-48fa-9bfd-2c4574fc89e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395470-xz88d" Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.324140 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d7237fa-9246-48fa-9bfd-2c4574fc89e3-config-volume\") pod \"collect-profiles-29395470-xz88d\" (UID: \"2d7237fa-9246-48fa-9bfd-2c4574fc89e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395470-xz88d" Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.324266 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d7237fa-9246-48fa-9bfd-2c4574fc89e3-secret-volume\") pod \"collect-profiles-29395470-xz88d\" (UID: \"2d7237fa-9246-48fa-9bfd-2c4574fc89e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395470-xz88d" Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.426726 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z65qq\" (UniqueName: \"kubernetes.io/projected/2d7237fa-9246-48fa-9bfd-2c4574fc89e3-kube-api-access-z65qq\") pod \"collect-profiles-29395470-xz88d\" (UID: \"2d7237fa-9246-48fa-9bfd-2c4574fc89e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395470-xz88d" Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.426878 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d7237fa-9246-48fa-9bfd-2c4574fc89e3-config-volume\") pod \"collect-profiles-29395470-xz88d\" (UID: \"2d7237fa-9246-48fa-9bfd-2c4574fc89e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395470-xz88d" Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.426920 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d7237fa-9246-48fa-9bfd-2c4574fc89e3-secret-volume\") pod \"collect-profiles-29395470-xz88d\" (UID: \"2d7237fa-9246-48fa-9bfd-2c4574fc89e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395470-xz88d" Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.428059 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d7237fa-9246-48fa-9bfd-2c4574fc89e3-config-volume\") pod 
\"collect-profiles-29395470-xz88d\" (UID: \"2d7237fa-9246-48fa-9bfd-2c4574fc89e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395470-xz88d" Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.434609 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d7237fa-9246-48fa-9bfd-2c4574fc89e3-secret-volume\") pod \"collect-profiles-29395470-xz88d\" (UID: \"2d7237fa-9246-48fa-9bfd-2c4574fc89e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395470-xz88d" Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.458458 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z65qq\" (UniqueName: \"kubernetes.io/projected/2d7237fa-9246-48fa-9bfd-2c4574fc89e3-kube-api-access-z65qq\") pod \"collect-profiles-29395470-xz88d\" (UID: \"2d7237fa-9246-48fa-9bfd-2c4574fc89e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395470-xz88d" Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.505785 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395470-xz88d" Nov 21 12:30:00 crc kubenswrapper[4972]: I1121 12:30:00.939112 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-gvl4b_515ef80a-c079-44e2-ba9a-cef67b0a5965/kube-rbac-proxy/0.log" Nov 21 12:30:01 crc kubenswrapper[4972]: I1121 12:30:01.101687 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395470-xz88d"] Nov 21 12:30:01 crc kubenswrapper[4972]: I1121 12:30:01.236450 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hqx_7a4cb27c-a1d4-49dd-935d-8ee648d8349f/cp-frr-files/0.log" Nov 21 12:30:01 crc kubenswrapper[4972]: I1121 12:30:01.421325 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-gvl4b_515ef80a-c079-44e2-ba9a-cef67b0a5965/controller/0.log" Nov 21 12:30:01 crc kubenswrapper[4972]: I1121 12:30:01.486307 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hqx_7a4cb27c-a1d4-49dd-935d-8ee648d8349f/cp-reloader/0.log" Nov 21 12:30:01 crc kubenswrapper[4972]: I1121 12:30:01.489338 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hqx_7a4cb27c-a1d4-49dd-935d-8ee648d8349f/cp-frr-files/0.log" Nov 21 12:30:01 crc kubenswrapper[4972]: I1121 12:30:01.525858 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hqx_7a4cb27c-a1d4-49dd-935d-8ee648d8349f/cp-metrics/0.log" Nov 21 12:30:01 crc kubenswrapper[4972]: I1121 12:30:01.549866 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395470-xz88d" event={"ID":"2d7237fa-9246-48fa-9bfd-2c4574fc89e3","Type":"ContainerStarted","Data":"94ca7249699256571b05e8f228edd1cb5f20624b0f61e409471d071033e4857e"} Nov 21 12:30:01 crc kubenswrapper[4972]: I1121 12:30:01.550152 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395470-xz88d" event={"ID":"2d7237fa-9246-48fa-9bfd-2c4574fc89e3","Type":"ContainerStarted","Data":"474b7029af58d98774d2ac359b6816917e448dbc386106beab34e5fb20c1d7a0"} Nov 21 12:30:01 crc kubenswrapper[4972]: I1121 12:30:01.573362 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29395470-xz88d" podStartSLOduration=1.573341919 podStartE2EDuration="1.573341919s" podCreationTimestamp="2025-11-21 12:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 12:30:01.568287835 +0000 UTC m=+10146.677430343" watchObservedRunningTime="2025-11-21 12:30:01.573341919 +0000 UTC m=+10146.682484417" Nov 21 12:30:01 crc kubenswrapper[4972]: I1121 12:30:01.719498 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hqx_7a4cb27c-a1d4-49dd-935d-8ee648d8349f/cp-reloader/0.log" Nov 21 12:30:01 crc kubenswrapper[4972]: I1121 12:30:01.761149 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:30:01 crc kubenswrapper[4972]: E1121 12:30:01.761408 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:30:01 crc kubenswrapper[4972]: I1121 12:30:01.932732 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hqx_7a4cb27c-a1d4-49dd-935d-8ee648d8349f/cp-metrics/0.log" Nov 21 12:30:01 crc kubenswrapper[4972]: I1121 12:30:01.955011 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hqx_7a4cb27c-a1d4-49dd-935d-8ee648d8349f/cp-frr-files/0.log" Nov 21 12:30:01 crc kubenswrapper[4972]: I1121 12:30:01.962859 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hqx_7a4cb27c-a1d4-49dd-935d-8ee648d8349f/cp-reloader/0.log" Nov 21 12:30:02 crc kubenswrapper[4972]: I1121 12:30:02.006704 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hqx_7a4cb27c-a1d4-49dd-935d-8ee648d8349f/cp-metrics/0.log" Nov 21 12:30:02 crc kubenswrapper[4972]: I1121 12:30:02.127470 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hqx_7a4cb27c-a1d4-49dd-935d-8ee648d8349f/cp-frr-files/0.log" Nov 21 12:30:02 crc kubenswrapper[4972]: I1121 12:30:02.134018 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hqx_7a4cb27c-a1d4-49dd-935d-8ee648d8349f/cp-reloader/0.log" Nov 21 12:30:02 crc kubenswrapper[4972]: I1121 12:30:02.182551 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hqx_7a4cb27c-a1d4-49dd-935d-8ee648d8349f/cp-metrics/0.log" Nov 21 12:30:02 crc kubenswrapper[4972]: I1121 12:30:02.231889 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hqx_7a4cb27c-a1d4-49dd-935d-8ee648d8349f/controller/0.log" Nov 21 12:30:02 crc kubenswrapper[4972]: I1121 12:30:02.312758 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hqx_7a4cb27c-a1d4-49dd-935d-8ee648d8349f/frr-metrics/0.log" Nov 21 12:30:02 crc kubenswrapper[4972]: I1121 12:30:02.459624 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hqx_7a4cb27c-a1d4-49dd-935d-8ee648d8349f/kube-rbac-proxy/0.log" Nov 21 12:30:02 crc kubenswrapper[4972]: I1121 12:30:02.459791 4972 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hqx_7a4cb27c-a1d4-49dd-935d-8ee648d8349f/kube-rbac-proxy-frr/0.log" Nov 21 12:30:02 crc kubenswrapper[4972]: I1121 12:30:02.572902 4972 generic.go:334] "Generic (PLEG): container finished" podID="2d7237fa-9246-48fa-9bfd-2c4574fc89e3" containerID="94ca7249699256571b05e8f228edd1cb5f20624b0f61e409471d071033e4857e" exitCode=0 Nov 21 12:30:02 crc kubenswrapper[4972]: I1121 12:30:02.573249 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395470-xz88d" event={"ID":"2d7237fa-9246-48fa-9bfd-2c4574fc89e3","Type":"ContainerDied","Data":"94ca7249699256571b05e8f228edd1cb5f20624b0f61e409471d071033e4857e"} Nov 21 12:30:02 crc kubenswrapper[4972]: I1121 12:30:02.605961 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hqx_7a4cb27c-a1d4-49dd-935d-8ee648d8349f/reloader/0.log" Nov 21 12:30:02 crc kubenswrapper[4972]: I1121 12:30:02.738428 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-c8llb_e74efb92-2741-41d9-a2aa-01e53dc1492c/frr-k8s-webhook-server/0.log" Nov 21 12:30:02 crc kubenswrapper[4972]: I1121 12:30:02.924562 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-55cdd8d9bf-nk57h_d67eba6d-5d34-4253-8718-a833c7b43c41/manager/0.log" Nov 21 12:30:03 crc kubenswrapper[4972]: I1121 12:30:03.112152 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-dcc4dbb97-cc8w9_b654f52d-03b3-4047-bc73-89cbcf0a1d00/webhook-server/0.log" Nov 21 12:30:03 crc kubenswrapper[4972]: I1121 12:30:03.155599 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-n54ks_1d3b3072-9fd2-451d-83e1-4a7962179659/kube-rbac-proxy/0.log" Nov 21 12:30:04 crc kubenswrapper[4972]: I1121 12:30:04.061789 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395470-xz88d" Nov 21 12:30:04 crc kubenswrapper[4972]: I1121 12:30:04.217956 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d7237fa-9246-48fa-9bfd-2c4574fc89e3-secret-volume\") pod \"2d7237fa-9246-48fa-9bfd-2c4574fc89e3\" (UID: \"2d7237fa-9246-48fa-9bfd-2c4574fc89e3\") " Nov 21 12:30:04 crc kubenswrapper[4972]: I1121 12:30:04.218127 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d7237fa-9246-48fa-9bfd-2c4574fc89e3-config-volume\") pod \"2d7237fa-9246-48fa-9bfd-2c4574fc89e3\" (UID: \"2d7237fa-9246-48fa-9bfd-2c4574fc89e3\") " Nov 21 12:30:04 crc kubenswrapper[4972]: I1121 12:30:04.218271 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z65qq\" (UniqueName: \"kubernetes.io/projected/2d7237fa-9246-48fa-9bfd-2c4574fc89e3-kube-api-access-z65qq\") pod \"2d7237fa-9246-48fa-9bfd-2c4574fc89e3\" (UID: \"2d7237fa-9246-48fa-9bfd-2c4574fc89e3\") " Nov 21 12:30:04 crc kubenswrapper[4972]: I1121 12:30:04.222152 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d7237fa-9246-48fa-9bfd-2c4574fc89e3-config-volume" (OuterVolumeSpecName: "config-volume") pod "2d7237fa-9246-48fa-9bfd-2c4574fc89e3" (UID: "2d7237fa-9246-48fa-9bfd-2c4574fc89e3"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 12:30:04 crc kubenswrapper[4972]: I1121 12:30:04.228991 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d7237fa-9246-48fa-9bfd-2c4574fc89e3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2d7237fa-9246-48fa-9bfd-2c4574fc89e3" (UID: "2d7237fa-9246-48fa-9bfd-2c4574fc89e3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 12:30:04 crc kubenswrapper[4972]: I1121 12:30:04.231311 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d7237fa-9246-48fa-9bfd-2c4574fc89e3-kube-api-access-z65qq" (OuterVolumeSpecName: "kube-api-access-z65qq") pod "2d7237fa-9246-48fa-9bfd-2c4574fc89e3" (UID: "2d7237fa-9246-48fa-9bfd-2c4574fc89e3"). InnerVolumeSpecName "kube-api-access-z65qq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:30:04 crc kubenswrapper[4972]: I1121 12:30:04.320912 4972 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d7237fa-9246-48fa-9bfd-2c4574fc89e3-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 12:30:04 crc kubenswrapper[4972]: I1121 12:30:04.320950 4972 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d7237fa-9246-48fa-9bfd-2c4574fc89e3-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 12:30:04 crc kubenswrapper[4972]: I1121 12:30:04.320960 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z65qq\" (UniqueName: \"kubernetes.io/projected/2d7237fa-9246-48fa-9bfd-2c4574fc89e3-kube-api-access-z65qq\") on node \"crc\" DevicePath \"\"" Nov 21 12:30:04 crc kubenswrapper[4972]: I1121 12:30:04.503334 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-n54ks_1d3b3072-9fd2-451d-83e1-4a7962179659/speaker/0.log" Nov 21 12:30:04 crc kubenswrapper[4972]: I1121 12:30:04.598659 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395470-xz88d" event={"ID":"2d7237fa-9246-48fa-9bfd-2c4574fc89e3","Type":"ContainerDied","Data":"474b7029af58d98774d2ac359b6816917e448dbc386106beab34e5fb20c1d7a0"} Nov 21 12:30:04 crc kubenswrapper[4972]: I1121 12:30:04.598709 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="474b7029af58d98774d2ac359b6816917e448dbc386106beab34e5fb20c1d7a0" Nov 21 12:30:04 crc kubenswrapper[4972]: I1121 12:30:04.598788 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395470-xz88d" Nov 21 12:30:04 crc kubenswrapper[4972]: I1121 12:30:04.653301 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395425-xv7q5"] Nov 21 12:30:04 crc kubenswrapper[4972]: I1121 12:30:04.663967 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395425-xv7q5"] Nov 21 12:30:05 crc kubenswrapper[4972]: I1121 12:30:05.770668 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7c92b46-f7f0-4914-8c54-2192c7997dee" path="/var/lib/kubelet/pods/c7c92b46-f7f0-4914-8c54-2192c7997dee/volumes" Nov 21 12:30:05 crc kubenswrapper[4972]: I1121 12:30:05.828860 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-q5hqx_7a4cb27c-a1d4-49dd-935d-8ee648d8349f/frr/0.log" Nov 21 12:30:14 crc kubenswrapper[4972]: I1121 12:30:14.759190 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:30:14 crc kubenswrapper[4972]: E1121 12:30:14.760164 4972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9l6cj_openshift-machine-config-operator(ec41c003-c1ce-4c2f-8eed-62ff2974cd8a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" Nov 21 12:30:19 crc kubenswrapper[4972]: I1121 12:30:19.214098 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg_58bf36c5-f150-4320-ba2f-7b728bd1cc43/util/0.log" Nov 21 12:30:19 crc kubenswrapper[4972]: I1121 12:30:19.438990 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg_58bf36c5-f150-4320-ba2f-7b728bd1cc43/pull/0.log" Nov 21 12:30:19 crc kubenswrapper[4972]: I1121 12:30:19.449357 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg_58bf36c5-f150-4320-ba2f-7b728bd1cc43/util/0.log" Nov 21 12:30:19 crc kubenswrapper[4972]: I1121 12:30:19.473928 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg_58bf36c5-f150-4320-ba2f-7b728bd1cc43/pull/0.log" Nov 21 12:30:19 crc kubenswrapper[4972]: I1121 12:30:19.606192 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg_58bf36c5-f150-4320-ba2f-7b728bd1cc43/extract/0.log" Nov 21 12:30:19 crc kubenswrapper[4972]: I1121 12:30:19.644146 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg_58bf36c5-f150-4320-ba2f-7b728bd1cc43/pull/0.log" Nov 21 12:30:19 crc kubenswrapper[4972]: I1121 12:30:19.661821 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adlnkg_58bf36c5-f150-4320-ba2f-7b728bd1cc43/util/0.log" Nov 21 12:30:19 crc kubenswrapper[4972]: I1121 12:30:19.860410 4972 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8_93746499-c9a7-415b-9525-d4f061c35e89/util/0.log" Nov 21 12:30:19 crc kubenswrapper[4972]: I1121 12:30:19.977515 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8_93746499-c9a7-415b-9525-d4f061c35e89/util/0.log" Nov 21 12:30:19 crc kubenswrapper[4972]: I1121 12:30:19.983415 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8_93746499-c9a7-415b-9525-d4f061c35e89/pull/0.log" Nov 21 12:30:19 crc kubenswrapper[4972]: I1121 12:30:19.998941 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8_93746499-c9a7-415b-9525-d4f061c35e89/pull/0.log" Nov 21 12:30:20 crc kubenswrapper[4972]: I1121 12:30:20.177627 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8_93746499-c9a7-415b-9525-d4f061c35e89/extract/0.log" Nov 21 12:30:20 crc kubenswrapper[4972]: I1121 12:30:20.229161 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8_93746499-c9a7-415b-9525-d4f061c35e89/pull/0.log" Nov 21 12:30:20 crc kubenswrapper[4972]: I1121 12:30:20.249212 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772effdk8_93746499-c9a7-415b-9525-d4f061c35e89/util/0.log" Nov 21 12:30:20 crc kubenswrapper[4972]: I1121 12:30:20.800538 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x_00d4fc91-afa5-4d86-ab17-1f1d77fba16a/util/0.log" Nov 21 12:30:20 crc kubenswrapper[4972]: I1121 12:30:20.966876 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x_00d4fc91-afa5-4d86-ab17-1f1d77fba16a/util/0.log" Nov 21 12:30:21 crc kubenswrapper[4972]: I1121 12:30:21.009705 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x_00d4fc91-afa5-4d86-ab17-1f1d77fba16a/pull/0.log" Nov 21 12:30:21 crc kubenswrapper[4972]: I1121 12:30:21.009776 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x_00d4fc91-afa5-4d86-ab17-1f1d77fba16a/pull/0.log" Nov 21 12:30:21 crc kubenswrapper[4972]: I1121 12:30:21.157735 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x_00d4fc91-afa5-4d86-ab17-1f1d77fba16a/util/0.log" Nov 21 12:30:21 crc kubenswrapper[4972]: I1121 12:30:21.158172 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x_00d4fc91-afa5-4d86-ab17-1f1d77fba16a/pull/0.log" Nov 21 12:30:21 crc kubenswrapper[4972]: I1121 12:30:21.217726 4972 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107vk9x_00d4fc91-afa5-4d86-ab17-1f1d77fba16a/extract/0.log" Nov 21 12:30:21 crc kubenswrapper[4972]: I1121 12:30:21.359597 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-284zp_79521035-8282-43fd-9325-623a9e8d0a5e/extract-utilities/0.log" Nov 21 12:30:21 crc kubenswrapper[4972]: I1121 12:30:21.521768 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-284zp_79521035-8282-43fd-9325-623a9e8d0a5e/extract-content/0.log" Nov 21 12:30:21 crc kubenswrapper[4972]: I1121 12:30:21.535602 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-284zp_79521035-8282-43fd-9325-623a9e8d0a5e/extract-content/0.log" Nov 21 12:30:21 crc kubenswrapper[4972]: I1121 12:30:21.545383 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-284zp_79521035-8282-43fd-9325-623a9e8d0a5e/extract-utilities/0.log" Nov 21 12:30:21 crc kubenswrapper[4972]: I1121 12:30:21.743434 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-284zp_79521035-8282-43fd-9325-623a9e8d0a5e/extract-utilities/0.log" Nov 21 12:30:21 crc kubenswrapper[4972]: I1121 12:30:21.800491 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-284zp_79521035-8282-43fd-9325-623a9e8d0a5e/extract-content/0.log" Nov 21 12:30:21 crc kubenswrapper[4972]: I1121 12:30:21.964323 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fhnp6_319a09ca-a083-4445-b93d-033057a1949e/extract-utilities/0.log" Nov 21 12:30:22 crc kubenswrapper[4972]: I1121 12:30:22.139994 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fhnp6_319a09ca-a083-4445-b93d-033057a1949e/extract-utilities/0.log" Nov 21 12:30:22 crc kubenswrapper[4972]: I1121 12:30:22.229505 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fhnp6_319a09ca-a083-4445-b93d-033057a1949e/extract-content/0.log" Nov 21 12:30:22 crc kubenswrapper[4972]: I1121 12:30:22.248578 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fhnp6_319a09ca-a083-4445-b93d-033057a1949e/extract-content/0.log" Nov 21 12:30:22 crc kubenswrapper[4972]: I1121 12:30:22.418314 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fhnp6_319a09ca-a083-4445-b93d-033057a1949e/extract-utilities/0.log" Nov 21 12:30:22 crc kubenswrapper[4972]: I1121 12:30:22.472053 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fhnp6_319a09ca-a083-4445-b93d-033057a1949e/extract-content/0.log" Nov 21 12:30:22 crc kubenswrapper[4972]: I1121 12:30:22.674040 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs_d16d14b4-fe18-4865-a9db-e203aeb6ed09/util/0.log" Nov 21 12:30:22 crc kubenswrapper[4972]: I1121 12:30:22.948499 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs_d16d14b4-fe18-4865-a9db-e203aeb6ed09/util/0.log" Nov 21 12:30:23 crc kubenswrapper[4972]: 
I1121 12:30:23.000813 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs_d16d14b4-fe18-4865-a9db-e203aeb6ed09/pull/0.log" Nov 21 12:30:23 crc kubenswrapper[4972]: I1121 12:30:23.014368 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs_d16d14b4-fe18-4865-a9db-e203aeb6ed09/pull/0.log" Nov 21 12:30:23 crc kubenswrapper[4972]: I1121 12:30:23.177678 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-284zp_79521035-8282-43fd-9325-623a9e8d0a5e/registry-server/0.log" Nov 21 12:30:23 crc kubenswrapper[4972]: I1121 12:30:23.238384 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs_d16d14b4-fe18-4865-a9db-e203aeb6ed09/extract/0.log" Nov 21 12:30:23 crc kubenswrapper[4972]: I1121 12:30:23.261693 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs_d16d14b4-fe18-4865-a9db-e203aeb6ed09/util/0.log" Nov 21 12:30:23 crc kubenswrapper[4972]: I1121 12:30:23.301103 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6695qs_d16d14b4-fe18-4865-a9db-e203aeb6ed09/pull/0.log" Nov 21 12:30:23 crc kubenswrapper[4972]: I1121 12:30:23.676563 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fhnp6_319a09ca-a083-4445-b93d-033057a1949e/registry-server/0.log" Nov 21 12:30:23 crc kubenswrapper[4972]: I1121 12:30:23.742511 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-2wpdc_6e85a9ad-624e-40f7-9084-3be164ba8fb2/marketplace-operator/0.log" Nov 21 12:30:23 crc kubenswrapper[4972]: I1121 12:30:23.844416 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m4qfs_b698e278-793a-414e-9b74-54abd348e37a/extract-utilities/0.log" Nov 21 12:30:23 crc kubenswrapper[4972]: I1121 12:30:23.958554 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m4qfs_b698e278-793a-414e-9b74-54abd348e37a/extract-utilities/0.log" Nov 21 12:30:24 crc kubenswrapper[4972]: I1121 12:30:24.013242 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m4qfs_b698e278-793a-414e-9b74-54abd348e37a/extract-content/0.log" Nov 21 12:30:24 crc kubenswrapper[4972]: I1121 12:30:24.032128 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m4qfs_b698e278-793a-414e-9b74-54abd348e37a/extract-content/0.log" Nov 21 12:30:24 crc kubenswrapper[4972]: I1121 12:30:24.154394 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m4qfs_b698e278-793a-414e-9b74-54abd348e37a/extract-utilities/0.log" Nov 21 12:30:24 crc kubenswrapper[4972]: I1121 12:30:24.207678 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m4qfs_b698e278-793a-414e-9b74-54abd348e37a/extract-content/0.log" Nov 21 12:30:24 crc kubenswrapper[4972]: I1121 12:30:24.232512 4972 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-tp9g9_e4909061-0974-4269-bdb7-5617d42e01af/extract-utilities/0.log" Nov 21 12:30:24 crc kubenswrapper[4972]: I1121 12:30:24.512464 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tp9g9_e4909061-0974-4269-bdb7-5617d42e01af/extract-content/0.log" Nov 21 12:30:24 crc kubenswrapper[4972]: I1121 12:30:24.543208 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tp9g9_e4909061-0974-4269-bdb7-5617d42e01af/extract-utilities/0.log" Nov 21 12:30:24 crc kubenswrapper[4972]: I1121 12:30:24.550318 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tp9g9_e4909061-0974-4269-bdb7-5617d42e01af/extract-content/0.log" Nov 21 12:30:24 crc kubenswrapper[4972]: I1121 12:30:24.650436 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m4qfs_b698e278-793a-414e-9b74-54abd348e37a/registry-server/0.log" Nov 21 12:30:24 crc kubenswrapper[4972]: I1121 12:30:24.704367 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tp9g9_e4909061-0974-4269-bdb7-5617d42e01af/extract-utilities/0.log" Nov 21 12:30:24 crc kubenswrapper[4972]: I1121 12:30:24.721243 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tp9g9_e4909061-0974-4269-bdb7-5617d42e01af/extract-content/0.log" Nov 21 12:30:26 crc kubenswrapper[4972]: I1121 12:30:26.411501 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tp9g9_e4909061-0974-4269-bdb7-5617d42e01af/registry-server/0.log" Nov 21 12:30:29 crc kubenswrapper[4972]: I1121 12:30:29.760007 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:30:30 crc kubenswrapper[4972]: I1121 12:30:30.890430 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"99df28f1f738cddd09769c2aa4ce824fee9a646c6ffeb05938958550f2dc3b19"} Nov 21 12:30:38 crc kubenswrapper[4972]: I1121 12:30:38.999308 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-s2rhw_c0081501-c78f-4c85-9d02-643b0f84c963/prometheus-operator/0.log" Nov 21 12:30:39 crc kubenswrapper[4972]: I1121 12:30:39.231595 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57589db548-pw8l8_093e7801-c881-4ace-9120-e0495a2b1dc8/prometheus-operator-admission-webhook/0.log" Nov 21 12:30:39 crc kubenswrapper[4972]: I1121 12:30:39.240321 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57589db548-cqfwh_2262607f-d34e-4a47-877a-d04cbf0e72c6/prometheus-operator-admission-webhook/0.log" Nov 21 12:30:39 crc kubenswrapper[4972]: I1121 12:30:39.482562 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-xlg99_f313fdb1-cc0d-4907-baee-4e61a4b7e209/operator/0.log" Nov 21 12:30:39 crc kubenswrapper[4972]: I1121 12:30:39.507058 4972 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-v6fh4_f4486942-dcd4-4a64-8490-190bb54e8fd9/perses-operator/0.log" Nov 21 12:30:40 crc kubenswrapper[4972]: I1121 12:30:40.656344 4972 scope.go:117] "RemoveContainer" containerID="9c19f18541a59b02b79ea06c99f10cb13c912040e39757c1bbf891d3e2f3c14e" Nov 21 12:30:43 crc kubenswrapper[4972]: I1121 12:30:43.775978 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ld5bk"] Nov 21 12:30:43 crc kubenswrapper[4972]: E1121 12:30:43.777721 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d7237fa-9246-48fa-9bfd-2c4574fc89e3" containerName="collect-profiles" Nov 21 12:30:43 crc kubenswrapper[4972]: I1121 12:30:43.777737 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7237fa-9246-48fa-9bfd-2c4574fc89e3" containerName="collect-profiles" Nov 21 12:30:43 crc kubenswrapper[4972]: I1121 12:30:43.778010 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d7237fa-9246-48fa-9bfd-2c4574fc89e3" containerName="collect-profiles" Nov 21 12:30:43 crc kubenswrapper[4972]: I1121 12:30:43.779546 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ld5bk" Nov 21 12:30:43 crc kubenswrapper[4972]: I1121 12:30:43.791065 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ld5bk"] Nov 21 12:30:43 crc kubenswrapper[4972]: I1121 12:30:43.937548 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0938dc34-ad96-4092-a78f-ba799c180128-utilities\") pod \"redhat-operators-ld5bk\" (UID: \"0938dc34-ad96-4092-a78f-ba799c180128\") " pod="openshift-marketplace/redhat-operators-ld5bk" Nov 21 12:30:43 crc kubenswrapper[4972]: I1121 12:30:43.937624 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrkq8\" (UniqueName: \"kubernetes.io/projected/0938dc34-ad96-4092-a78f-ba799c180128-kube-api-access-mrkq8\") pod \"redhat-operators-ld5bk\" (UID: \"0938dc34-ad96-4092-a78f-ba799c180128\") " pod="openshift-marketplace/redhat-operators-ld5bk" Nov 21 12:30:43 crc kubenswrapper[4972]: I1121 12:30:43.937654 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0938dc34-ad96-4092-a78f-ba799c180128-catalog-content\") pod \"redhat-operators-ld5bk\" (UID: \"0938dc34-ad96-4092-a78f-ba799c180128\") " pod="openshift-marketplace/redhat-operators-ld5bk" Nov 21 12:30:44 crc kubenswrapper[4972]: I1121 12:30:44.039341 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0938dc34-ad96-4092-a78f-ba799c180128-utilities\") pod \"redhat-operators-ld5bk\" (UID: \"0938dc34-ad96-4092-a78f-ba799c180128\") " pod="openshift-marketplace/redhat-operators-ld5bk" Nov 21 12:30:44 crc kubenswrapper[4972]: I1121 12:30:44.039870 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0938dc34-ad96-4092-a78f-ba799c180128-utilities\") pod \"redhat-operators-ld5bk\" (UID: \"0938dc34-ad96-4092-a78f-ba799c180128\") " pod="openshift-marketplace/redhat-operators-ld5bk" Nov 21 12:30:44 crc kubenswrapper[4972]: I1121 12:30:44.039967 4972 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-mrkq8\" (UniqueName: \"kubernetes.io/projected/0938dc34-ad96-4092-a78f-ba799c180128-kube-api-access-mrkq8\") pod \"redhat-operators-ld5bk\" (UID: \"0938dc34-ad96-4092-a78f-ba799c180128\") " pod="openshift-marketplace/redhat-operators-ld5bk" Nov 21 12:30:44 crc kubenswrapper[4972]: I1121 12:30:44.040017 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0938dc34-ad96-4092-a78f-ba799c180128-catalog-content\") pod \"redhat-operators-ld5bk\" (UID: \"0938dc34-ad96-4092-a78f-ba799c180128\") " pod="openshift-marketplace/redhat-operators-ld5bk" Nov 21 12:30:44 crc kubenswrapper[4972]: I1121 12:30:44.040356 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0938dc34-ad96-4092-a78f-ba799c180128-catalog-content\") pod \"redhat-operators-ld5bk\" (UID: \"0938dc34-ad96-4092-a78f-ba799c180128\") " pod="openshift-marketplace/redhat-operators-ld5bk" Nov 21 12:30:44 crc kubenswrapper[4972]: I1121 12:30:44.077886 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrkq8\" (UniqueName: \"kubernetes.io/projected/0938dc34-ad96-4092-a78f-ba799c180128-kube-api-access-mrkq8\") pod \"redhat-operators-ld5bk\" (UID: \"0938dc34-ad96-4092-a78f-ba799c180128\") " pod="openshift-marketplace/redhat-operators-ld5bk" Nov 21 12:30:44 crc kubenswrapper[4972]: I1121 12:30:44.116326 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ld5bk" Nov 21 12:30:44 crc kubenswrapper[4972]: I1121 12:30:44.716400 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ld5bk"] Nov 21 12:30:45 crc kubenswrapper[4972]: I1121 12:30:45.036529 4972 generic.go:334] "Generic (PLEG): container finished" podID="0938dc34-ad96-4092-a78f-ba799c180128" containerID="3c525ef129f5c61467c54272b41d1d1f726c642056a4c0dccfee3bd8b8910ac7" exitCode=0 Nov 21 12:30:45 crc kubenswrapper[4972]: I1121 12:30:45.036723 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ld5bk" event={"ID":"0938dc34-ad96-4092-a78f-ba799c180128","Type":"ContainerDied","Data":"3c525ef129f5c61467c54272b41d1d1f726c642056a4c0dccfee3bd8b8910ac7"} Nov 21 12:30:45 crc kubenswrapper[4972]: I1121 12:30:45.036895 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ld5bk" event={"ID":"0938dc34-ad96-4092-a78f-ba799c180128","Type":"ContainerStarted","Data":"fec930a8201572cfb3160ece988bd7d51dbd3f0acfa9858b170feac37504b87e"} Nov 21 12:30:45 crc kubenswrapper[4972]: I1121 12:30:45.040848 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 12:30:47 crc kubenswrapper[4972]: I1121 12:30:47.081608 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ld5bk" event={"ID":"0938dc34-ad96-4092-a78f-ba799c180128","Type":"ContainerStarted","Data":"85364d1cf945fa480016946819a36636ab6079b67d650d6779c544b7e8389d9b"} Nov 21 12:30:52 crc kubenswrapper[4972]: I1121 12:30:52.133480 4972 generic.go:334] "Generic (PLEG): container finished" podID="0938dc34-ad96-4092-a78f-ba799c180128" containerID="85364d1cf945fa480016946819a36636ab6079b67d650d6779c544b7e8389d9b" exitCode=0 Nov 21 12:30:52 crc kubenswrapper[4972]: I1121 12:30:52.133556 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-ld5bk" event={"ID":"0938dc34-ad96-4092-a78f-ba799c180128","Type":"ContainerDied","Data":"85364d1cf945fa480016946819a36636ab6079b67d650d6779c544b7e8389d9b"} Nov 21 12:30:53 crc kubenswrapper[4972]: I1121 12:30:53.147819 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ld5bk" event={"ID":"0938dc34-ad96-4092-a78f-ba799c180128","Type":"ContainerStarted","Data":"31d02dc4025c893bdb37c59c00623dae5fb9eb8e27c627af65e4e01da1ec1083"} Nov 21 12:30:53 crc kubenswrapper[4972]: I1121 12:30:53.178818 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ld5bk" podStartSLOduration=2.625642353 podStartE2EDuration="10.178797685s" podCreationTimestamp="2025-11-21 12:30:43 +0000 UTC" firstStartedPulling="2025-11-21 12:30:45.040554075 +0000 UTC m=+10190.149696573" lastFinishedPulling="2025-11-21 12:30:52.593709397 +0000 UTC m=+10197.702851905" observedRunningTime="2025-11-21 12:30:53.165611316 +0000 UTC m=+10198.274753844" watchObservedRunningTime="2025-11-21 12:30:53.178797685 +0000 UTC m=+10198.287940173" Nov 21 12:30:54 crc kubenswrapper[4972]: I1121 12:30:54.116554 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ld5bk" Nov 21 12:30:54 crc kubenswrapper[4972]: I1121 12:30:54.117043 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ld5bk" Nov 21 12:30:55 crc kubenswrapper[4972]: I1121 12:30:55.180100 4972 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ld5bk" podUID="0938dc34-ad96-4092-a78f-ba799c180128" containerName="registry-server" probeResult="failure" output=< Nov 21 12:30:55 crc kubenswrapper[4972]: timeout: failed to connect service ":50051" within 1s Nov 21 12:30:55 crc kubenswrapper[4972]: > Nov 21 12:31:04 crc kubenswrapper[4972]: I1121 12:31:04.185964 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ld5bk" Nov 21 12:31:04 crc kubenswrapper[4972]: I1121 12:31:04.258540 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ld5bk" Nov 21 12:31:04 crc kubenswrapper[4972]: I1121 12:31:04.429708 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ld5bk"] Nov 21 12:31:05 crc kubenswrapper[4972]: I1121 12:31:05.310927 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ld5bk" podUID="0938dc34-ad96-4092-a78f-ba799c180128" containerName="registry-server" containerID="cri-o://31d02dc4025c893bdb37c59c00623dae5fb9eb8e27c627af65e4e01da1ec1083" gracePeriod=2 Nov 21 12:31:06 crc kubenswrapper[4972]: I1121 12:31:06.326821 4972 generic.go:334] "Generic (PLEG): container finished" podID="0938dc34-ad96-4092-a78f-ba799c180128" containerID="31d02dc4025c893bdb37c59c00623dae5fb9eb8e27c627af65e4e01da1ec1083" exitCode=0 Nov 21 12:31:06 crc kubenswrapper[4972]: I1121 12:31:06.327096 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ld5bk" event={"ID":"0938dc34-ad96-4092-a78f-ba799c180128","Type":"ContainerDied","Data":"31d02dc4025c893bdb37c59c00623dae5fb9eb8e27c627af65e4e01da1ec1083"} Nov 21 12:31:06 crc kubenswrapper[4972]: I1121 12:31:06.327482 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-ld5bk" event={"ID":"0938dc34-ad96-4092-a78f-ba799c180128","Type":"ContainerDied","Data":"fec930a8201572cfb3160ece988bd7d51dbd3f0acfa9858b170feac37504b87e"} Nov 21 12:31:06 crc kubenswrapper[4972]: I1121 12:31:06.327502 4972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fec930a8201572cfb3160ece988bd7d51dbd3f0acfa9858b170feac37504b87e" Nov 21 12:31:06 crc kubenswrapper[4972]: I1121 12:31:06.398686 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ld5bk" Nov 21 12:31:06 crc kubenswrapper[4972]: I1121 12:31:06.555689 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0938dc34-ad96-4092-a78f-ba799c180128-catalog-content\") pod \"0938dc34-ad96-4092-a78f-ba799c180128\" (UID: \"0938dc34-ad96-4092-a78f-ba799c180128\") " Nov 21 12:31:06 crc kubenswrapper[4972]: I1121 12:31:06.556141 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0938dc34-ad96-4092-a78f-ba799c180128-utilities\") pod \"0938dc34-ad96-4092-a78f-ba799c180128\" (UID: \"0938dc34-ad96-4092-a78f-ba799c180128\") " Nov 21 12:31:06 crc kubenswrapper[4972]: I1121 12:31:06.556321 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrkq8\" (UniqueName: \"kubernetes.io/projected/0938dc34-ad96-4092-a78f-ba799c180128-kube-api-access-mrkq8\") pod \"0938dc34-ad96-4092-a78f-ba799c180128\" (UID: \"0938dc34-ad96-4092-a78f-ba799c180128\") " Nov 21 12:31:06 crc kubenswrapper[4972]: I1121 12:31:06.556991 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0938dc34-ad96-4092-a78f-ba799c180128-utilities" (OuterVolumeSpecName: "utilities") pod "0938dc34-ad96-4092-a78f-ba799c180128" (UID: "0938dc34-ad96-4092-a78f-ba799c180128"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:31:06 crc kubenswrapper[4972]: I1121 12:31:06.562119 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0938dc34-ad96-4092-a78f-ba799c180128-kube-api-access-mrkq8" (OuterVolumeSpecName: "kube-api-access-mrkq8") pod "0938dc34-ad96-4092-a78f-ba799c180128" (UID: "0938dc34-ad96-4092-a78f-ba799c180128"). InnerVolumeSpecName "kube-api-access-mrkq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:31:06 crc kubenswrapper[4972]: I1121 12:31:06.658933 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0938dc34-ad96-4092-a78f-ba799c180128-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 12:31:06 crc kubenswrapper[4972]: I1121 12:31:06.658974 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrkq8\" (UniqueName: \"kubernetes.io/projected/0938dc34-ad96-4092-a78f-ba799c180128-kube-api-access-mrkq8\") on node \"crc\" DevicePath \"\"" Nov 21 12:31:06 crc kubenswrapper[4972]: I1121 12:31:06.659039 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0938dc34-ad96-4092-a78f-ba799c180128-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0938dc34-ad96-4092-a78f-ba799c180128" (UID: "0938dc34-ad96-4092-a78f-ba799c180128"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:31:06 crc kubenswrapper[4972]: I1121 12:31:06.761112 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0938dc34-ad96-4092-a78f-ba799c180128-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 12:31:07 crc kubenswrapper[4972]: I1121 12:31:07.336378 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ld5bk" Nov 21 12:31:07 crc kubenswrapper[4972]: I1121 12:31:07.394576 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ld5bk"] Nov 21 12:31:07 crc kubenswrapper[4972]: I1121 12:31:07.406698 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ld5bk"] Nov 21 12:31:07 crc kubenswrapper[4972]: I1121 12:31:07.781961 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0938dc34-ad96-4092-a78f-ba799c180128" path="/var/lib/kubelet/pods/0938dc34-ad96-4092-a78f-ba799c180128/volumes" Nov 21 12:31:08 crc kubenswrapper[4972]: E1121 12:31:08.555410 4972 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.176:47314->38.102.83.176:41589: write tcp 38.102.83.176:47314->38.102.83.176:41589: write: broken pipe Nov 21 12:31:40 crc kubenswrapper[4972]: I1121 12:31:40.766526 4972 scope.go:117] "RemoveContainer" containerID="f23c45ba579aaf98f64c9d10885592dd40fb02c3e93dbbd44fe8f565b07fdcd2" Nov 21 12:32:04 crc kubenswrapper[4972]: I1121 12:32:04.038437 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z7g6w"] Nov 21 12:32:04 crc kubenswrapper[4972]: E1121 12:32:04.039884 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0938dc34-ad96-4092-a78f-ba799c180128" containerName="registry-server" Nov 21 12:32:04 crc kubenswrapper[4972]: I1121 12:32:04.039909 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0938dc34-ad96-4092-a78f-ba799c180128" containerName="registry-server" Nov 21 12:32:04 crc kubenswrapper[4972]: E1121 12:32:04.039929 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0938dc34-ad96-4092-a78f-ba799c180128" containerName="extract-content" Nov 21 12:32:04 crc kubenswrapper[4972]: I1121 12:32:04.039940 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0938dc34-ad96-4092-a78f-ba799c180128" containerName="extract-content" Nov 21 12:32:04 crc kubenswrapper[4972]: E1121 12:32:04.040018 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0938dc34-ad96-4092-a78f-ba799c180128" containerName="extract-utilities" Nov 21 12:32:04 crc kubenswrapper[4972]: I1121 12:32:04.040029 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="0938dc34-ad96-4092-a78f-ba799c180128" containerName="extract-utilities" Nov 21 12:32:04 crc kubenswrapper[4972]: I1121 12:32:04.040357 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="0938dc34-ad96-4092-a78f-ba799c180128" containerName="registry-server" Nov 21 12:32:04 crc kubenswrapper[4972]: I1121 12:32:04.042665 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z7g6w" Nov 21 12:32:04 crc kubenswrapper[4972]: I1121 12:32:04.057744 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z7g6w"] Nov 21 12:32:04 crc kubenswrapper[4972]: I1121 12:32:04.161018 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8006fb4b-afce-4baa-a053-24d9f5843e67-catalog-content\") pod \"redhat-marketplace-z7g6w\" (UID: \"8006fb4b-afce-4baa-a053-24d9f5843e67\") " pod="openshift-marketplace/redhat-marketplace-z7g6w" Nov 21 12:32:04 crc kubenswrapper[4972]: I1121 12:32:04.161444 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8006fb4b-afce-4baa-a053-24d9f5843e67-utilities\") pod \"redhat-marketplace-z7g6w\" (UID: \"8006fb4b-afce-4baa-a053-24d9f5843e67\") " pod="openshift-marketplace/redhat-marketplace-z7g6w" Nov 21 12:32:04 crc kubenswrapper[4972]: I1121 12:32:04.161745 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gztqm\" (UniqueName: \"kubernetes.io/projected/8006fb4b-afce-4baa-a053-24d9f5843e67-kube-api-access-gztqm\") pod \"redhat-marketplace-z7g6w\" (UID: \"8006fb4b-afce-4baa-a053-24d9f5843e67\") " pod="openshift-marketplace/redhat-marketplace-z7g6w" Nov 21 12:32:04 crc kubenswrapper[4972]: I1121 12:32:04.263735 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gztqm\" (UniqueName: \"kubernetes.io/projected/8006fb4b-afce-4baa-a053-24d9f5843e67-kube-api-access-gztqm\") pod \"redhat-marketplace-z7g6w\" (UID: \"8006fb4b-afce-4baa-a053-24d9f5843e67\") " pod="openshift-marketplace/redhat-marketplace-z7g6w" Nov 21 12:32:04 crc kubenswrapper[4972]: I1121 12:32:04.265787 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8006fb4b-afce-4baa-a053-24d9f5843e67-catalog-content\") pod \"redhat-marketplace-z7g6w\" (UID: \"8006fb4b-afce-4baa-a053-24d9f5843e67\") " pod="openshift-marketplace/redhat-marketplace-z7g6w" Nov 21 12:32:04 crc kubenswrapper[4972]: I1121 12:32:04.266160 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8006fb4b-afce-4baa-a053-24d9f5843e67-utilities\") pod \"redhat-marketplace-z7g6w\" (UID: \"8006fb4b-afce-4baa-a053-24d9f5843e67\") " pod="openshift-marketplace/redhat-marketplace-z7g6w" Nov 21 12:32:04 crc kubenswrapper[4972]: I1121 12:32:04.266938 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8006fb4b-afce-4baa-a053-24d9f5843e67-utilities\") pod \"redhat-marketplace-z7g6w\" (UID: \"8006fb4b-afce-4baa-a053-24d9f5843e67\") " pod="openshift-marketplace/redhat-marketplace-z7g6w" Nov 21 12:32:04 crc kubenswrapper[4972]: I1121 12:32:04.267476 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8006fb4b-afce-4baa-a053-24d9f5843e67-catalog-content\") pod \"redhat-marketplace-z7g6w\" (UID: \"8006fb4b-afce-4baa-a053-24d9f5843e67\") " pod="openshift-marketplace/redhat-marketplace-z7g6w" Nov 21 12:32:04 crc kubenswrapper[4972]: I1121 12:32:04.655879 4972 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gztqm\" (UniqueName: \"kubernetes.io/projected/8006fb4b-afce-4baa-a053-24d9f5843e67-kube-api-access-gztqm\") pod \"redhat-marketplace-z7g6w\" (UID: \"8006fb4b-afce-4baa-a053-24d9f5843e67\") " pod="openshift-marketplace/redhat-marketplace-z7g6w" Nov 21 12:32:04 crc kubenswrapper[4972]: I1121 12:32:04.675323 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z7g6w" Nov 21 12:32:04 crc kubenswrapper[4972]: I1121 12:32:04.743978 4972 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="83330eac-fef8-4cfc-9e9b-2ff1fea0d559" containerName="galera" probeResult="failure" output="command timed out" Nov 21 12:32:05 crc kubenswrapper[4972]: I1121 12:32:05.240007 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z7g6w"] Nov 21 12:32:05 crc kubenswrapper[4972]: I1121 12:32:05.979236 4972 generic.go:334] "Generic (PLEG): container finished" podID="8006fb4b-afce-4baa-a053-24d9f5843e67" containerID="ad2e262b3d74d215372f17175020899c9f967d33c056f3229ae8a9fbc1841832" exitCode=0 Nov 21 12:32:05 crc kubenswrapper[4972]: I1121 12:32:05.979325 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z7g6w" event={"ID":"8006fb4b-afce-4baa-a053-24d9f5843e67","Type":"ContainerDied","Data":"ad2e262b3d74d215372f17175020899c9f967d33c056f3229ae8a9fbc1841832"} Nov 21 12:32:05 crc kubenswrapper[4972]: I1121 12:32:05.979570 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z7g6w" event={"ID":"8006fb4b-afce-4baa-a053-24d9f5843e67","Type":"ContainerStarted","Data":"2a3f754a63212e93a9e21bc285cd97f06f4817c1cce9ebba0f69e83baf2a6e2e"} Nov 21 12:32:07 crc kubenswrapper[4972]: I1121 12:32:07.997605 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z7g6w" event={"ID":"8006fb4b-afce-4baa-a053-24d9f5843e67","Type":"ContainerStarted","Data":"4fc2fb3e189caa82d307b6dd054fab3c99ecddbc548016ee4e44bfb05ca26f32"} Nov 21 12:32:09 crc kubenswrapper[4972]: I1121 12:32:09.035569 4972 generic.go:334] "Generic (PLEG): container finished" podID="8006fb4b-afce-4baa-a053-24d9f5843e67" containerID="4fc2fb3e189caa82d307b6dd054fab3c99ecddbc548016ee4e44bfb05ca26f32" exitCode=0 Nov 21 12:32:09 crc kubenswrapper[4972]: I1121 12:32:09.035653 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z7g6w" event={"ID":"8006fb4b-afce-4baa-a053-24d9f5843e67","Type":"ContainerDied","Data":"4fc2fb3e189caa82d307b6dd054fab3c99ecddbc548016ee4e44bfb05ca26f32"} Nov 21 12:32:09 crc kubenswrapper[4972]: I1121 12:32:09.632472 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hm8pn"] Nov 21 12:32:09 crc kubenswrapper[4972]: I1121 12:32:09.635116 4972 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hm8pn" Nov 21 12:32:09 crc kubenswrapper[4972]: I1121 12:32:09.676551 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hm8pn"] Nov 21 12:32:09 crc kubenswrapper[4972]: I1121 12:32:09.709870 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/603475a0-5d73-4161-bfc7-95ae8a4824ea-catalog-content\") pod \"community-operators-hm8pn\" (UID: \"603475a0-5d73-4161-bfc7-95ae8a4824ea\") " pod="openshift-marketplace/community-operators-hm8pn" Nov 21 12:32:09 crc kubenswrapper[4972]: I1121 12:32:09.710279 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/603475a0-5d73-4161-bfc7-95ae8a4824ea-utilities\") pod \"community-operators-hm8pn\" (UID: \"603475a0-5d73-4161-bfc7-95ae8a4824ea\") " pod="openshift-marketplace/community-operators-hm8pn" Nov 21 12:32:09 crc kubenswrapper[4972]: I1121 12:32:09.710428 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc7gv\" (UniqueName: \"kubernetes.io/projected/603475a0-5d73-4161-bfc7-95ae8a4824ea-kube-api-access-vc7gv\") pod \"community-operators-hm8pn\" (UID: \"603475a0-5d73-4161-bfc7-95ae8a4824ea\") " pod="openshift-marketplace/community-operators-hm8pn" Nov 21 12:32:09 crc kubenswrapper[4972]: I1121 12:32:09.812711 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/603475a0-5d73-4161-bfc7-95ae8a4824ea-utilities\") pod \"community-operators-hm8pn\" (UID: \"603475a0-5d73-4161-bfc7-95ae8a4824ea\") " pod="openshift-marketplace/community-operators-hm8pn" Nov 21 12:32:09 crc kubenswrapper[4972]: I1121 12:32:09.813015 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc7gv\" (UniqueName: \"kubernetes.io/projected/603475a0-5d73-4161-bfc7-95ae8a4824ea-kube-api-access-vc7gv\") pod \"community-operators-hm8pn\" (UID: \"603475a0-5d73-4161-bfc7-95ae8a4824ea\") " pod="openshift-marketplace/community-operators-hm8pn" Nov 21 12:32:09 crc kubenswrapper[4972]: I1121 12:32:09.813246 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/603475a0-5d73-4161-bfc7-95ae8a4824ea-catalog-content\") pod \"community-operators-hm8pn\" (UID: \"603475a0-5d73-4161-bfc7-95ae8a4824ea\") " pod="openshift-marketplace/community-operators-hm8pn" Nov 21 12:32:09 crc kubenswrapper[4972]: I1121 12:32:09.813650 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/603475a0-5d73-4161-bfc7-95ae8a4824ea-utilities\") pod \"community-operators-hm8pn\" (UID: \"603475a0-5d73-4161-bfc7-95ae8a4824ea\") " pod="openshift-marketplace/community-operators-hm8pn" Nov 21 12:32:09 crc kubenswrapper[4972]: I1121 12:32:09.813733 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/603475a0-5d73-4161-bfc7-95ae8a4824ea-catalog-content\") pod \"community-operators-hm8pn\" (UID: \"603475a0-5d73-4161-bfc7-95ae8a4824ea\") " pod="openshift-marketplace/community-operators-hm8pn" Nov 21 12:32:09 crc kubenswrapper[4972]: I1121 12:32:09.845365 4972 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vc7gv\" (UniqueName: \"kubernetes.io/projected/603475a0-5d73-4161-bfc7-95ae8a4824ea-kube-api-access-vc7gv\") pod \"community-operators-hm8pn\" (UID: \"603475a0-5d73-4161-bfc7-95ae8a4824ea\") " pod="openshift-marketplace/community-operators-hm8pn" Nov 21 12:32:09 crc kubenswrapper[4972]: I1121 12:32:09.976423 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hm8pn" Nov 21 12:32:10 crc kubenswrapper[4972]: I1121 12:32:10.056578 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z7g6w" event={"ID":"8006fb4b-afce-4baa-a053-24d9f5843e67","Type":"ContainerStarted","Data":"56dea2826c959c92a60dd30cdbf9eaf58541ac7e1c3a4675d8e9d8b11af7c416"} Nov 21 12:32:10 crc kubenswrapper[4972]: I1121 12:32:10.080033 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z7g6w" podStartSLOduration=2.5902880169999998 podStartE2EDuration="6.080011205s" podCreationTimestamp="2025-11-21 12:32:04 +0000 UTC" firstStartedPulling="2025-11-21 12:32:05.982147179 +0000 UTC m=+10271.091289667" lastFinishedPulling="2025-11-21 12:32:09.471870357 +0000 UTC m=+10274.581012855" observedRunningTime="2025-11-21 12:32:10.075773513 +0000 UTC m=+10275.184916031" watchObservedRunningTime="2025-11-21 12:32:10.080011205 +0000 UTC m=+10275.189153713" Nov 21 12:32:10 crc kubenswrapper[4972]: I1121 12:32:10.580074 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hm8pn"] Nov 21 12:32:10 crc kubenswrapper[4972]: W1121 12:32:10.590092 4972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod603475a0_5d73_4161_bfc7_95ae8a4824ea.slice/crio-e0caeb004ae862ee4fd69e331b4bb6c7e2ed180f3198f25767c3e8759030394d WatchSource:0}: Error finding container e0caeb004ae862ee4fd69e331b4bb6c7e2ed180f3198f25767c3e8759030394d: Status 404 returned error can't find the container with id e0caeb004ae862ee4fd69e331b4bb6c7e2ed180f3198f25767c3e8759030394d Nov 21 12:32:11 crc kubenswrapper[4972]: I1121 12:32:11.069922 4972 generic.go:334] "Generic (PLEG): container finished" podID="603475a0-5d73-4161-bfc7-95ae8a4824ea" containerID="bb451f60a5699de97b7b175c5c01a7a054f408351560f7ef3ae2a3a6bef7fab0" exitCode=0 Nov 21 12:32:11 crc kubenswrapper[4972]: I1121 12:32:11.070029 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm8pn" event={"ID":"603475a0-5d73-4161-bfc7-95ae8a4824ea","Type":"ContainerDied","Data":"bb451f60a5699de97b7b175c5c01a7a054f408351560f7ef3ae2a3a6bef7fab0"} Nov 21 12:32:11 crc kubenswrapper[4972]: I1121 12:32:11.070455 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm8pn" event={"ID":"603475a0-5d73-4161-bfc7-95ae8a4824ea","Type":"ContainerStarted","Data":"e0caeb004ae862ee4fd69e331b4bb6c7e2ed180f3198f25767c3e8759030394d"} Nov 21 12:32:12 crc kubenswrapper[4972]: I1121 12:32:12.079653 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm8pn" event={"ID":"603475a0-5d73-4161-bfc7-95ae8a4824ea","Type":"ContainerStarted","Data":"8d740fb687461d35cfc3675999c4db5bdc379d1e8dfa68a924a356fd642e6e36"} Nov 21 12:32:14 crc kubenswrapper[4972]: I1121 12:32:14.676778 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-z7g6w" Nov 21 12:32:14 crc kubenswrapper[4972]: I1121 12:32:14.677587 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z7g6w" Nov 21 12:32:14 crc kubenswrapper[4972]: I1121 12:32:14.758111 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z7g6w" Nov 21 12:32:15 crc kubenswrapper[4972]: I1121 12:32:15.180976 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z7g6w" Nov 21 12:32:17 crc kubenswrapper[4972]: I1121 12:32:17.027507 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z7g6w"] Nov 21 12:32:17 crc kubenswrapper[4972]: I1121 12:32:17.139286 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z7g6w" podUID="8006fb4b-afce-4baa-a053-24d9f5843e67" containerName="registry-server" containerID="cri-o://56dea2826c959c92a60dd30cdbf9eaf58541ac7e1c3a4675d8e9d8b11af7c416" gracePeriod=2 Nov 21 12:32:18 crc kubenswrapper[4972]: I1121 12:32:18.158423 4972 generic.go:334] "Generic (PLEG): container finished" podID="8006fb4b-afce-4baa-a053-24d9f5843e67" containerID="56dea2826c959c92a60dd30cdbf9eaf58541ac7e1c3a4675d8e9d8b11af7c416" exitCode=0 Nov 21 12:32:18 crc kubenswrapper[4972]: I1121 12:32:18.158531 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z7g6w" event={"ID":"8006fb4b-afce-4baa-a053-24d9f5843e67","Type":"ContainerDied","Data":"56dea2826c959c92a60dd30cdbf9eaf58541ac7e1c3a4675d8e9d8b11af7c416"} Nov 21 12:32:19 crc kubenswrapper[4972]: I1121 12:32:19.450274 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z7g6w" Nov 21 12:32:19 crc kubenswrapper[4972]: I1121 12:32:19.546073 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gztqm\" (UniqueName: \"kubernetes.io/projected/8006fb4b-afce-4baa-a053-24d9f5843e67-kube-api-access-gztqm\") pod \"8006fb4b-afce-4baa-a053-24d9f5843e67\" (UID: \"8006fb4b-afce-4baa-a053-24d9f5843e67\") " Nov 21 12:32:19 crc kubenswrapper[4972]: I1121 12:32:19.547701 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8006fb4b-afce-4baa-a053-24d9f5843e67-catalog-content\") pod \"8006fb4b-afce-4baa-a053-24d9f5843e67\" (UID: \"8006fb4b-afce-4baa-a053-24d9f5843e67\") " Nov 21 12:32:19 crc kubenswrapper[4972]: I1121 12:32:19.547952 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8006fb4b-afce-4baa-a053-24d9f5843e67-utilities\") pod \"8006fb4b-afce-4baa-a053-24d9f5843e67\" (UID: \"8006fb4b-afce-4baa-a053-24d9f5843e67\") " Nov 21 12:32:19 crc kubenswrapper[4972]: I1121 12:32:19.548933 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8006fb4b-afce-4baa-a053-24d9f5843e67-utilities" (OuterVolumeSpecName: "utilities") pod "8006fb4b-afce-4baa-a053-24d9f5843e67" (UID: "8006fb4b-afce-4baa-a053-24d9f5843e67"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:32:19 crc kubenswrapper[4972]: I1121 12:32:19.552730 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8006fb4b-afce-4baa-a053-24d9f5843e67-kube-api-access-gztqm" (OuterVolumeSpecName: "kube-api-access-gztqm") pod "8006fb4b-afce-4baa-a053-24d9f5843e67" (UID: "8006fb4b-afce-4baa-a053-24d9f5843e67"). InnerVolumeSpecName "kube-api-access-gztqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:32:19 crc kubenswrapper[4972]: I1121 12:32:19.585226 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8006fb4b-afce-4baa-a053-24d9f5843e67-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8006fb4b-afce-4baa-a053-24d9f5843e67" (UID: "8006fb4b-afce-4baa-a053-24d9f5843e67"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:32:19 crc kubenswrapper[4972]: I1121 12:32:19.650630 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gztqm\" (UniqueName: \"kubernetes.io/projected/8006fb4b-afce-4baa-a053-24d9f5843e67-kube-api-access-gztqm\") on node \"crc\" DevicePath \"\"" Nov 21 12:32:19 crc kubenswrapper[4972]: I1121 12:32:19.650673 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8006fb4b-afce-4baa-a053-24d9f5843e67-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 12:32:19 crc kubenswrapper[4972]: I1121 12:32:19.650688 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8006fb4b-afce-4baa-a053-24d9f5843e67-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 12:32:20 crc kubenswrapper[4972]: I1121 12:32:20.182771 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z7g6w" event={"ID":"8006fb4b-afce-4baa-a053-24d9f5843e67","Type":"ContainerDied","Data":"2a3f754a63212e93a9e21bc285cd97f06f4817c1cce9ebba0f69e83baf2a6e2e"} Nov 21 12:32:20 crc kubenswrapper[4972]: I1121 12:32:20.182853 4972 scope.go:117] "RemoveContainer" containerID="56dea2826c959c92a60dd30cdbf9eaf58541ac7e1c3a4675d8e9d8b11af7c416" Nov 21 12:32:20 crc kubenswrapper[4972]: I1121 12:32:20.182973 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z7g6w" Nov 21 12:32:20 crc kubenswrapper[4972]: I1121 12:32:20.210001 4972 scope.go:117] "RemoveContainer" containerID="4fc2fb3e189caa82d307b6dd054fab3c99ecddbc548016ee4e44bfb05ca26f32" Nov 21 12:32:20 crc kubenswrapper[4972]: I1121 12:32:20.214218 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z7g6w"] Nov 21 12:32:20 crc kubenswrapper[4972]: I1121 12:32:20.232634 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z7g6w"] Nov 21 12:32:20 crc kubenswrapper[4972]: I1121 12:32:20.239690 4972 scope.go:117] "RemoveContainer" containerID="ad2e262b3d74d215372f17175020899c9f967d33c056f3229ae8a9fbc1841832" Nov 21 12:32:21 crc kubenswrapper[4972]: I1121 12:32:21.194386 4972 generic.go:334] "Generic (PLEG): container finished" podID="603475a0-5d73-4161-bfc7-95ae8a4824ea" containerID="8d740fb687461d35cfc3675999c4db5bdc379d1e8dfa68a924a356fd642e6e36" exitCode=0 Nov 21 12:32:21 crc kubenswrapper[4972]: I1121 12:32:21.194436 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm8pn" event={"ID":"603475a0-5d73-4161-bfc7-95ae8a4824ea","Type":"ContainerDied","Data":"8d740fb687461d35cfc3675999c4db5bdc379d1e8dfa68a924a356fd642e6e36"} Nov 21 12:32:21 crc kubenswrapper[4972]: I1121 12:32:21.775132 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8006fb4b-afce-4baa-a053-24d9f5843e67" path="/var/lib/kubelet/pods/8006fb4b-afce-4baa-a053-24d9f5843e67/volumes" Nov 21 12:32:22 crc kubenswrapper[4972]: I1121 12:32:22.211020 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm8pn" event={"ID":"603475a0-5d73-4161-bfc7-95ae8a4824ea","Type":"ContainerStarted","Data":"ae74c2bbc62ddaa89892ef51883350f47418aba99f162b8fa760abac47e8fec3"} Nov 21 12:32:22 crc kubenswrapper[4972]: I1121 12:32:22.240224 4972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hm8pn" podStartSLOduration=2.408570903 podStartE2EDuration="13.240209661s" podCreationTimestamp="2025-11-21 12:32:09 +0000 UTC" firstStartedPulling="2025-11-21 12:32:11.071550233 +0000 UTC m=+10276.180692731" lastFinishedPulling="2025-11-21 12:32:21.903188991 +0000 UTC m=+10287.012331489" observedRunningTime="2025-11-21 12:32:22.228961234 +0000 UTC m=+10287.338103762" watchObservedRunningTime="2025-11-21 12:32:22.240209661 +0000 UTC m=+10287.349352169" Nov 21 12:32:29 crc kubenswrapper[4972]: I1121 12:32:29.976556 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hm8pn" Nov 21 12:32:29 crc kubenswrapper[4972]: I1121 12:32:29.977574 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hm8pn" Nov 21 12:32:30 crc kubenswrapper[4972]: I1121 12:32:30.026744 4972 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hm8pn" Nov 21 12:32:30 crc kubenswrapper[4972]: I1121 12:32:30.402787 4972 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hm8pn" Nov 21 12:32:30 crc kubenswrapper[4972]: I1121 12:32:30.457075 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hm8pn"] Nov 21 12:32:32 crc kubenswrapper[4972]: I1121 
12:32:32.354971 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hm8pn" podUID="603475a0-5d73-4161-bfc7-95ae8a4824ea" containerName="registry-server" containerID="cri-o://ae74c2bbc62ddaa89892ef51883350f47418aba99f162b8fa760abac47e8fec3" gracePeriod=2 Nov 21 12:32:32 crc kubenswrapper[4972]: I1121 12:32:32.897938 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hm8pn" Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.064222 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/603475a0-5d73-4161-bfc7-95ae8a4824ea-utilities\") pod \"603475a0-5d73-4161-bfc7-95ae8a4824ea\" (UID: \"603475a0-5d73-4161-bfc7-95ae8a4824ea\") " Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.064278 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc7gv\" (UniqueName: \"kubernetes.io/projected/603475a0-5d73-4161-bfc7-95ae8a4824ea-kube-api-access-vc7gv\") pod \"603475a0-5d73-4161-bfc7-95ae8a4824ea\" (UID: \"603475a0-5d73-4161-bfc7-95ae8a4824ea\") " Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.064336 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/603475a0-5d73-4161-bfc7-95ae8a4824ea-catalog-content\") pod \"603475a0-5d73-4161-bfc7-95ae8a4824ea\" (UID: \"603475a0-5d73-4161-bfc7-95ae8a4824ea\") " Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.065219 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/603475a0-5d73-4161-bfc7-95ae8a4824ea-utilities" (OuterVolumeSpecName: "utilities") pod "603475a0-5d73-4161-bfc7-95ae8a4824ea" (UID: "603475a0-5d73-4161-bfc7-95ae8a4824ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.075928 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/603475a0-5d73-4161-bfc7-95ae8a4824ea-kube-api-access-vc7gv" (OuterVolumeSpecName: "kube-api-access-vc7gv") pod "603475a0-5d73-4161-bfc7-95ae8a4824ea" (UID: "603475a0-5d73-4161-bfc7-95ae8a4824ea"). InnerVolumeSpecName "kube-api-access-vc7gv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.114361 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/603475a0-5d73-4161-bfc7-95ae8a4824ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "603475a0-5d73-4161-bfc7-95ae8a4824ea" (UID: "603475a0-5d73-4161-bfc7-95ae8a4824ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.167439 4972 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/603475a0-5d73-4161-bfc7-95ae8a4824ea-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.167467 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc7gv\" (UniqueName: \"kubernetes.io/projected/603475a0-5d73-4161-bfc7-95ae8a4824ea-kube-api-access-vc7gv\") on node \"crc\" DevicePath \"\"" Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.167477 4972 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/603475a0-5d73-4161-bfc7-95ae8a4824ea-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.364210 4972 generic.go:334] "Generic (PLEG): container finished" podID="603475a0-5d73-4161-bfc7-95ae8a4824ea" containerID="ae74c2bbc62ddaa89892ef51883350f47418aba99f162b8fa760abac47e8fec3" exitCode=0 Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.364981 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm8pn" event={"ID":"603475a0-5d73-4161-bfc7-95ae8a4824ea","Type":"ContainerDied","Data":"ae74c2bbc62ddaa89892ef51883350f47418aba99f162b8fa760abac47e8fec3"} Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.365105 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm8pn" event={"ID":"603475a0-5d73-4161-bfc7-95ae8a4824ea","Type":"ContainerDied","Data":"e0caeb004ae862ee4fd69e331b4bb6c7e2ed180f3198f25767c3e8759030394d"} Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.365293 4972 scope.go:117] "RemoveContainer" containerID="ae74c2bbc62ddaa89892ef51883350f47418aba99f162b8fa760abac47e8fec3" Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.365462 4972 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hm8pn" Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.399472 4972 scope.go:117] "RemoveContainer" containerID="8d740fb687461d35cfc3675999c4db5bdc379d1e8dfa68a924a356fd642e6e36" Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.404517 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hm8pn"] Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.413982 4972 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hm8pn"] Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.438329 4972 scope.go:117] "RemoveContainer" containerID="bb451f60a5699de97b7b175c5c01a7a054f408351560f7ef3ae2a3a6bef7fab0" Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.488924 4972 scope.go:117] "RemoveContainer" containerID="ae74c2bbc62ddaa89892ef51883350f47418aba99f162b8fa760abac47e8fec3" Nov 21 12:32:33 crc kubenswrapper[4972]: E1121 12:32:33.489344 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae74c2bbc62ddaa89892ef51883350f47418aba99f162b8fa760abac47e8fec3\": container with ID starting with ae74c2bbc62ddaa89892ef51883350f47418aba99f162b8fa760abac47e8fec3 not found: ID does not exist" containerID="ae74c2bbc62ddaa89892ef51883350f47418aba99f162b8fa760abac47e8fec3" Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.489375 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae74c2bbc62ddaa89892ef51883350f47418aba99f162b8fa760abac47e8fec3"} err="failed to get container status \"ae74c2bbc62ddaa89892ef51883350f47418aba99f162b8fa760abac47e8fec3\": rpc error: code = NotFound desc = could not find container \"ae74c2bbc62ddaa89892ef51883350f47418aba99f162b8fa760abac47e8fec3\": container with ID starting with ae74c2bbc62ddaa89892ef51883350f47418aba99f162b8fa760abac47e8fec3 not found: ID does not exist" Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.489395 4972 scope.go:117] "RemoveContainer" containerID="8d740fb687461d35cfc3675999c4db5bdc379d1e8dfa68a924a356fd642e6e36" Nov 21 12:32:33 crc kubenswrapper[4972]: E1121 12:32:33.489769 4972 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d740fb687461d35cfc3675999c4db5bdc379d1e8dfa68a924a356fd642e6e36\": container with ID starting with 8d740fb687461d35cfc3675999c4db5bdc379d1e8dfa68a924a356fd642e6e36 not found: ID does not exist" containerID="8d740fb687461d35cfc3675999c4db5bdc379d1e8dfa68a924a356fd642e6e36" Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.489801 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d740fb687461d35cfc3675999c4db5bdc379d1e8dfa68a924a356fd642e6e36"} err="failed to get container status \"8d740fb687461d35cfc3675999c4db5bdc379d1e8dfa68a924a356fd642e6e36\": rpc error: code = NotFound desc = could not find container \"8d740fb687461d35cfc3675999c4db5bdc379d1e8dfa68a924a356fd642e6e36\": container with ID starting with 8d740fb687461d35cfc3675999c4db5bdc379d1e8dfa68a924a356fd642e6e36 not found: ID does not exist" Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.489819 4972 scope.go:117] "RemoveContainer" containerID="bb451f60a5699de97b7b175c5c01a7a054f408351560f7ef3ae2a3a6bef7fab0" Nov 21 12:32:33 crc kubenswrapper[4972]: E1121 12:32:33.490071 4972 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bb451f60a5699de97b7b175c5c01a7a054f408351560f7ef3ae2a3a6bef7fab0\": container with ID starting with bb451f60a5699de97b7b175c5c01a7a054f408351560f7ef3ae2a3a6bef7fab0 not found: ID does not exist" containerID="bb451f60a5699de97b7b175c5c01a7a054f408351560f7ef3ae2a3a6bef7fab0" Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.490173 4972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb451f60a5699de97b7b175c5c01a7a054f408351560f7ef3ae2a3a6bef7fab0"} err="failed to get container status \"bb451f60a5699de97b7b175c5c01a7a054f408351560f7ef3ae2a3a6bef7fab0\": rpc error: code = NotFound desc = could not find container \"bb451f60a5699de97b7b175c5c01a7a054f408351560f7ef3ae2a3a6bef7fab0\": container with ID starting with bb451f60a5699de97b7b175c5c01a7a054f408351560f7ef3ae2a3a6bef7fab0 not found: ID does not exist" Nov 21 12:32:33 crc kubenswrapper[4972]: I1121 12:32:33.777134 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="603475a0-5d73-4161-bfc7-95ae8a4824ea" path="/var/lib/kubelet/pods/603475a0-5d73-4161-bfc7-95ae8a4824ea/volumes" Nov 21 12:32:56 crc kubenswrapper[4972]: I1121 12:32:56.178548 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 12:32:56 crc kubenswrapper[4972]: I1121 12:32:56.179350 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 12:33:03 crc kubenswrapper[4972]: I1121 12:33:03.695754 4972 generic.go:334] "Generic (PLEG): container finished" podID="ff1b4df7-ae09-4998-a055-37f9e720cf27" containerID="3c6800bdd755e014bd5759e9e0b2dcfb90495aaaa848c0090c696378c4f7d087" exitCode=0 Nov 21 12:33:03 crc kubenswrapper[4972]: I1121 12:33:03.695883 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8czjn/must-gather-s4k4l" event={"ID":"ff1b4df7-ae09-4998-a055-37f9e720cf27","Type":"ContainerDied","Data":"3c6800bdd755e014bd5759e9e0b2dcfb90495aaaa848c0090c696378c4f7d087"} Nov 21 12:33:03 crc kubenswrapper[4972]: I1121 12:33:03.697099 4972 scope.go:117] "RemoveContainer" containerID="3c6800bdd755e014bd5759e9e0b2dcfb90495aaaa848c0090c696378c4f7d087" Nov 21 12:33:04 crc kubenswrapper[4972]: I1121 12:33:04.431796 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8czjn_must-gather-s4k4l_ff1b4df7-ae09-4998-a055-37f9e720cf27/gather/0.log" Nov 21 12:33:13 crc kubenswrapper[4972]: I1121 12:33:13.323825 4972 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8czjn/must-gather-s4k4l"] Nov 21 12:33:13 crc kubenswrapper[4972]: I1121 12:33:13.324921 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-8czjn/must-gather-s4k4l" podUID="ff1b4df7-ae09-4998-a055-37f9e720cf27" containerName="copy" containerID="cri-o://887ec4dfba30b660965cc14aca57290072b9a046eb2f888187e967ec8420481b" gracePeriod=2 Nov 21 12:33:13 crc kubenswrapper[4972]: I1121 12:33:13.349473 4972 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-must-gather-8czjn/must-gather-s4k4l"] Nov 21 12:33:13 crc kubenswrapper[4972]: I1121 12:33:13.829257 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8czjn_must-gather-s4k4l_ff1b4df7-ae09-4998-a055-37f9e720cf27/copy/0.log" Nov 21 12:33:13 crc kubenswrapper[4972]: I1121 12:33:13.830074 4972 generic.go:334] "Generic (PLEG): container finished" podID="ff1b4df7-ae09-4998-a055-37f9e720cf27" containerID="887ec4dfba30b660965cc14aca57290072b9a046eb2f888187e967ec8420481b" exitCode=143 Nov 21 12:33:14 crc kubenswrapper[4972]: I1121 12:33:14.009683 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8czjn_must-gather-s4k4l_ff1b4df7-ae09-4998-a055-37f9e720cf27/copy/0.log" Nov 21 12:33:14 crc kubenswrapper[4972]: I1121 12:33:14.010162 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8czjn/must-gather-s4k4l" Nov 21 12:33:14 crc kubenswrapper[4972]: I1121 12:33:14.105297 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m8vd\" (UniqueName: \"kubernetes.io/projected/ff1b4df7-ae09-4998-a055-37f9e720cf27-kube-api-access-6m8vd\") pod \"ff1b4df7-ae09-4998-a055-37f9e720cf27\" (UID: \"ff1b4df7-ae09-4998-a055-37f9e720cf27\") " Nov 21 12:33:14 crc kubenswrapper[4972]: I1121 12:33:14.105479 4972 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ff1b4df7-ae09-4998-a055-37f9e720cf27-must-gather-output\") pod \"ff1b4df7-ae09-4998-a055-37f9e720cf27\" (UID: \"ff1b4df7-ae09-4998-a055-37f9e720cf27\") " Nov 21 12:33:14 crc kubenswrapper[4972]: I1121 12:33:14.121377 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff1b4df7-ae09-4998-a055-37f9e720cf27-kube-api-access-6m8vd" (OuterVolumeSpecName: "kube-api-access-6m8vd") pod "ff1b4df7-ae09-4998-a055-37f9e720cf27" (UID: "ff1b4df7-ae09-4998-a055-37f9e720cf27"). InnerVolumeSpecName "kube-api-access-6m8vd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 12:33:14 crc kubenswrapper[4972]: I1121 12:33:14.208403 4972 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m8vd\" (UniqueName: \"kubernetes.io/projected/ff1b4df7-ae09-4998-a055-37f9e720cf27-kube-api-access-6m8vd\") on node \"crc\" DevicePath \"\"" Nov 21 12:33:14 crc kubenswrapper[4972]: I1121 12:33:14.405795 4972 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff1b4df7-ae09-4998-a055-37f9e720cf27-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ff1b4df7-ae09-4998-a055-37f9e720cf27" (UID: "ff1b4df7-ae09-4998-a055-37f9e720cf27"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 12:33:14 crc kubenswrapper[4972]: I1121 12:33:14.427275 4972 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ff1b4df7-ae09-4998-a055-37f9e720cf27-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 21 12:33:14 crc kubenswrapper[4972]: I1121 12:33:14.846558 4972 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8czjn_must-gather-s4k4l_ff1b4df7-ae09-4998-a055-37f9e720cf27/copy/0.log" Nov 21 12:33:14 crc kubenswrapper[4972]: I1121 12:33:14.847281 4972 scope.go:117] "RemoveContainer" containerID="887ec4dfba30b660965cc14aca57290072b9a046eb2f888187e967ec8420481b" Nov 21 12:33:14 crc kubenswrapper[4972]: I1121 12:33:14.847333 4972 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8czjn/must-gather-s4k4l" Nov 21 12:33:14 crc kubenswrapper[4972]: I1121 12:33:14.903880 4972 scope.go:117] "RemoveContainer" containerID="3c6800bdd755e014bd5759e9e0b2dcfb90495aaaa848c0090c696378c4f7d087" Nov 21 12:33:15 crc kubenswrapper[4972]: I1121 12:33:15.772095 4972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff1b4df7-ae09-4998-a055-37f9e720cf27" path="/var/lib/kubelet/pods/ff1b4df7-ae09-4998-a055-37f9e720cf27/volumes" Nov 21 12:33:26 crc kubenswrapper[4972]: I1121 12:33:26.178600 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 12:33:26 crc kubenswrapper[4972]: I1121 12:33:26.179434 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 12:33:56 crc kubenswrapper[4972]: I1121 12:33:56.178484 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 12:33:56 crc kubenswrapper[4972]: I1121 12:33:56.179038 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 12:33:56 crc kubenswrapper[4972]: I1121 12:33:56.179081 4972 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" Nov 21 12:33:56 crc kubenswrapper[4972]: I1121 12:33:56.179549 4972 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99df28f1f738cddd09769c2aa4ce824fee9a646c6ffeb05938958550f2dc3b19"} pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 12:33:56 crc kubenswrapper[4972]: I1121 
12:33:56.179599 4972 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" containerID="cri-o://99df28f1f738cddd09769c2aa4ce824fee9a646c6ffeb05938958550f2dc3b19" gracePeriod=600 Nov 21 12:33:57 crc kubenswrapper[4972]: I1121 12:33:57.288427 4972 generic.go:334] "Generic (PLEG): container finished" podID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerID="99df28f1f738cddd09769c2aa4ce824fee9a646c6ffeb05938958550f2dc3b19" exitCode=0 Nov 21 12:33:57 crc kubenswrapper[4972]: I1121 12:33:57.288972 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerDied","Data":"99df28f1f738cddd09769c2aa4ce824fee9a646c6ffeb05938958550f2dc3b19"} Nov 21 12:33:57 crc kubenswrapper[4972]: I1121 12:33:57.289006 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" event={"ID":"ec41c003-c1ce-4c2f-8eed-62ff2974cd8a","Type":"ContainerStarted","Data":"9ed0cc4f17c2a4f0ac1c989748d7bca94f77258cb1abb9318a125f088d8883ad"} Nov 21 12:33:57 crc kubenswrapper[4972]: I1121 12:33:57.289027 4972 scope.go:117] "RemoveContainer" containerID="62358e04047854db13e8cdddd529ec57da548e7926ac8d475627216563bca0cb" Nov 21 12:35:56 crc kubenswrapper[4972]: I1121 12:35:56.179399 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 12:35:56 crc kubenswrapper[4972]: I1121 12:35:56.180181 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 12:36:26 crc kubenswrapper[4972]: I1121 12:36:26.179166 4972 patch_prober.go:28] interesting pod/machine-config-daemon-9l6cj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 12:36:26 crc kubenswrapper[4972]: I1121 12:36:26.181545 4972 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9l6cj" podUID="ec41c003-c1ce-4c2f-8eed-62ff2974cd8a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 12:36:26 crc kubenswrapper[4972]: I1121 12:36:26.878217 4972 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-crc8b"] Nov 21 12:36:26 crc kubenswrapper[4972]: E1121 12:36:26.879795 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1b4df7-ae09-4998-a055-37f9e720cf27" containerName="gather" Nov 21 12:36:26 crc kubenswrapper[4972]: I1121 12:36:26.879968 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1b4df7-ae09-4998-a055-37f9e720cf27" containerName="gather" Nov 21 12:36:26 crc kubenswrapper[4972]: E1121 
12:36:26.880358 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8006fb4b-afce-4baa-a053-24d9f5843e67" containerName="extract-utilities" Nov 21 12:36:26 crc kubenswrapper[4972]: I1121 12:36:26.880471 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8006fb4b-afce-4baa-a053-24d9f5843e67" containerName="extract-utilities" Nov 21 12:36:26 crc kubenswrapper[4972]: E1121 12:36:26.880582 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603475a0-5d73-4161-bfc7-95ae8a4824ea" containerName="extract-utilities" Nov 21 12:36:26 crc kubenswrapper[4972]: I1121 12:36:26.880686 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="603475a0-5d73-4161-bfc7-95ae8a4824ea" containerName="extract-utilities" Nov 21 12:36:26 crc kubenswrapper[4972]: E1121 12:36:26.880874 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603475a0-5d73-4161-bfc7-95ae8a4824ea" containerName="registry-server" Nov 21 12:36:26 crc kubenswrapper[4972]: I1121 12:36:26.881037 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="603475a0-5d73-4161-bfc7-95ae8a4824ea" containerName="registry-server" Nov 21 12:36:26 crc kubenswrapper[4972]: E1121 12:36:26.881209 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603475a0-5d73-4161-bfc7-95ae8a4824ea" containerName="extract-content" Nov 21 12:36:26 crc kubenswrapper[4972]: I1121 12:36:26.881364 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="603475a0-5d73-4161-bfc7-95ae8a4824ea" containerName="extract-content" Nov 21 12:36:26 crc kubenswrapper[4972]: E1121 12:36:26.881513 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8006fb4b-afce-4baa-a053-24d9f5843e67" containerName="registry-server" Nov 21 12:36:26 crc kubenswrapper[4972]: I1121 12:36:26.881655 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8006fb4b-afce-4baa-a053-24d9f5843e67" containerName="registry-server" Nov 21 12:36:26 crc kubenswrapper[4972]: E1121 12:36:26.881815 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8006fb4b-afce-4baa-a053-24d9f5843e67" containerName="extract-content" Nov 21 12:36:26 crc kubenswrapper[4972]: I1121 12:36:26.881988 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="8006fb4b-afce-4baa-a053-24d9f5843e67" containerName="extract-content" Nov 21 12:36:26 crc kubenswrapper[4972]: E1121 12:36:26.882150 4972 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1b4df7-ae09-4998-a055-37f9e720cf27" containerName="copy" Nov 21 12:36:26 crc kubenswrapper[4972]: I1121 12:36:26.882263 4972 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1b4df7-ae09-4998-a055-37f9e720cf27" containerName="copy" Nov 21 12:36:26 crc kubenswrapper[4972]: I1121 12:36:26.882745 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="603475a0-5d73-4161-bfc7-95ae8a4824ea" containerName="registry-server" Nov 21 12:36:26 crc kubenswrapper[4972]: I1121 12:36:26.882901 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="8006fb4b-afce-4baa-a053-24d9f5843e67" containerName="registry-server" Nov 21 12:36:26 crc kubenswrapper[4972]: I1121 12:36:26.883032 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff1b4df7-ae09-4998-a055-37f9e720cf27" containerName="copy" Nov 21 12:36:26 crc kubenswrapper[4972]: I1121 12:36:26.883245 4972 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff1b4df7-ae09-4998-a055-37f9e720cf27" containerName="gather" Nov 21 12:36:26 crc kubenswrapper[4972]: 
I1121 12:36:26.886889 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-crc8b" Nov 21 12:36:26 crc kubenswrapper[4972]: I1121 12:36:26.899853 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-crc8b"] Nov 21 12:36:26 crc kubenswrapper[4972]: I1121 12:36:26.980539 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rkhs\" (UniqueName: \"kubernetes.io/projected/2c8964e2-c355-4070-bbf3-c71f96d833b1-kube-api-access-4rkhs\") pod \"certified-operators-crc8b\" (UID: \"2c8964e2-c355-4070-bbf3-c71f96d833b1\") " pod="openshift-marketplace/certified-operators-crc8b" Nov 21 12:36:26 crc kubenswrapper[4972]: I1121 12:36:26.980935 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c8964e2-c355-4070-bbf3-c71f96d833b1-utilities\") pod \"certified-operators-crc8b\" (UID: \"2c8964e2-c355-4070-bbf3-c71f96d833b1\") " pod="openshift-marketplace/certified-operators-crc8b" Nov 21 12:36:26 crc kubenswrapper[4972]: I1121 12:36:26.981070 4972 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c8964e2-c355-4070-bbf3-c71f96d833b1-catalog-content\") pod \"certified-operators-crc8b\" (UID: \"2c8964e2-c355-4070-bbf3-c71f96d833b1\") " pod="openshift-marketplace/certified-operators-crc8b" Nov 21 12:36:27 crc kubenswrapper[4972]: I1121 12:36:27.082866 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rkhs\" (UniqueName: \"kubernetes.io/projected/2c8964e2-c355-4070-bbf3-c71f96d833b1-kube-api-access-4rkhs\") pod \"certified-operators-crc8b\" (UID: \"2c8964e2-c355-4070-bbf3-c71f96d833b1\") " pod="openshift-marketplace/certified-operators-crc8b" Nov 21 12:36:27 crc kubenswrapper[4972]: I1121 12:36:27.082969 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c8964e2-c355-4070-bbf3-c71f96d833b1-utilities\") pod \"certified-operators-crc8b\" (UID: \"2c8964e2-c355-4070-bbf3-c71f96d833b1\") " pod="openshift-marketplace/certified-operators-crc8b" Nov 21 12:36:27 crc kubenswrapper[4972]: I1121 12:36:27.083098 4972 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c8964e2-c355-4070-bbf3-c71f96d833b1-catalog-content\") pod \"certified-operators-crc8b\" (UID: \"2c8964e2-c355-4070-bbf3-c71f96d833b1\") " pod="openshift-marketplace/certified-operators-crc8b" Nov 21 12:36:27 crc kubenswrapper[4972]: I1121 12:36:27.083517 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c8964e2-c355-4070-bbf3-c71f96d833b1-utilities\") pod \"certified-operators-crc8b\" (UID: \"2c8964e2-c355-4070-bbf3-c71f96d833b1\") " pod="openshift-marketplace/certified-operators-crc8b" Nov 21 12:36:27 crc kubenswrapper[4972]: I1121 12:36:27.083553 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c8964e2-c355-4070-bbf3-c71f96d833b1-catalog-content\") pod \"certified-operators-crc8b\" (UID: \"2c8964e2-c355-4070-bbf3-c71f96d833b1\") " pod="openshift-marketplace/certified-operators-crc8b" Nov 21 12:36:27 crc 
kubenswrapper[4972]: I1121 12:36:27.105414 4972 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rkhs\" (UniqueName: \"kubernetes.io/projected/2c8964e2-c355-4070-bbf3-c71f96d833b1-kube-api-access-4rkhs\") pod \"certified-operators-crc8b\" (UID: \"2c8964e2-c355-4070-bbf3-c71f96d833b1\") " pod="openshift-marketplace/certified-operators-crc8b" Nov 21 12:36:27 crc kubenswrapper[4972]: I1121 12:36:27.217877 4972 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-crc8b" Nov 21 12:36:27 crc kubenswrapper[4972]: I1121 12:36:27.829419 4972 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-crc8b"] Nov 21 12:36:27 crc kubenswrapper[4972]: I1121 12:36:27.986927 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crc8b" event={"ID":"2c8964e2-c355-4070-bbf3-c71f96d833b1","Type":"ContainerStarted","Data":"75c7d03d1cba77cfe4ef0e49fedbf829ec9370880974304df5112508b0d4b39f"} Nov 21 12:36:28 crc kubenswrapper[4972]: I1121 12:36:28.999266 4972 generic.go:334] "Generic (PLEG): container finished" podID="2c8964e2-c355-4070-bbf3-c71f96d833b1" containerID="8b7fe271edaaa9b3b369e68f4810fb4126c1646028638a6da4ffd2c76514f705" exitCode=0 Nov 21 12:36:29 crc kubenswrapper[4972]: I1121 12:36:28.999395 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crc8b" event={"ID":"2c8964e2-c355-4070-bbf3-c71f96d833b1","Type":"ContainerDied","Data":"8b7fe271edaaa9b3b369e68f4810fb4126c1646028638a6da4ffd2c76514f705"} Nov 21 12:36:29 crc kubenswrapper[4972]: I1121 12:36:29.001484 4972 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 12:36:31 crc kubenswrapper[4972]: I1121 12:36:31.029650 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crc8b" event={"ID":"2c8964e2-c355-4070-bbf3-c71f96d833b1","Type":"ContainerStarted","Data":"7e4a0ed01ed18b8e10849775492bedd34572443df42cbffd21a91def11c85b31"} Nov 21 12:36:32 crc kubenswrapper[4972]: I1121 12:36:32.040114 4972 generic.go:334] "Generic (PLEG): container finished" podID="2c8964e2-c355-4070-bbf3-c71f96d833b1" containerID="7e4a0ed01ed18b8e10849775492bedd34572443df42cbffd21a91def11c85b31" exitCode=0 Nov 21 12:36:32 crc kubenswrapper[4972]: I1121 12:36:32.040175 4972 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crc8b" event={"ID":"2c8964e2-c355-4070-bbf3-c71f96d833b1","Type":"ContainerDied","Data":"7e4a0ed01ed18b8e10849775492bedd34572443df42cbffd21a91def11c85b31"}